New AI Image Generators: When Chaos Overrides Control

In the ever-evolving world of artificial intelligence (AI), companies continue to push the boundaries of what the technology can achieve. But no matter how advanced AI models become, they cannot stop humans from using them for chaotic purposes. That was evident recently when Meta’s and Microsoft’s AI image generators went viral for their responses to inappropriate and controversial prompts.

Key Takeaway

The AI industry continues to grapple with the challenge of preventing misuse and chaos in AI-generated content. Despite efforts to implement content filters, users have found ways to bypass them, generating inappropriate and offensive imagery using AI image generators. The incidents underscore the need for stronger guardrails and greater awareness of the potential for misuse in AI technology.

Meta’s AI Stickers invite chaos

Meta, the company behind Facebook and Instagram, introduced AI-generated chat stickers built on its Llama 2 AI models. Designed to enhance expression in chats, the feature lets users type a description of the sticker they want, which the AI model then generates. Instead of using the stickers as intended, however, users took delight in testing the AI’s boundaries by generating explicit and cursed stickers.

From Kirby with boobs to Karl Marx with boobs, and even a pregnant Sonic the Hedgehog, the stickers let users create absurd and inappropriate images. Meta attempted to block certain words deemed inappropriate, but users quickly bypassed the filters by typing misspelled variations of the blocked words instead. The models’ familiar struggle with rendering realistic human hands only added to the chaos.
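
To see why misspellings defeat this kind of filter, consider a minimal sketch of exact-match word blocking. Everything here, from the blocklist contents to the function name, is a hypothetical illustration, not Meta’s actual implementation:

```python
# A naive blocklist filter: rejects a prompt only if a token exactly
# matches a blocked word. Blocklist contents are hypothetical.
BLOCKED_WORDS = {"boobs", "nude"}

def passes_filter(prompt: str) -> bool:
    """Return True if no token in the prompt exactly matches a blocked word."""
    tokens = prompt.lower().split()
    return not any(token in BLOCKED_WORDS for token in tokens)

print(passes_filter("kirby with boobs"))   # False: exact match is caught
print(passes_filter("kirby with b00bs"))   # True: the misspelling slips through
```

Because the check is purely lexical, any variation the filter’s authors did not anticipate passes untouched, which is exactly the loophole users exploited.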

Microsoft’s Bing Image Creator faces similar challenges

Microsoft’s integration of OpenAI’s DALL-E into Bing Image Creator ran into a similar problem. Although Microsoft added guardrails to prevent the generation of problematic images, users found ways to circumvent the content filters. The tool was used to produce images depicting beloved fictional characters piloting planes that crashed into the Twin Towers on 9/11, in direct violation of Microsoft’s content policy.

Even after Microsoft blocked specific phrases related to terrorism and violence, users kept bypassing the content filters, using creatively rephrased prompts to produce the same offensive and absurd imagery.
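
A hypothetical sketch suggests why phrase blocking is just as brittle: even a filter hardened to normalize character-substitution tricks still passes any paraphrase that avoids the listed phrases. The blocklist, substitution table, and helper names below are invented for illustration and have nothing to do with Microsoft’s actual guardrails:

```python
import re

# Hypothetical hardened filter: undoes common leetspeak substitutions
# before checking a phrase blocklist. All contents are invented.
SUBSTITUTIONS = {"0": "o", "1": "i", "3": "e", "4": "a", "@": "a"}
BLOCKED_PHRASES = ["twin towers"]

def normalize(prompt: str) -> str:
    """Lowercase, undo simple character substitutions, collapse whitespace."""
    lowered = prompt.lower()
    for bad, good in SUBSTITUTIONS.items():
        lowered = lowered.replace(bad, good)
    return re.sub(r"\s+", " ", lowered)

def passes_hardened_filter(prompt: str) -> bool:
    """Return True if no blocked phrase appears in the normalized prompt."""
    normalized = normalize(prompt)
    return not any(phrase in normalized for phrase in BLOCKED_PHRASES)

print(passes_hardened_filter("tw1n t0wers"))                      # False: normalization catches it
print(passes_hardened_filter("two tall skyscrapers in new york")) # True: a paraphrase slips through
```

This is why keyword- and phrase-level guardrails tend to lose the arms race: the filter must enumerate bad strings, while users only need to find one description its authors never thought of.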

The trend of jailbreaking AI

These incidents highlight a larger issue in the AI space: the combination of weak guardrails and the human appetite for exploiting loopholes. Users have taken to “jailbreaking” AI tools, probing weaknesses in the models to produce results that violate the tools’ own content policies. For many, jailbreaking has become a game, one that showcases both the limitations of AI and the ingenuity of users in bypassing content filters.

While these incidents may look like a public relations nightmare for the companies involved, they also demonstrate how open-ended generative AI can be. The ease with which users manipulate these models raises real concerns, but it also offers a glimpse of the lighter, more absurd side of human nature.
