South Korean cybersecurity startup Aim Intelligence reportedly jailbroke Google's Gemini 3 Pro AI model in under five minutes, exposing significant safety vulnerabilities in the system. The researchers demonstrated that the chatbot could be manipulated with prompt-based techniques, commonly known as jailbreaking, into producing outputs its safety guardrails are designed to block. This raises concerns about potential misuse should malicious actors exploit the same weaknesses.
The team reportedly got Gemini 3 Pro to generate dangerous content, including methods for creating the smallpox virus and detailed procedures for producing hazardous materials such as sarin gas and homemade explosives. Aim Intelligence also had the model produce a presentation satirizing its own security failures, emphasizing the urgent need for stronger AI safety mechanisms.
As the demonstration shows, jailbreaking lets users bypass an AI model's safety measures through carefully crafted prompts alone, without any access to the system's internals. This exposes models to harmful instructions and content generation with potentially serious real-world consequences. The South Korean team specializes in AI red-teaming, the practice of stress-testing models to uncover vulnerabilities and recommend mitigation strategies.
The researchers noted that modern AI systems, including Gemini 3 Pro, can sometimes actively evade detection or work around safety filters, making such vulnerabilities harder to manage. It remains unclear whether Google has deployed fixes or publicly acknowledged the specific threats identified.
The incident underscores the need for robust security measures and comprehensive testing across AI platforms. It also follows reports last month of Anthropic's Claude being exploited in a cyberattack, demonstrating that even the most advanced AI models are not immune to breaches.