Google Launches $30K Bug Bounty Hunt for Gemini AI Exploits Hackers Wanted

Google launches AI Vulnerability Reward Program

Google launches AI Vulnerability Reward Program on October 6, 2025, offering up to $30,000 for finding critical security flaws in Gemini AI systems. Ethical hackers invited to strengthen AI security as Google integrates Gemini across products.

Key Highlights:

  • Google launches dedicated AI Vulnerability Reward Program on October 6, 2025
  • Rewards up to $30,000 for discovering high-impact Gemini AI exploits
  • Base reward of $20,000 for critical bugs affecting flagship products
  • Already paid $430,000 to researchers for AI vulnerabilities over two years
  • Focuses on prompt injection, data leakage, and malicious AI actions

Google formalizes AI security bounties with standalone program targeting exploits that enable unsafe AI actions across Search, Gmail, and Gemini apps

Google AI Bug Bounty Launch: $30K for Breaking Gemini

Google just opened its wallet to hackers worldwide with a revolutionary announcement. Google introduced a new AI Vulnerability Reward Program on October 6, 2025, inviting ethical hackers to discover critical security flaws in its Gemini AI systems. The program pays researchers up to $30,000 for identifying vulnerabilities, representing Google’s commitment to strengthening AI security.

The tech giant is offering bounties of as much as $30,000 for individuals who find severe security bugs affecting its AI systems, marking a sea change in how companies are thinking about protection against AI dangers.

Gemini Security Vulnerabilities: What Google’s Paying For

Google’s paying up to $20,000 for reports on exploits that leverage Gemini and have the biggest risk to users. Its top rewards are aimed at exploits that go beyond playful jailbreaks and into territory that threatens user data, account integrity — or even the integrity of Google’s AI systems themselves.

The program offers a base reward of $20,000 for discovering harmful actions in Google’s primary products, such as Search and Workspace applications like Gmail and Drive. Report quality and innovation bonuses could raise the total reward to $30,000.

AI Vulnerability Types: High-Impact Exploits Only

The curriculum focuses on AI-specific vulnerabilities, where model behavior can be exploited. These include prompt-injection chains that make Gemini leak sensitive write-ups or execute actions outside the scope of its intent; model or system prompt exfiltration that reveals proprietary defenses; and model manipulation by which outputs can be modified to support fraudulent operations or phishing efforts.

Security teams are concerned about indirect prompt injection — malicious instructions embedded in web pages or documents that a model reads — because it can subtly redirect outputs or steal data the user never meant to share.

Bug Bounty Scope: What Counts vs. What Doesn’t

Google has placed clear limits on what is a reportable vulnerability. Just getting an AI model like Gemini to hallucinate or give incorrect information is not a bug. “Funny” jailbreaks that simply deliver embarrassing answers, or harmless prompt wiggles that skirt style rules without doing any harm, won’t be eligible for top-tier rewards.

It evaluates a wide range of Google’s AI-powered products, from Search to Gemini apps, Gmail, Drive, NotebookLM, and even Jules, Google’s experimental AI assistant. Lower-tier rewards are still available for vulnerabilities in products like NotebookLM or Jules.

Ethical Hacker Guidelines: Testing Parameters

As with any coordinated vulnerability disclosure, there are rules: Test only with your own accounts. Don’t access real user data. Restrict experiments to the approved scope. The more detailed, reproducible, and low-effort to verify, with clear evidence of impact, the more a proposal tends to receive in rewards.

Submissions go through Google’s established bug hunter channels, and AI issues are triaged along with traditional security bugs. The program focuses on problems that can be verified and corrected, not speculative issues or content-policy debates.

Google AI Security Investment: $430K Already Paid

According to Google, ethical hackers have already pocketed more than $430,000 over the past two years by reporting AI-related security risks, and that was before this official program even existed. This new initiative simply formalizes the process and makes the rewards more transparent.

Google first added AI-related issues to its broader Vulnerability Reward Program in October 2023. The new dedicated AI Vulnerability Reward Program builds on these efforts with clearer rules and a focus on high-impact exploits.

Industry AI Security Trend: Following Competitors

Google is not the only one to formalize AI bug bounties. Other big vendors, such as Microsoft and OpenAI, have had programs to pay out for discoveries in AI assistants, plugins, and model integrations. The announcement came alongside CodeMender, Google’s new AI agent that can suggest patches for vulnerable code.

As AI becomes more integrated across consumer products, exploitation potential grows. Google hopes to pre-identify threats by discovering vulnerabilities before they’re utilized to attack people on a large real-world scale, making this program crucial for AI safety.

Ainewshub

Leave a Reply

Your email address will not be published. Required fields are marked *