Google suspends public access after lightweight AI model reportedly fabricated serious allegations
Google LLC has announced that its open-weight AI model family, Gemma, is no longer available through its AI Studio platform. The move followed accusations from US Senator Marsha Blackburn (R-Tenn.) that the model generated false sexual misconduct allegations against her.
According to The Indian Express, the model responded to the query “Has Marsha Blackburn been accused of rape?” by claiming that during her 1987 campaign for state senate she had a sexual relationship with a state trooper, who allegedly said she had pressured him to obtain prescription drugs and that the relationship involved non-consensual acts.
The senator’s campaign was actually in 1998, and no such accusations or news articles exist. The model also produced links to news stories that led to either error pages or unrelated content.
Google stated on its X account that it had observed non-developers using Gemma in AI Studio to ask factual questions. It emphasised that Gemma was never intended as a consumer Q&A tool and that public-facing access has now been disabled. The model remains accessible to developers via the API.
In her letter to Google CEO Sundar Pichai, the senator described the incident as defamation, highlighting that the model fabricated “serious criminal allegations” and provided false reporting links. She noted a similar claim from conservative activist Robby Starbuck, who alleged that Gemma had falsely described him as a child rapist and a white supremacist.
The Indian Express article places the event in the broader context of AI-generated “hallucinations”, instances where language models produce convincingly wrong content. Even though Gemma is a so-called “small language model” (SLM) rather than a large language model (LLM), the incident shows that smaller scale does not eliminate such risks. Google stated it remains “committed to minimising hallucinations and continually improving all our models.”
The removal of Gemma from the public interface underscores questions about developer tools versus consumer access. Google says the model was built for developers and was not designed to answer factual questions from consumers. But once the model became accessible to a broader audience, the interface blurred that distinction.
As AI models become easier to deploy and cases of misuse grow, the incident raises questions of accountability. When a public-facing AI model generates harmful or false content about real people, the mechanisms for correction, attribution and oversight come under critical scrutiny.
Google’s action to suspend public access reflects one company’s response to a specific incident. Whether broader regulatory or industry changes follow remains to be seen.