DeepMind’s Silence: How Openness in AI Research Is Fading


In the race to shape the future of artificial intelligence, openness appears to be losing ground to strategic restraint. Google DeepMind, once hailed for its prolific contributions to AI research, is tightening the flow of information. According to internal sources, the company has enacted stricter controls on what its scientists can publish, especially research that could either benefit rivals or reflect poorly on Google’s own models.

For a long time, DeepMind was known as a place where top-tier research was not only conducted but openly shared. Its lineage now includes the landmark 2017 transformer paper, "Attention Is All You Need"—published by researchers at Google Brain, the team that later merged into DeepMind—which introduced the architecture at the heart of today's large language models. But priorities have shifted. As one researcher bluntly put it, "I cannot imagine us putting out the transformer papers for general use now."

The shift follows the 2023 merger of DeepMind and Google's Brain team, a move driven in part by concern that Google was falling behind in the AI arms race. Since then, a more product-focused culture has emerged, one that prizes commercialization over academic prestige. The transition shows up most clearly in tighter publication policies: certain papers, particularly those tied to generative AI, now face a six-month embargo and a gauntlet of internal reviews. Some researchers describe the process as bureaucratic; others call it outright stifling.

This change isn’t just theoretical—it’s having real consequences on careers and morale. For scientists whose professional growth depends on publication in peer-reviewed journals, the new policies are seen as a bottleneck. One former employee described it as “a career killer.”

The gatekeeping reportedly cuts both ways. DeepMind blocked a paper that exposed limitations in Google's Gemini model, but also halted another that detailed security vulnerabilities in OpenAI's systems, wary of escalating a public rivalry. Though the company insists it follows a responsible disclosure policy, insiders suggest caution now overrides transparency.

Internally, priorities are increasingly skewed toward improving the Gemini suite of AI products. Teams working on research not directly tied to product development face greater difficulty securing compute resources and datasets. The message from leadership is clear: DeepMind is a company, not a university lab.

In recent months, Alphabet has seen its share price climb, aided in part by the rapid rollout of AI-enhanced products such as Project Astra, a multimodal agent capable of interpreting audio, video, and text in real time. But the gains come with trade-offs. The tension between open science and corporate secrecy is growing, and the people who once flocked to DeepMind to push the frontiers of AI for the public good are now reconsidering whether that mission still aligns with reality.

For those watching the evolution of the AI industry, DeepMind’s pivot underscores a broader shift—one where cutting-edge research is no longer a shared public good, but a guarded corporate asset.