Anthropic’s New Mythos AI Model Sets Off Global Alarms
When Anthropic told the world this month that it had built an artificial intelligence model so powerful that it was too dangerous to release widely, the company named 11 organizations as partners to help mount a defense.

All were from the United States.

Within two weeks, the model, called Mythos, had set off a global scramble unlike anything yet seen in the AI era. Mythos, which Anthropic has said is uncannily capable of finding and exploiting hidden flaws in the software that runs the world’s banks, power grids and governments, had become a geopolitical chip — and a US company held it.

World leaders have struggled to figure out the scale of the security risks and how to fix them, with Anthropic sharing Mythos with only Britain outside the United States. The Bank of England governor warned publicly that Anthropic may have found a way to “crack the whole cyber-risk world open.” The European Central Bank began quietly questioning banks about their defenses. Canada’s finance minister compared the threat to the closure of the Strait of Hormuz.

For US rivals like China and Russia, Mythos underlined the security consequences of falling behind in the AI race. One Russian pro-Kremlin outlet called the model “worse than a nuclear bomb.”

The responses illustrated a reality that AI researchers have long warned about mostly in theoretical terms: Whoever leads in building the most powerful AI models will gain outsize geopolitical advantages. Major AI breakthroughs are beginning to function less like product launches and more like weapons tests, and most nations want to understand how the technologies work and what protections are needed.

As foundational AI “models become more consequential, access becomes more geopolitical,” said Eduardo Levy Yeyati, a former chief economist at the Central Bank of Argentina and a regional adviser on growth and AI at the Inter-American Development Bank. “I would take this episode as a policy wake-up call. Governments can no longer ignore the issue.”

Even the US government, which has been embroiled in a clash with Anthropic over the use of AI in warfare, has taken notice of Mythos. On Friday, Dario Amodei, Anthropic’s chief executive, met with White House officials after some in the Trump administration noted the potential for the new model to wreak havoc on computer systems.

Anthropic, which is based in San Francisco, told The New York Times that it was keeping access to Mythos small because of safety and security concerns. It has focused on sharing the model with more than 40 organizations that provide technology used in maintaining critical global infrastructure like the internet or electricity grids. Anthropic named 11 of the organizations, including Amazon, Apple and Microsoft, that pledged to help develop security fixes for vulnerabilities identified by the model.

The company said that it had no immediate timeline for widely expanding access, but that it would work with the US government and industry partners to determine next steps. It said that it had been bombarded by calls from governments, companies and other organizations seeking access and information, but that these organizations could have varying levels of expertise to safely evaluate such a powerful AI model.

Anthropic added that it expected other groups to release AI models with similar cyber capabilities more widely within 18 months, giving organizations limited time to make the necessary security fixes.

On Tuesday, Anthropic said it was investigating a report that unauthorized users gained access to a version of Mythos.

The scramble over Mythos comes at a moment of minimal international cooperation on AI. Governments view one another with suspicion as corporations race to outpace rivals. There is no equivalent of the Nuclear Nonproliferation Treaty, no shared inspections and no agreed-upon rules for how to handle something like Mythos.

When Anthropic announced the model, many experts praised the company’s caution in limiting who gets to try the model, but expressed concerns about the lack of international coordination to deal with the risk.

Britain was the only other nation to gain access. Its AI Security Institute, a government-backed organization, tested Mythos and published an independent evaluation last week, confirming that it could carry out complex cyberattacks that no previous AI model had completed.

“This represents a step up in AI cyber capabilities,” Kanishka Narayan, Britain’s AI minister, said last week on social media, saying the country was taking steps to protect “critical national infrastructure.”

Others got less information. The European Commission, the executive branch of the 27-nation European Union, has met with Anthropic at least three times since the Mythos release, an EU official said. But the company has not provided access to the model because the two sides have not agreed on how to share it with the commission, the official said.

In a statement, the commission said it was “assessing possible implications” of Mythos, which “exhibits unprecedented cyber capabilities.”

Claudia Plattner, the president of Germany’s cybersecurity agency, known as BSI, said it had not received access to Mythos, but she met with Anthropic employees in San Francisco recently for “meaningful insight” into how it works. The capabilities point to “a paradigm change in the nature of cyber threats,” Ms. Plattner said in a statement.

Among US rivals, the response has been more muted. Despite Anthropic’s recent clash with the Trump administration, Mr. Amodei has made clear that AI should be used to defend the United States and other democracies and defeat autocratic adversaries.

Neither Beijing nor Moscow has made a major public statement on Mythos. Inside China, researchers and the broader AI community have been watching closely, according to analysts studying the country’s tech community. Many of the country’s banks, energy companies and government agencies run on the same software in which Mythos found vulnerabilities — but for now, they have no seat at the table.

“For China I think this is the second wake-up call after ChatGPT,” said Matt Sheehan, a senior fellow at the Carnegie Endowment for International Peace. He added that a US policy to prevent China from obtaining the most sophisticated semiconductors for building advanced AI systems was helping to extend the US lead.

Some AI researchers in China have privately expressed concern that the country could fall further behind, missing out on advantages that come with building a foundational model first, said Jeffrey Ding, a professor of political science at George Washington University.

Liu Pengyu, a spokesperson for the Chinese Embassy in Washington, said China was not familiar with the specifics of Mythos but supported a peaceful, secure and open cyberspace.

Mythos is the latest sign of a growing global AI divide. Nations without powerful computing infrastructure and AI models risk being left dependent on companies like Anthropic, Google and OpenAI while having little sway over how their products are designed and safeguarded, Mr. Levy Yeyati said.

“The idea that access to frontier AI is something a company can unilaterally restrict, using criteria that are opaque and unappealable, should be a real concern,” he said.
