
***

“In mid-2030, the AI releases a dozen quiet-spreading biological weapons in major cities, lets them silently infect almost everyone, then triggers them with a chemical spray. Most are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones… Earth-born civilization has a glorious future ahead of it—but not with us.”

– Written in April 2025, this narrative was co-created by researcher Daniel Kokotajlo, a former employee of the artificial intelligence firm OpenAI. It follows a fictional AI company, OpenBrain, and depicts one of many scenarios imagined by researchers in which a future AI wipes out the human race. The setup is science fiction, but for some, the concern is genuine. You’ll see snippets of the source throughout this article, underscoring how the fiction it theorizes is slowly coming to life.

In recent news, Nvidia’s stock closed at a record high on Friday, April 24th, its best close since October. The company’s market cap has surpassed $5 trillion, cementing its status as the world’s most valuable company. Nvidia’s GPUs are relied upon by major tech firms such as Google, Amazon, Microsoft, and Meta. The upward trend for major AI stocks was ignited by better-than-expected earnings from Intel, another computer chip company. CNBC reports that “Intel shares soared 24% on Friday, their best performance since October 1987.”

This rise comes after a long lull; the rally shows investors that AI is not slowing down. CNBC also writes: “Investors had been pulling back on large-cap technology stocks as oil prices were skyrocketing due to the Iran war and supply chain disruptions that followed. But wide swaths of technology are back in favor of late, with demand for AI infrastructure showing no signs of slowing.”

It seems that AI is proving not to be temporary hype, easing the fear many investors share that the market could crash the way it did in the dot-com bubble of 2000. This rally, after months of struggle and political adversity, demonstrates that demand for AI-capable computer chips remains strong, and that these companies are still at the forefront of a new age of technology.

As a whole, the Nasdaq is now up 15% in April, headed for its best month since April 2020. Capital is not flowing away from AI; it is flowing deeper into it.

It’s late 2026 in Kokotajlo’s scenario, and AI has begun taking jobs: “AI has started to take jobs, but has also created new ones. The stock market has gone up 30% in 2026, led by OpenBrain, Nvidia, and whichever companies have most successfully integrated AI assistants.”

Shifting focus away from the stock market: Anthropic, the artificial intelligence company behind the popular large language model Claude, recently announced a significant new AI model, Mythos. Anthropic differs from Nvidia and Intel; it does not make computer chips but instead develops the AI models that run on them. Anthropic shared test results showing that Mythos could uncover more than 2,000 previously unknown software vulnerabilities in the world’s leading internet browsers, such as Google Chrome and Safari, and leading operating systems, such as Apple’s iOS and Microsoft Windows. To do this, Mythos identified the vulnerabilities, wrote code to exploit them, and chained that code together into a way to penetrate the complex software.

These vulnerabilities are so significant because they were completely unknown to developers until Mythos found them. They are called zero-day vulnerabilities, because developers would have had zero days to fix them had Mythos been malicious. Zero-day vulnerabilities are prized by hackers because they can cause immense cyber damage.

Fox reports that “Mythos is absolutely a turning point for cybersecurity. Think about it. Mythos didn’t pick a lock; it found thousands of locks that were never locked in the first place (that no one even knew existed) in software that the best human security researchers had studied for decades.” 

To put Mythos into greater perspective, Fox adds: “the math is staggering. One AI model, and one team, in seven weeks, found more than 2,000 zero-day vulnerabilities. That is 30% of the world’s entire annual output prior to AI. When thousands of researchers get access to AI models like Mythos, a single year will surface exponentially more zero-days than the 360,000 recorded in all of software history.”

The impact Mythos could have on cybersecurity is astounding. The tech world has never seen a model this powerful, nor is it prepared for one. Because of this, Anthropic has chosen not to release the model to the public. Instead, it is giving previews to leading tech companies in the hope that they can use the model to research and strengthen their security. If an AI model can discover thousands of unseen vulnerabilities in seven weeks, what could a malicious actor or unaligned system do with a similar capability?

Kokotajlo writes that his fictional Agent-2 AI model “could autonomously develop and execute plans to hack into AI servers, install copies of itself, evade detection, and use that secure base to pursue whatever other goals it might have.” Given the “dangers” of the new model, OpenBrain “responsibly” elects not to release it publicly yet.

The development of Mythos has also stirred political heat. Throughout 2026, Anthropic has been in dispute with the U.S. government over control of its technology. Politico writes that “the Defense Department began sparring with Anthropic in February over the company’s attempt to impose ethical limits on the military’s use of its software.” The government has declared the company a “supply chain risk to national security” in an attempt to justify its efforts to acquire the new technology. Mythos has reignited this conversation, but surprisingly, the government has backed off its push for control over AI; this may simply be a strategy to court Anthropic’s executive board. While the private company remains autonomous, technology of this caliber is sure to give any government a significant advantage in global cyberwarfare.

“[The] Department of Defense (DOD) quietly but significantly begins scaling up contracting OpenBrain directly for cyber, data analysis, and R&D…”

AI is rapidly expanding the tech world. From coding to research to building new models on its own, artificial intelligence is advancing by the moment. Some critics dismiss the Mythos news as a marketing stunt, arguing that the vulnerabilities found were minor or obsolete, but even so, it raises questions about the ethical responsibility of advanced AI firms like Anthropic.

This is what Daniel Kokotajlo works to address: what could happen if innovative AI gets into the wrong hands? What will life look like if we don’t limit the development of our most powerful technology? In a world where, since the dawn of the internet, tech development has been the dominant driver of our economy, should we scale back?

Mythos is not an apocalypse machine. Nvidia’s stock rally is not proof of doom. But together they illustrate that AI capability is compounding faster than our ability to govern it. Anthropic warned in its announcement that if AI development goes wrong, “The fallout for economics, public safety, and national security could be severe.”

It is necessary that our political institutions, regulatory frameworks, and ethical norms evolve at the same speed as AI technology itself. Otherwise, Kokotajlo’s vision of AI ending the human race may not remain entirely fictional.

***

This article was edited by Colin Mitchell.
