Leading companies and industries in the world of generative Artificial Intelligence (AI) promise to harness the technology's benefits to transform humanity for the better. From enhancing day-to-day customer and employee experiences to contributing to groundbreaking healthcare discoveries to paving the way for sustainability and human rights, AI may well have the capacity to deliver on that promised transformation. At the same time, many policymakers, tech experts, journalists, and concerned members of the public are warning of the urgent risks AI poses. Prominent figures such as OpenAI CEO Sam Altman and former Microsoft CEO Bill Gates even signed a statement on AI risk arguing that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Statements like these might seem overblown to those who use generative AI models such as ChatGPT only for homework help, translating texts, or quick, clear explanations of a topic. However, it is becoming increasingly obvious that generative AI has the potential to fundamentally change society. The question we must now ask is: is that change worth it? Just because we have the capacity to create these intricate AI models, should we?
“Generative AI-nxiety,” as some scholars call it, ranges from minor worries to existential dread. Algorithmic bias and discrimination in AI outputs, whether in simple ChatGPT prompts or in job recruitment scenarios, have proven to be a real danger, particularly for women and people of color. Other frequently raised concerns include ethical doubts about the future of AI, as well as the very real possibility that many jobs will be lost to cheaper and more efficient AI models. Increasingly concerning is the threat AI poses in spreading election misinformation, as illustrated by the recent viral fake Harris campaign ad that used an AI-generated voice to present a misleading “pro-abortion” stance.
While fears of AI wiping out humanity are likely unrealistic and overblown (sometimes to conspiratorial levels), several valid risks remain. The Center for AI Safety, author of the Statement on AI Risk, has extensively outlined the possible disaster scenarios that AI could cause. It is hard to ignore how eerily similar many of these cases, ethical issues, and risks are to those that leading nuclear weapons scientists and experts have long warned against.
For context, the development of nuclear weapons has been a crucial component of U.S. national security since World War II. The Manhattan Project began in 1942 with the goal of developing and using the world’s first atomic bomb before Nazi Germany could. Because the U.S. government pursued its nuclear projects so rapidly, nuclear waste accumulated at sites across the country. The failure to implement a permanent solution for nuclear waste storage and disposal has cost Americans well over $300 billion, a figure that does not include the billions still being spent on nuclear plants and military nuclear forces. Of the 2018 Department of Energy budget, $12 billion was allocated to nuclear weapons programs, while about $6 billion went to addressing the legacy waste of the Manhattan Project. The Congressional Budget Office estimates that $634 billion will be spent on nuclear forces between 2021 and 2030, an average of just over $60 billion a year, with annual federal spending on nuclear weapons programs and nuclear systems projected to rise steadily from $42 billion to $69 billion over the decade.
Expert predictions put the potential fallout costs of AI on par with those of nuclear power. Venture capital firms have released assessments and expert reports warning that AI’s returns may not be as high as predicted, and that investors may instead lose billions of dollars. Unsurprisingly, many are asking whether all the money flooding into AI is worth it. Government programs and big tech giants are projected to invest $1 trillion in AI infrastructure in the coming years, but new research from Goldman Sachs raises the question of whether this is too much spending for too little benefit.
During the Manhattan Project, similar assessments and reports warned of the existential threats posed by nuclear weapons development. Not long into the development of the world’s first atomic weapon, leading scientists, experts, and policymakers began to realize that the danger of what they were creating was far greater than they had expected. In May 1945, President Harry S. Truman, who ultimately gave the go-ahead to use the atomic bomb, formed an Interim Committee of top officials tasked with recommending the proper use of the bomb. The group debated not only whether the bomb should be used immediately to bring about the end of World War II but also the post-war fate of nuclear energy. The committee generally agreed that domestic legislation and international agreements were needed to control its use and prevent an arms race. Even so, it is hard to imagine that anyone on that committee, or the leaders of the Manhattan Project, could have envisioned the full extent to which the environmental, political, and economic implications of nuclear power would shape foreign affairs in the decades that followed. Had they known, would they have made the same decisions? Would they have been so eager to be the first to use such powerful technology? The atomic bombs dropped on Hiroshima and Nagasaki were the most destructive weapons ever used in combat, and they did end the most catastrophic conflict in human history. But whether they should have been used in the first place is a question that lingers into the present. The creators of advanced generative AI models must ask themselves: just because we can, does that mean we should?
Once again mirroring the development and consequences of nuclear weapons and energy, AI models and infrastructure are taking a heavy toll on an already strained environment. Training a single AI model can consume thousands of megawatt-hours of electricity and emit hundreds of tons of carbon into the atmosphere, while also straining already-limited freshwater resources. The International Energy Agency projects a tenfold increase in AI energy demand by 2026, and in the U.S., AI is projected to account for 6% of the nation’s total electricity consumption that same year. Moreover, AI’s environmental impacts fall unevenly across regions and communities, leaving the most vulnerable populations to bear the worst effects.
In an ironic twist, the threats of AI not only overlap with those of nuclear weapons but also amplify the existing dangers of atomic power. Dr. James Johnson, Senior Lecturer and Director of Strategic Studies in the Department of Politics and International Relations at the University of Aberdeen, warns of these threats in his book AI and the Bomb, in which he applies Cold War-era thinking to highlight the overlapping and compounding security threats of emerging AI. He explains that future advances in AI may allow adversaries to target nuclear assets directly with AI cyber weapons. Reliance on AI algorithms could also lead to the misinterpretation of an adversary’s signals, unnecessarily escalating a situation and potentially triggering a nuclear crisis.
For these reasons, AI could not only rack up costs in the trillions of dollars and damage the environment but also heighten tensions between nuclear-armed states. Are policymakers and world leaders ready to take on these risks, which, if mishandled, could lead to another cold war? If advances in AI are not taken as seriously as nuclear power, they could undermine mutually assured destruction and international stability.
In recent years, policymakers, experts, and researchers have taken the first steps to address the potential existential issues and legacy of AI technologies. In October 2023, the United Nations created the High-level Advisory Body on Artificial Intelligence, composed of 32 experts from multiple disciplines, to coordinate AI governance globally, harness the technology for good, and address its risks. The UN does not overlook AI’s benefits, either: it applies the technology in over 400 ways across the UN network. Member countries also recently adopted the Global Digital Compact during the Summit of the Future, a historic conference held in September 2024 at UN Headquarters in New York. The compact provides a comprehensive framework for the global governance of digital technologies and calls for international cooperation to harness AI’s opportunities to advance human rights and sustainable development.
What the future of AI holds for humanity is impossible to predict. Still, we know that these technologies will inevitably impact every aspect of society, for better or for worse. As tech giants and industry leaders continue to rapidly advance AI, it is crucial that they account for future generations of humanity and the unpredictable consequences of their actions.
***
This article was edited by Francesca Rosario Bolastig and Hannorah Ragusa.