Emerging technologies offer new ways for world powers to kill each other faster and more efficiently
Report calls for arms control measures to mitigate risks of emerging military technologies such as AI, autonomous weapons, and hypersonic missiles
Emerging technologies such as artificial intelligence, autonomous weapons systems, and hypersonic missiles have the potential to significantly alter the global security landscape. They offer military powers new capabilities and opportunities to gain battlefield advantages, but they also pose significant risks and unintended consequences. In this context, a report published by the Arms Control Association in February sheds light on the dangers of these emerging technologies and the urgent need for arms control measures to mitigate their potential negative impacts. This article examines the key findings of the report and its implications for policymakers, defense officials, and the general public.
The report, titled "Assessing the Dangers: Emerging Military Technologies and Nuclear (In)Stability," examines the debate surrounding the use of these technologies for military purposes and their impact on strategic stability. While military powers have sought to exploit advanced technologies to gain battlefield advantages, the report warns that not enough attention has been paid to the potential dangers of these weapons. Some officials and analysts believe that these emerging technologies will revolutionize warfare, rendering obsolete the weapons and strategies of the past. However, the report stresses the need for policymakers, defense officials, diplomats, journalists, educators, and members of the public to gain a more profound understanding of the unintended and hazardous outcomes of these technologies before major powers move ahead with their weaponization.
Armaments that operate independently without "meaningful human control," commonly known as lethal autonomous weapons systems, are being developed by several nations, including China, Israel, Russia, South Korea, the United Kingdom, and the United States. According to the Campaign to Stop Killer Robots, these weapons pose significant risks and have generated alarm among diplomats, human rights campaigners, and arms control advocates. For example, the U.S. Air Force is currently developing the Skyborg Autonomous Control System, which can control multiple drone aircraft and coordinate their actions with minimal human oversight. However, according to the report, the deployment of fully autonomous weapons in battle raises concerns about reduced human oversight of combat operations, potential violations of international law, and weakened barriers to escalation from conventional to nuclear war.
The use of increasingly advanced weapons by global superpowers has led to civilian suffering in countries such as Vietnam, Afghanistan, and Yemen, with many arguing that such weapons have caused more harm than good. On the other hand, some people believe that a country must keep up with other nations' military technology in order to defend itself. As AI technology continues to outperform humans in various domains, including speech recognition, armies are gradually incorporating algorithms into their operations. The question of how to prevent the creation of killer robots, however, remains open. One prerequisite is a clear understanding of the potential dangers of AI.
“Although the rapid deployment of such systems appears highly desirable to many military officials, their development has generated considerable alarm among diplomats, human rights campaigners, arms control advocates, and others who fear that deploying fully autonomous weapons in battle would severely reduce human oversight of combat operations, possibly resulting in violations of international law, and could weaken barriers that restrain escalation from conventional to nuclear war.”—Arms Control Report
In the latter half of the 20th century, there were several nuclear close calls, some of which were due to misinterpretations, limitations, or technological failures. Although artificial intelligence (AI) is often considered immune to human fallibility, research suggests that such claims could have unforeseen and deadly consequences. A 2018 report by the Rand Corporation warned that an increased reliance on AI could lead to catastrophic mistakes due to pressure to use it before it's technologically mature, susceptibility to adversarial subversion, or the belief that the AI is more capable than it is, leading to fatal errors. Despite the Pentagon's adoption of five ethical principles for the use of AI in 2020, many ethicists argue that a total ban on lethal autonomous weapons systems is the only safe option.
“An increased reliance on AI could lead to new types of catastrophic mistakes. There may be pressure to use it before it is technologically mature; it may be susceptible to adversarial subversion; or adversaries may believe that the AI is more capable than it is, leading them to make catastrophic mistakes.”—2018 Rand Corporation Report
Meanwhile, hypersonic missiles, which can travel at Mach 5 or faster, are now part of the military arsenals of several countries, including the United States, China, and Russia. Last year, Russia admitted to deploying Kinzhal hypersonic missiles during its invasion of Ukraine, marking the first use of such weapons in combat. China has also tested multiple hypersonic missile variants using high-altitude balloons, and countries such as Australia, France, India, Japan, Germany, Iran, and North Korea are also developing hypersonic weapons.
The report also highlights the dangers of cyberwarfare and automated battlefield decision-making, and warns of their potential to escalate conflicts. Michael Klare, a board member at the Arms Control Association and the lead author of the report, cautions that major powers are rushing to weaponize advanced technologies without fully considering the consequences, including the risk of civilian casualties and accidental escalation of conflict. While the benefits of cutting-edge military technologies have been widely discussed, the risks have received less attention from the media and the U.S. Congress, Klare notes. The report emphasizes the need for bilateral and multilateral agreements between countries to address the risks associated with emerging technologies and minimize their dangers.
“While the media and the U.S. Congress have devoted much attention to the purported benefits of exploiting cutting-edge technologies for military use, far less has been said about the risks involved.”—Michael Klare
According to the report, it is crucial for major powers to engage in discussions about imposing binding restrictions on the military use of destabilizing technologies. The paper emphasizes that prohibiting attacks on nuclear command, control, communications, and intelligence (C3I) systems of another state, both through cyberspace means and hypersonic missile strikes, should be the top priority. In addition, measures should be implemented to prevent swarm attacks by autonomous weapons on another state's missile submarines, mobile intercontinental ballistic missiles (ICBMs), and other retaliatory systems. The report also calls for strict limitations on automated decision-support systems with the capacity to inform or initiate significant battlefield decisions, including human control over such devices. Without adopting measures such as these, cutting-edge technologies will continue to be converted into military systems at an ever-increasing pace, posing significant risks to world security. The report concludes that a better understanding of the unique threats to strategic stability posed by these technologies and the imposition of restraints on their military use can help reduce the risks of catastrophic consequences.
Following this development, a bipartisan group of lawmakers in the US has introduced a bill aimed at preventing artificial intelligence from launching nuclear weapons without meaningful human control. The Block Nuclear Launch by Autonomous Artificial Intelligence Act—introduced by Sen. Ed Markey (D-Mass.) and Reps. Ted Lieu (D-Calif.), Don Beyer (D-Va.), and Ken Buck (R-Colo.)—seeks to ensure that "any decision to launch a nuclear weapon should not be made" by AI. The proposed legislation would codify the existing policy of maintaining a human "in the loop" for all critical actions to execute the president's decisions to initiate or terminate the use of nuclear weapons, and would prohibit the use of federal funds for launching nuclear weapons or selecting targets for their use. The goal is to ensure that humans alone hold the power to make life-or-death decisions about the use of deadly force, especially with regard to the most dangerous weapons.
“While U.S. military use of AI can be appropriate for enhancing national security purposes, use of AI for deploying nuclear weapons without a human chain of command and control is reckless, dangerous, and should be prohibited.”—Ken Buck
According to the recent AI Index Report published by the Stanford Institute for Human-Centered Artificial Intelligence, 36% of surveyed AI experts expressed concern over automated systems leading to catastrophic nuclear-level events. While the pace of AI's acceleration remains unclear, lawmakers believe that responsible foresight is necessary to protect future generations from devastating consequences. Although dozens of countries support the Treaty on the Prohibition of Nuclear Weapons, none of the world's nine nuclear powers, including the United States, have signed on.