
How Might AI Affect the Rise and Fall of Nations?

Photo by agsandrew/Adobe Stock

Nations across the globe could see their power rise or fall depending on how they harness and manage the development of artificial intelligence (AI). Regardless of whether AI poses an existential risk to humanity, governments will need to develop new regulatory frameworks to identify, evaluate, and respond to the variety of AI-enabled challenges to come.

With the release of advanced forms of AI to the public early in 2023, public policy debates have rightly focused on such concerns as the exacerbation of inequality, the loss of jobs, and the potential threat of human extinction if AI continues to evolve without effective guardrails. There has been less discussion of how AI might affect geopolitics and of which actors might take the lead in developing the next generation of advanced AI.

As AI continues to advance, geopolitics may never be the same. Humans organized in nation-states will have to work with another set of actors—AI-enabled machines—of equivalent or greater intelligence and, potentially, highly disruptive capabilities. In the age of geotechnopolitics, human identity and human perceptions of our roles in the world will be distinctly different; monumental scientific discoveries will emerge in ways that humans may not be able to comprehend. Consequently, the AI development path that ultimately unfolds will matter enormously for the shape and contours of the future world.

We outline several scenarios that illustrate how AI development could unfold in the near term, depending on who is in control. We held discussions with leading technologists, policymakers, and scholars spanning many sectors to generate our findings and recommendations. We presented these experts with the scenarios as a baseline to probe, reflect on, and critique. We sought to characterize the current trajectory of AI development and identify the most important factors for governing the evolution of this unprecedented technology.

Who could control the development of AI?

U.S. Companies Lead the Way

U.S. President Joe Biden appears virtually in a meeting with business and labor leaders about the CHIPS and Science Act, which aims to spur U.S. domestic chip and semiconductor manufacturing.

Photo by Jonathan Ernst/Reuters

The U.S. government continues to allow private corporations to develop AI without meaningfully regulating the technology or intervening in a way that changes those corporations’ behavior. This approach fits with the long-standing belief in the United States that the free market (and its profit-driven incentives) is the most effective mechanism to rapidly advance technologies like AI.[1] In this world, U.S. government personnel continue to lag behind engineers in the U.S. technology sector, both in their understanding of AI and in their ability to harness its power. Private corporations direct almost all research and development funding to improve AI, and the vast majority of U.S. technical talent continues to flock to Silicon Valley. The U.S. government seeks to achieve its policy goals by relying on the country’s innovators to develop new inventions that it can eventually purchase. In this world, the future relationship between the U.S. government and the technology sector looks much like the present: Companies engage in aggressive data gathering on consumers, and social media continues to be a hotbed for disinformation and dissension.

U.S. Government Takeover

Top U.S. technology leaders, including Tesla chief executive officer (CEO) Elon Musk, Meta Platforms CEO Mark Zuckerberg, Alphabet CEO Sundar Pichai, OpenAI CEO Sam Altman, Nvidia CEO Jensen Huang, Microsoft CEO Satya Nadella, IBM CEO Arvind Krishna, and former Microsoft CEO Bill Gates take their seats for the start of a bipartisan AI Insight Forum, in September 2023.

Photo by Leah Millis/Reuters

AI advances are proceeding at a rapid rate, and concerns about catastrophic consequences lead the U.S. government—potentially in coordination with like-minded allies—to seize control of AI development. The United States chooses to abandon its traditional light-handed approach to regulating information technology and software development and instead embarks on large-scale regulation and oversight. This results in a monopoly by the U.S. government and select partners (e.g., the United Kingdom) over AI computing resources, data centers, advanced algorithms, and talent through nationalization and comprehensive regulation. Similar to past U.S. government initiatives, such as the Apollo Program and the Manhattan Project, AI is developed under government authority rather than under the auspices of private companies. In the defense sector, this could lead to an arms race dynamic as other governments initiate AI development programs of their own for fear of being left behind in an AI-driven era. Across the instruments of power, such nationalization could also shift the balance between haves and have-nots, as countries that fail to keep up with the transition see their economies suffer because they cannot develop AI and incorporate it into their workforces.

Chinese Surprise

A Baidu sign is seen at the 2023 World Artificial Intelligence Conference in Shanghai, China.

Photo by Aly Song/Reuters

Akin to a Sputnik moment, three Chinese organizations—Huawei, Baidu, and the Beijing Academy of Artificial Intelligence (BAAI)—announce a major AI breakthrough, taking the world by surprise. In this world, AI progress in China is initially overlooked and consistently downplayed by policymakers from advanced, democratic economies. Chinese companies, research institutes, and key government labs leapfrog ahead of foreign competitors, in part because of their ability to absorb vast amounts of government funding. State-of-the-art Chinese AI models have been steadily advancing, leveraging a competitive data advantage drawn from the country’s massive population. Finally, China’s military-civil fusion enables major actors across industry, academia, and government to share resources across a common AI infrastructure. BAAI benefits from strategic partnerships with industry leaders and access to significant computational resources, such as the Pengcheng Laboratory’s Pengcheng Cloudbrain-II. This combination of world-leading expertise and enormous computing power allows China to scale AI research at an unprecedented rate, producing breakthroughs in transformative AI research that catch the world off guard. These breakthroughs prompt intense concern among U.S. political and military leaders that China’s newfound AI capabilities will give it an asymmetric military advantage over the United States.

Great Power Public-Private Consortium

Canada’s Prime Minister Justin Trudeau, Japan’s Prime Minister Fumio Kishida, U.S. President Joe Biden, Germany’s Chancellor Olaf Scholz, Indonesia’s President Joko Widodo, Italy’s Prime Minister Giorgia Meloni, Citigroup CEO Jane Fraser, and European Commission President Ursula von der Leyen confer during the 2023 Group of Seven summit in Hiroshima, Japan.

Photo by Jonathan Ernst/Reuters

Across the world, robust partnerships among government, global industry, civil society organizations, academia, and research institutions support the rapid development and deployment of AI. These partnerships form a consortium that carries out multi-stakeholder collaborations with access to large-scale data sets and shared training, computing, and storage resources. Through funding from many governments, the consortium develops joint solutions and benchmarking efforts to evaluate, verify, and validate the trustworthiness, reliability, and robustness of AI systems. New and existing international governance bodies, including the Abu Dhabi AI Council, rely on diverse participation and contributions to set standards for responsible AI use. The result is a healthy AI sector that supports economic growth concurrently with the development and evaluation of equitable, safe, and secure AI systems.

What have we learned?

Countries and companies will clash in new ways, and AI could become an actor, not just a factor

Countries and corporations have long competed for power. Big technology companies challenge governments in many of the old ways while adding new approaches to the mix. Similar to traditional multinational corporations, such companies reach across national boundaries; however, big tech companies also influence local communities in much more comprehensive and invasive ways because they touch consumers’ lives and gather data on their locations, activities, and habits. Big tech companies also affect national economies, domestic policy, and local politics in new ways because they influence the spread of information (and disinformation) and create new communities and subcultures. This has contributed to polarized populations in several countries and regime change in others. AI companies specifically enjoy access to vast investment funds and massive computing power, giving them additional advantages.

Although technology has often influenced geopolitics, the prospect of AI means that the technology itself could become a geopolitical actor. AI could have motives and objectives that differ considerably from those of governments and private companies. Humans’ inability to comprehend how AI “thinks” and our limited understanding of the second- and third-order effects of our commands or requests of AI are also very troubling. Humans have enough trouble interacting with one another. It remains to be seen how we will manage our relationships with one or more AIs.

We are entering an era of both enlightenment and chaos

The borderless nature of AI makes it hard to control or regulate. As computing power expands, models are optimized, and open-source frameworks mature, the ability to create highly impactful AI applications will become increasingly diffuse. In such a world, well-intentioned researchers and engineers will use this power to do wonderful things, ill-intentioned individuals will use it to do terrible things, and AIs could do both wonderful and terrible things. The net result is neither an unblemished era of enlightenment nor an unmitigated disaster, but a mix of both. Humanity will learn to muddle through and live with this game-changing technology, just as we have with so many other transformative technologies in the past.

The United States and China will lead in different ways

Although U.S. policymakers often worry that China will take the lead in a race for AI, the consensus among the experts we consulted was that China is highly unlikely to produce the next major AI advance, given U.S. superiority in empowering private-sector innovation. China is unlikely to catch up to or surpass the United States in producing AI breakthroughs, but it could lead in different arenas. China’s demonstrated record of refining technological capabilities produced elsewhere, and the Chinese Communist Party’s preference for directing technology toward societal use cases, mean that China could see greater success integrating today’s technologies into its workflows, weapons, and systems, whereas the United States could fall behind in end-to-end systems integration.

Technological innovation will continue to outpace traditional regulation

The U.S. government is already considered to be behind leading private developers in its understanding of AI. This is due to several factors: the relative slowness of government policymaking compared with the fast pace of technology development, the government’s inability to pay competitive salaries for scarce talent, and the lack of clarity on whether and how AI should be regulated at all, among others. Looking ahead, this dynamic is unlikely to change. Governments will continue playing catch-up with the private sector to understand and respond to the newest and most-capable AI developments.

The rapid pace and diffusion of advanced AI technology will make multilateral regulation difficult. AI lacks the chokepoints of traditional models of nonproliferation, such as the global nuclear nonproliferation regime, meaning that it will be comparatively difficult for governments to control AI using traditional regulatory techniques.

What should government policymakers do to protect humanity?

The potential dangers posed by AI are many. At the extreme, they include the threat of human extinction, which could come about through an AI-enabled catastrophe, such as a well-designed virus that spreads easily, evades detection, and destroys our civilization. Less dire, but still deeply worrisome, is the threat to democratic governance if AIs gain power over people.[2] Even if AIs do not kill humans or overturn democracy, authoritarian regimes, terrorist groups, and criminal organizations could use AI to cause great harm by spreading disinformation and manipulating public opinion. Governments need to view today’s AI landscape as a regulatory training ground in preparation for the threats posed by even more-advanced AI capabilities, including the potential arrival of artificial general intelligence.

Governments should focus on strengthening resilience to AI threats

In addition to more-traditional regulatory practices, government policies on AI should focus on strategies of resilience to mitigate potential AI threats, because strategies aimed solely at denial will not work. AI cannot be contained through regulation, so the best policy will aim to minimize the harm that AI might do. This will probably be most critical in biosecurity,[3] but harm reduction also includes countering cybersecurity threats, strengthening democratic resilience, and developing emergency response options for a wide variety of threats from state, substate, and nonstate actors. Governments will need either to build entirely new capabilities to put this policy into action or to expand existing agencies, such as the Cybersecurity and Infrastructure Security Agency. Governments should also take a more comprehensive approach to regulation than hardware controls alone, which will not be enough to mitigate harms in the long run.

Governments should look beyond traditional regulatory techniques to influence AI developments

Unlike other potentially dangerous technologies, AI lacks obvious inputs that could be regulated and controlled. Data and computing power are widely available to companies large and small, and no single entity can reliably predict where the next revolutionary AI advance might originate. Consequently, governments should consider expanding their toolboxes beyond traditional regulatory techniques. Two creative mechanisms would be for governments to invest in establishing robust, publicly owned data sets for AI research and to issue challenge grants that encourage socially beneficial uses of AI. New techniques could also include uniform liability rules that clarify when developers are liable for harms involving AI, requirements for how AI systems should be assessed, and controls on whether certain highly capable models can be proliferated. Ultimately, governments could buy a seat at the table by providing economic incentives to companies in exchange for more influence in ensuring that AI is used for the good of all.

Governments should continue support for innovation

U.S. superiority in AI is largely the result of its superiority in innovation. To maintain this lead, the U.S. government should continue to support innovation by funding national AI resources. Although AI development at the frontier is led by private-sector companies with vast computing resources, many experts believe that the next AI breakthrough could stem from smaller models with novel architectures.[4] Academic institutions have led many of the theoretical developments that made the existing generation of AI possible. Stimulating the academic community with even modest resources could build on this legacy and yield significant AI improvements.

Governments should partner with the private sector to improve risk assessments

In light of the likely widespread proliferation of advanced AI capabilities to private- and public-sector actors and well-resourced individuals, governments should work closely with leading private-sector entities to develop advanced forecasting tools, wargames, and strategic plans for dealing with what experts anticipate will be a wide variety of unexpected AI-enabled catastrophic events.

Notes

Funding for this work was provided by gifts from RAND supporters. This work was conducted by the Acquisition and Technology Policy Program of the RAND National Security Research Division.

This commentary is part of the RAND Expert Insights series, which presents perspectives on timely policy issues. All RAND Expert Insights undergo peer review to ensure high standards for quality and objectivity.

Our mission to help improve policy and decisionmaking through research and analysis is enabled through our core values of quality and objectivity and our unwavering commitment to the highest level of integrity and ethical behavior. To help ensure our research and analysis are rigorous, objective, and nonpartisan, we subject our research publications to a robust and exacting quality-assurance process; avoid both the appearance and reality of financial and other conflicts of interest through staff training, project screening, and a policy of mandatory disclosure; and pursue transparency in our research engagements through our commitment to the open publication of our research findings and recommendations, disclosure of the source of funding of published research, and policies to ensure intellectual independence. For more information, visit www.rand.org/about/research-integrity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.

The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND’s publications do not necessarily reflect the opinions of its research clients and sponsors.
