High-performance supercomputers can solve complex problems in climate science, aerospace design, biomedicine, and particle physics. But they are also used to develop new kinds of stealth technology, run complex ballistics models, and simulate nuclear weapon detonations.
Last month, China announced new restrictions on the export of both supercomputing technology and high-performance drones. The announcement came just a week after the Obama administration unveiled a new national initiative promising to deliver an exascale supercomputer 30 times more powerful than China’s Tianhe-2—currently the world’s fastest supercomputer—by 2025. And it came just months after the U.S. Department of Commerce blocked the shipment of tens of thousands of American-made Intel chips bound for Chinese supercomputers, including Tianhe-2, citing security concerns.
In December 2010, the President’s Council of Advisors on Science and Technology (PCAST) delivered a 100-plus page series of recommendations to Congress and the executive branch. Titled “Designing a Digital Future,” the report outlines how the U.S. government could best leverage its resources to maintain its edge in IT infrastructure and computing. But amid the recommendations and research and development goals, the report offered a word of caution: Avoid an unproductive and potentially detrimental supercomputing “arms race” with China.
The most recent escalation in the U.S.-China race for supercomputing supremacy is tied to an emerging set of 21st-century threats like cyberwarfare, global terrorism, and state-sponsored hacking. And, as PCAST warned five years ago, that could prove counterproductive for everyone.
The competition for supercomputing supremacy between the United States and China reaches back at least a decade, but that race has largely been a contest for national prestige. For China, an obsession with owning the world’s fastest supercomputer is driven by its desire to show the world its technological prowess, says James Andrew Lewis, a senior fellow and director of the Strategic Technologies Program at the Center for Strategic and International Studies. The underlying national security implications of supercomputing have remained largely in the background as the two countries have jockeyed for the top spot on the TOP500 list of the world’s fastest supercomputers.
By refusing to allow China to take delivery of American-made Intel processors, the U.S. has pushed those national security issues back to the fore. On paper, the Commerce Department denied Intel’s application for export on the grounds that Tianhe-2 and other Chinese supercomputers had been used for “nuclear explosive activities” that are “contrary to the national security or foreign policy of the United States.”
Particularly since the signing of the Comprehensive Nuclear-Test-Ban Treaty in 1996, which prohibits live testing of nuclear weapons, such modeling has grown increasingly important. The U.S. Department of Energy owns four of the top 10 fastest supercomputers in the world, using them at least partially for this very kind of nuclear weapons modeling and research. But while concerns over China’s weapons modeling capability may be perfectly valid, previous boosts to China’s supercomputing power haven’t triggered a U.S. response like the one seen earlier this year.
So what’s changed?
“The Chinese have gotten better at computing for military purposes—military intelligence purposes—in the last year or two, and that’s probably of some concern,” Lewis says. “The U.S. after the Snowden revelations needs to rethink the way it does signals intelligence and that usually means bigger computers. So I think it’s those external events—in particular better Chinese performance—that’s driving some of this.”
Just as nuclear weapons drove the development of supercomputing technology during the Cold War, threats like cyber-espionage and cyberwarfare have emerged as driving forces in security policy over the past several years. Militaries and intelligence agencies now have access to more data than ever before, more data to protect than ever before, and more potential adversaries trying to breach or attack their data networks than ever before. It’s a massive big data problem—one specially suited to bigger and faster supercomputing platforms.
“All of the signals intelligence agencies—the NSA, the GCHQ in the UK—are very, very interested in big data, because if they can crunch this data in a quick enough fashion, that enables them to begin to make connections between nodes in these networks,” says Tim Stevens, a teaching fellow in the war studies department at King’s College London. “And that of course is the dream of signals intelligence.”
There are two principal realms in which superior supercomputing could make a huge difference on the national security front, Stevens says. The first is counterterrorism: using big data analytics to sift through mountains of data and find signals in the noise, identifying patterns of behavior or connections between individuals and/or events that are relevant to national security. The other, more important, realm is cybersecurity—an area in which many analysts believe the U.S. has already fallen behind.
“Being able to process network data in near real time to see where threats are coming from, to see what kinds of connections are being made by malicious nodes on the network, to see the spread of software or malware on those networks, and being able to model and interdict and track the dynamics on the network regarding things that national security agencies are interested in,” Stevens says, “those are the realms in which supercomputing has a real future.”