No one in the security community could remember seeing a piece of malware that used four zero days in a single attack. Stuxnet, as Microsoft eventually dubbed the malware based on file names in its code, was easily the most sophisticated cyberattack ever seen in the wild.
By the end of that summer, Symantec’s researchers had assembled more pieces of the puzzle: They’d found that the malware had spread to thirty-eight thousand computers around the world but that twenty-two thousand of those infections were in Iran. And they’d determined that the malware interacted with Siemens’s STEP 7 software, one of the programs industrial control system operators use to monitor and send commands to physical equipment. Somehow, the analysts determined, Stuxnet’s goal seemed to be linked to physical machines—and probably in Iran. It was only in September 2010 that the German researcher Ralph Langner dove into the minutiae of that Siemens-targeted code and came to the conclusion that Stuxnet’s goal was to destroy a very specific piece of equipment: nuclear enrichment centrifuges.
With that final discovery, the researchers could put together all of the links in Stuxnet’s intricate kill chain. First, the malware had been designed to jump across air gaps: Iran’s engineers had been careful enough to cut off Natanz’s network entirely from the internet. So, like a highly evolved parasite, the malware instead piggybacked on human connections, infecting and traveling on USB sticks. There it would lie dormant and unnoticed until one of the drives happened to be plugged into the enrichment facility’s isolated systems. (Siemens software engineers might have been the carriers for that malware, or the USB malware might have been more purposefully planted by a human spy working in Natanz.)
Once it had penetrated that air-gapped network, Stuxnet would unfold like a ship in a bottle, requiring no interaction with its creators. It would silently spread via its panoply of zero-day techniques, hunting for a computer running Siemens STEP 7 software. When it found one, it would lie in wait, then unleash its payload. Stuxnet would inject its commands into so-called programmable logic controllers, or PLCs—the small computers that attach to equipment and serve as the interfaces between physical machines and digital signals. Once a PLC was infected, the centrifuge it controlled would violently tear itself apart. In a final touch of brilliance, the malware would, before its attack, pre-record feedback from the equipment. It would then play that recording back to the plant’s operators while it committed its violence, so that to an operator observing the Siemens display, nothing would appear amiss until it was far too late.
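To make that record-then-replay trick concrete, here is a minimal, purely illustrative Python sketch of the concept. It bears no relation to Stuxnet’s actual code, which ran as injected logic on Siemens S7 PLCs; every class name and number below is hypothetical.

```python
# Illustrative simulation of the "record normal feedback, then replay it while
# attacking" concept described above. NOT Stuxnet's actual code; all names and
# values here are hypothetical.
import random


class SimulatedCentrifuge:
    """Stand-in for physical equipment reporting rotor speed in Hz."""

    def __init__(self, nominal_hz=1064.0):  # hypothetical nominal speed
        self.speed_hz = nominal_hz

    def read_sensor(self):
        # Normal operation: small jitter around the current speed.
        return self.speed_hz + random.uniform(-0.5, 0.5)


class CompromisedController:
    """Sits between the equipment and the operator's display (the HMI)."""

    def __init__(self, device):
        self.device = device
        self.recording = []      # feedback captured during normal operation
        self.attacking = False

    def record_normal_feedback(self, samples=30):
        # Phase 1: quietly capture what "normal" looks like.
        self.recording = [self.device.read_sensor() for _ in range(samples)]

    def start_attack(self):
        # Phase 2: drive the equipment outside its safe range...
        self.attacking = True
        self.device.speed_hz = 1410.0  # hypothetical destructive over-speed

    def value_shown_to_operator(self, tick):
        # ...while replaying the benign-looking recording to the display.
        if self.attacking and self.recording:
            return self.recording[tick % len(self.recording)]
        return self.device.read_sensor()


if __name__ == "__main__":
    plc = CompromisedController(SimulatedCentrifuge())
    plc.record_normal_feedback()
    plc.start_attack()
    for t in range(3):
        print(f"actual speed: {plc.device.read_sensor():7.1f} Hz | "
              f"operator sees: {plc.value_shown_to_operator(t):7.1f} Hz")
```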
Stuxnet’s only flaw was that it was too effective. Among computer security researchers, it’s practically a maxim that worms spread beyond their creators’ control. This one was no exception. Stuxnet had propagated far beyond its Natanz target to infect computers in more than a hundred countries across the world. Other than in the centrifuge caverns of Natanz, those collateral infections hadn’t caused physical destruction. But they had blown the ultrasecret malware’s cover, along with an operation that had been millions of dollars and years in the making.
Once Stuxnet’s purpose became clear, the United States and Israel quickly became the prime suspects for its creation. (It would be two more years, however, before a front-page story in The New York Times confirmed the two countries’ involvement.)
When Stuxnet’s existence went public, the Obama administration held a series of tense meetings to decide how to proceed. Should they pull the plug on the program before it was definitively tied back to the United States? It was only a matter of time, they figured, before Iran’s engineers would learn the true source of their problems and patch their software vulnerabilities, shutting Stuxnet out for good.
Instead, the Americans and Israelis behind the worm decided they had nothing to lose. So in a go-for-broke initiative, they released another, final series of Stuxnet versions that were designed to be even more aggressive than the original. Before Iran’s engineers had repaired their vulnerabilities, the malware destroyed nearly a thousand more of their centrifuges, offering one last master class in cybersabotage.
* * *
■
Stuxnet would change the way the world saw state-sponsored hacking forever. Inside Natanz’s haywire centrifuges, the leading edge of cyberwarfare had taken a giant leap forward, from Russia’s now primitive-looking web disruptions of 2007 and 2008 to virtuosic, automated physical destruction.
Today, history is still weighing whether Bush’s and Obama’s executive decisions to carry out that cyberattack were worth their cost. According to some U.S. intelligence analysts, Stuxnet set back the Iranian nuclear program by a year or even two, giving the Obama administration crucial time to bring Iran to the bargaining table, culminating in a nuclear deal in 2015.
But in fact, those long-term wins against Natanz’s operation weren’t so definitive. Despite the confusion and its mangled centrifuges, the facility actually increased its rate of uranium enrichment over the course of 2010, at times progressing toward bomb-worthy material at a rate 50 percent faster than it had in 2008. Stuxnet might have, if anything, only slowed the acceleration of Ahmadinejad’s program.
And what was Stuxnet’s price? Most notably, it exposed to the world for the first time the full prowess and aggression of America’s—and to a lesser extent Israel’s—most elite state hackers. It also revealed to the American people something new about their government and its cybersecurity priorities. After all, the hackers who had dug up the four zero-day vulnerabilities used in Stuxnet hadn’t reported them to Microsoft so that they could be patched for other users. Instead, they had exploited them in secret and left Windows machines around the world vulnerable to the same techniques that had allowed them to infiltrate Natanz. When the NSA chose to let its Tailored Access Operations hackers abuse those software flaws, it prioritized military offense over civilian defense.
Who can say how many equally powerful zero days the U.S. government has squirreled away in its secret collection? Despite assurances from both the Obama and the Trump administrations that the U.S. government helps to patch more vulnerabilities than it hoards in secret, the specter of its hidden digital weapons cache has nonetheless haunted defenders in the cybersecurity community for years. (Just a few years later, in fact, that collection of zero days would backfire in an absurd, self-destructive fiasco.)
But in a broader and more abstract sense, Stuxnet also allowed the world to better imagine malware’s potential to wreak havoc. In darkened rooms all over the globe, state-sponsored hackers took notice of America’s creation, looked back at their own lackluster work, and determined that they would someday meet the new bar Stuxnet had set.
At the same time, political leaders and diplomats around the world recognized in Stuxnet the creation of a new norm, one that was not merely technical but geopolitical. America had dared to use a form of weaponry no country had before. If that weapon were later turned on the United States or its allies, how could it object on principle?
Had physical destruction via code become an acceptable rule of the global game? Even the former NSA and CIA director Michael Hayden seemed shaken by the new precedent. “Somebody crossed the Rubicon,” Hayden said in an interview with The New York Times. The attack that the West’s prophets of cyberwar had always feared, one capable of shutting down or destroying physical equipment from anywhere in the world, had come to pass. And Americans had been the first to do it. “No matter what you think of the effects—and I think destroying a cascade of Iranian centrifuges is an unalloyed good—you can’t help but describe it as an attack on critical infrastructure,” Hayden concluded.
Stuxnet was no “cyber 9/11” or “electronic Pearl Harbor.” It was a highly targeted operation whose damage was precisely limited to its pinpoint victim even when the worm spread out of its creators’ control. But the fact remained: In an attempt to prevent Iran from joining the nuclear arms race America had itself started with the bombings of Hiroshima and Nagasaki sixty-five years earlier, the United States had sparked another form of arms race—one with severe, unforeseeable consequences.
“This has a whiff of August 1945,” Hayden would say later in a speech. “Somebody just used a new weapon, and this weapon will not be put back in the box.”
PART III
EVOLUTION
The power to destroy a thing is the absolute control over it.
15
WARNINGS
In late 2015, half a decade after Stuxnet opened a Pandora’s box of digital threats to the physical world, the first monster had finally emerged from it. That monster was Sandworm.
The Christmas blackout attack on Ukraine made clear that Russia’s hackers were indeed waging cyberwar—perhaps the first true, wide-scale cyberwar in history. They had crossed the same line as Stuxnet’s creators, from digital hacking to tangible sabotage. And they had also crossed a line from military to civilian, combining the unrestricted hybrid-warfare tactics of Estonia and Georgia with vastly more sophisticated and dangerous hacking techniques.
But even in late January 2016, only a handful of people in the world were aware of that ongoing threat. Two of them were Mike Assante and Rob Lee. When Assante had returned from the U.S. delegation’s fact-finding trip to Ukraine, he couldn’t share what he’d learned with Lee, since the agencies involved had put a firewall around the information as “for official use only.” But Lee, working from the network logs his Ukrainian contacts had shared with him and other forensic evidence, had already pieced together the anatomy of an extraordinary, multipart intrusion: BlackEnergy, KillDisk, rewritten firmware to lock out defenders, the telephone DDoS attack, disabling on-site electrical backups, and finally the phantom mouse attack that had hijacked the controls of the utility operators.
There was nothing to stop Sandworm from attacking again. Lee and Assante agreed they had played the government’s bureaucratic games long enough. It was time to publish a full report and warn the world.
But as Lee and Assante assembled their findings, they learned that the White House was still insisting on keeping the details of Ukraine’s blackout out of the public eye until the Department of Homeland Security’s Industrial Control Systems Cyber Emergency Response Team, or ICS-CERT, could publish a warning to electric utilities. When that report finally came in late February—two months after Sandworm’s attack—it included a statement that left Lee furious: “Public reports indicate that the BlackEnergy (BE) malware was discovered on the companies’ computer networks, however it is important to note that the role of BE in this event remains unknown pending further technical analysis.”
Lee and Assante knew perfectly well how BlackEnergy had been used in the attack: It was the remote-access Trojan planted on victim machines that had begun the long, devious chain of intrusions, leading up to the hackers opening the utilities’ circuit breakers.
Lee saw that ICS-CERT statement as practically a cover-up. By questioning BlackEnergy’s role in the attack, or even its existence on the utilities’ networks, the DHS was obscuring a key fact: that the hackers who’d planted that malware had used the same tool to target American utilities just a year earlier—that Americans, too, were at risk.
“The message was: ‘This doesn’t map to us; this is a Ukrainian thing,’ ” says Lee. “They misled the entire community.”
* * *
■
Over the next weeks, Lee says he protested in meetings and phone calls with contacts in the Department of Homeland Security, the Department of Energy, the NSA, and even the CIA, arguing that the White House and CERT were downplaying a serious, unprecedented new hacker threat that loomed over not just Ukraine but western Europe and the United States. He went so far as to publish an angry blog post on the SANS website. The gist of that entry, as Lee summarizes it today, was this: “This is bullshit. People need to know.” The actual text is lost to history; Assante asked Lee to delete the post out of political discretion.
Meanwhile, Lee and Assante fought with the White House for weeks over what they could publicly reveal about the blackout attacks as White House officials insisted on one revision after another to remove details they considered classified. After a month, the SANS researchers resorted to publishing their report through the Electricity Information Sharing and Analysis Center, or E-ISAC, a part of the North American Electric Reliability Corporation that answered to Congress, not the executive branch. The Obama administration had objected to the release until the last minute.
Even then, through that spring, Lee says he found himself combating misinformed or Pollyannaish government officials who had told energy utilities the Ukrainian attacks couldn’t have occurred in the United States. Representatives from the Department of Energy and NERC had reassured grid operators that the Ukrainians had used pirated software, left their networks unsecured, and hadn’t even run antivirus software. None of that was true, according to Lee and Assante.
But above all, Lee argued that the U.S. government had made an even greater, irreparable mistake: not simply being slow to warn the public and potential targets about Sandworm, or downplaying its dangers, but failing to send a message to Sandworm itself—or anyone else who might follow its path.
For years, since the first warnings of cyberwar in the late 1990s, hacker-induced blackouts had been the nightmare scenario that kept generals, grid operators, and security wonks awake at night. They had imagined and war-gamed military cyberattacks on the power grid for decades. Even President Clinton had spoken about the need to be prepared for that most fundamental form of digital sabotage, nearly fifteen years before Ukraine’s blackout.
Now, as Lee saw it, the moment had finally come, and the U.S. government had done little more than sweep the incident under the rug. Perhaps most dangerous of all, it hadn’t issued a single public statement condemning the attack. “We talk and talk and talk about this red line for years, and then, when someone crosses it, we say nothing,” Lee said. “Someone in government needed to stand up and say a cyberattack on civilian infrastructure is something we won’t stand for.”
In fact, just a year before, the federal government had offered exactly the sort of response Lee had called for, though for a less novel form of attack. In late November 2014, North Korean hackers posing as a hacktivist group known as the Guardians of Peace revealed they had broken into the servers of Sony Pictures in retaliation for its comedy film The Interview, which depicted the assassination of the North Korean dictator Kim Jong Un. The intruders destroyed the contents of thousands of computers and stole reams of confidential information that they later leaked onto the web, trickling the files out for weeks, including four unreleased feature films.
In the weeks following Sony’s breach, the FBI issued a public statement swiftly identifying North Korea as the culprit, cutting through its hacktivism false flag. The FBI director, James Comey, went so far as to give a public speech laying out the evidence for North Korea’s involvement, including how the hackers had failed on multiple occasions to use proxy computers as they’d intended to, and thus revealed IP addresses linked to their previous hacking operations—bread crumbs that led back to the Kim regime. President Obama himself spoke about the attack in a White House press conference, warning the world that the United States wouldn’t tolerate North Korea’s digital aggression.
“They caused a lot of damage, and we will respond. We will respond proportionally, and we’ll respond in a place and time and manner that we choose,” President Obama said. (The exact nature of that response has never been confirmed, but North Korea did experience a nationwide internet outage just days later, and the administration announced new financial sanctions against the Kim regime the next month.)
“This points to the need for us to work with the international community,” Obama continued, “to start setting up some very clear rules of the road in terms of how the internet and cyber operates.”
And yet a year later, when Russian hackers had launched a far broader and more dangerous attack deep inside civilian infrastructure, no government official offered statements about proportional responses or international “rules of the road.” No U.S. agency even named Russia as the offender, despite the numerous clues available to any researcher who looked. The Obama administration was virtually silent.
America and the world had lost a once-in-history chance, Lee argues, to definitively establish a set of norms to protect civilians in a new age of cyberwar. “It was a missed opportunity,” he says. “If you say you won’t allow something and then it happens and there’s crickets, you’re effectively condoning it.”
* * *
■
In fact, Obama’s most senior cybersecurity-focused official never doubted the gravity of Sandworm’s blackout attack. In late January, not long after the delegation to Ukraine had flown back to Washington, J. Michael Daniel sat in a highly secured situation room in the Eisenhower Executive Office Building, just beyond the grounds of the West Wing, receiving a briefing from Department of Homeland Security officials on the results of that fact-finding trip. Daniel, a soft-spoken career civil servant with a kind, nervous face and slightly thinning hair, listened carefully. Then he walked back down the hall to his office to meet with his own staff, who would assemble a report for the national security advisor and, in turn, President Obama.
As he spoke with the White House aides about what the president should know, Daniel found himself marveling aloud at the brazenness of the attackers. “We’ve clearly crossed the Rubicon,” he remembers saying, echoing Michael Hayden’s comments on Stuxnet three years earlier. “This is something new.”