  Within hours of the Shadow Brokers’ release, however, Microsoft put out an unexpected announcement: The zero-day vulnerability EternalBlue exploited wasn’t technically a zero day after all. In March, the company had, with no explanation at the time, released a patch for its Server Message Block flaw that neutered the NSA’s hacking technique, a full month before the Shadow Brokers had leaked it. The Washington Post would later confirm that the NSA had quietly warned Microsoft of the flaw when it learned that EternalBlue was among the tools the Shadow Brokers had stolen.

  With the news that a security patch was available, a new question arose: How many people had actually installed that patch? Updating software protections around the world has never been a simple fix so much as a complex epidemiological problem. Systems administrators neglect patches, or don’t account for all their computers, or skip patches for fear they’ll break features of software they need, or run pirated software that doesn’t receive patches at all. All of that means getting a security update out to vulnerable machines is often as involved and imperfect a process as getting humans around the world vaccinated, long after a vaccine is discovered.

  Over the next days, hints of the population of machines still unpatched against EternalBlue began to emerge. Security researchers had no way to determine the number of EternalBlue attacks directly, but they could scan the internet for another complementary piece of NSA malware called DoublePulsar, a backdoor program that had also been released by the Shadow Brokers and that was designed to be installed by EternalBlue on target machines. When anyone sent a computer infected with DoublePulsar a certain kind of network ping, it would respond with a recognizable, distinct acknowledgment.

  Curious researchers sent out those pings en masse to the entire internet. They immediately received tens of thousands of unique responses, each of which likely signaled a computer that had been hacked with the NSA’s skeleton key. Within a week of the Shadow Brokers’ release, that number was above 100,000. After two weeks, the count of potential victims had topped 400,000. The internet’s EternalBlue nightmare wasn’t over. And its full scale was about to become clear.
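  The scanning technique itself was simple in outline: send every reachable address a crafted probe and look for the implant's telltale reply. The sketch below is a hypothetical illustration of that probe-and-fingerprint pattern in Python; the probe bytes and the response signature are placeholders, not the researchers' actual DoublePulsar detection code.

```python
# Hypothetical sketch of mass probe-and-fingerprint scanning.
# PROBE and INFECTED_MARKER are placeholders, not the real SMB handshake
# or DoublePulsar response signature.
import socket
from concurrent.futures import ThreadPoolExecutor

PROBE = b"..."            # crafted request sent to each host (placeholder)
INFECTED_MARKER = b"..."  # distinctive reply from an implanted machine (placeholder)

def check_host(ip: str, port: int = 445, timeout: float = 2.0) -> bool:
    """Return True if a host answers the probe with the implant's telltale reply."""
    try:
        with socket.create_connection((ip, port), timeout=timeout) as conn:
            conn.sendall(PROBE)
            return INFECTED_MARKER in conn.recv(1024)
    except OSError:
        return False  # unreachable, filtered, or not speaking the protocol

def scan(addresses: list[str]) -> list[str]:
    """Probe many hosts in parallel and return those that look infected."""
    with ThreadPoolExecutor(max_workers=256) as pool:
        hits = pool.map(check_host, addresses)
        return [ip for ip, hit in zip(addresses, hits) if hit]
```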

  * * *

  ■

  Around 2:30 on a Friday afternoon, Marcus Hutchins returned from lunch at his local fish-and-chips shop in the small English seaside town of Ilfracombe, sat down in front of a computer in his bedroom, and discovered that the internet was on fire.

  Hutchins, a soft-spoken English twenty-two-year-old with a semi-controlled explosion of brown curly hair, was supposed to have the day off from his job as a malware analyst for the cybersecurity firm Kryptos Logic. But Hutchins was not one to draw neat boundaries in his life: He worked from home, and his home office was also his first-floor bedroom in his parents’ house. That bedroom was set up with three powerful desktop computers—each equipped with multiple monitors and water-cooling radiators to accommodate high-performance processing—as well as two laptops and a full rack of blinking servers.

  Hutchins used this elaborate bedroom rig to operate his own self-contained malware research center. On his server setup, he ran virtual machines that could simulate all manner of computers to test out new malware and safely watch it in action. One screen displayed a constant feed of spam and phishing emails he was collecting to analyze their sources and the evil programs often laced into their attachments.

  On another of his screens, Hutchins opened a U.K. cybersecurity research forum where he’d been trying to learn more about a certain piece of bank-fraud malware. He found a crisis unfolding. The British National Health Service was being ambushed with a ransomware outbreak. And this wasn’t the normal criminal ransomware that was increasingly targeting critical institutions like hospitals and police departments, encrypting their data and holding it hostage. This was something else: Thousands of the agency’s computers were being infected, and the number was growing with inhuman speed.

  The victim computers were locked, with a red screen demanding they pay $300 in bitcoin. “Your important files are encrypted,” the message read. “Maybe you are busy looking for a way to recover your files, but do not waste your time. Nobody can recover your files without our decryption service.” On the left of the screen, a countdown timer ticked down the hours over seven days until the hackers would delete the files’ decryption keys, leaving the computers’ data permanently, irrevocably scrambled.

  Researchers were calling the new ransomware WannaCry—an evocative name based on the .wncry extension it added to the file names after encrypting them. And soon it became clear exactly why the code was so virulent: It was using EternalBlue to spread. Each infected machine would scan local networks and the internet for machines that were still unpatched against that leaked NSA tool, use it to break into as many other computers as possible, and repeat.

  As WannaCry proliferated, chaos ensued. Thousands of people had their doctors’ appointments canceled in regions across the U.K. Some emergency rooms were temporarily closed, forcing patients to travel farther to hospitals lucky enough to have been spared by the attack. Hutchins could see that Britain’s woes were only a slice of a global disaster. The Spanish telecommunications firm Telefónica had been hit, too. So had Sberbank in Russia, the German railway firm Deutsche Bahn, and the French carmaker Renault, along with other victims as far-flung as universities in China and police departments in India.

  The United States had, by sheer luck, largely been spared so far. But as the ransomware wave swelled, it was a matter of hours or even minutes until America would be engulfed, too.

  The nightmare of an uncontrolled NSA-zero-day-propelled worm wreaking havoc across the world had come to pass. And the result was the worst ransomware outbreak anyone had ever seen. “I picked a hell of a fucking week to take off work,” Hutchins wrote on Twitter.

  * * *

  ■

  A hacker friend who went by the name “Kafeine” sent Hutchins a copy of WannaCry’s code, and Hutchins quickly began trying to dissect it. First, he spun up a simulated computer on his server, complete with fake files for the ransomware to encrypt, and ran the program in that quarantined test environment. He immediately noticed that before encrypting the fake files, the malware sent out a query to a certain very random-looking web address: iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com. That struck Hutchins as significant, though not unusual: A piece of malware pinging back to a domain like that usually represented communications with a command-and-control server somewhere that might be giving the infected computer instructions.

  Hutchins copied that long website string into his web browser and found, to his surprise, that no such site existed. So he visited the domain registrar Namecheap and bought that unattractive web address for $10.69. Hutchins hoped that in doing so, he might be able to steal control of some part of WannaCry’s horde of victim computers away from the malware’s creators. At least he might gain a tool to monitor the number and location of infected machines, a move that malware analysts call “sinkholing.”

  Sure enough, as soon as Hutchins set up that domain on a cluster of servers hosted by his employer, Kryptos Logic, it was bombarded with thousands of connections from every new computer that was being infected by WannaCry around the world. Hutchins could now see the enormous scale of the attack firsthand. And as he tweeted about his work, he began to be flooded with hundreds of emails from other researchers, journalists, and systems administrators trying to learn more about the plague devouring global networks. With his sinkhole domain, Hutchins was now suddenly pulling in information about those infections that no one else on the planet possessed.
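  Conceptually, a sinkhole of that kind has to do very little: answer every request to the registered domain and record where it came from. The toy sketch below shows that essential behavior, assuming a plain HTTP endpoint; it is an illustration of the idea, not Kryptos Logic's actual infrastructure.

```python
# Toy sketch of a sinkhole endpoint: accept every request to the registered
# domain and log which address it came from. Illustrative only, not the
# actual Kryptos Logic setup.
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer

class SinkholeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Each request is a beacon from a newly infected machine.
        print(f"{datetime.now(timezone.utc).isoformat()} beacon from {self.client_address[0]}")
        self.send_response(200)  # any answer at all marks the domain as "reachable"
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # suppress the default per-request access log; we print our own line above

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), SinkholeHandler).serve_forever()
```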

  For the next four hours, he responded to those emails and worked frantically to debug a map he was building to track the new infections popping up across the world. It was only at 6:30 p.m., around four hours after registering the domain, that his hacker friend Kafeine sent him a tweet posted by another security researcher, Darien Huss. It put forward a simple statement that shocked Hutchins: “Execution fails now that domain has been sinkholed.” In other words, since Hutchins’s domain had first appeared online, WannaCry’s new infections had continued to spread, but they hadn’t actually done any new damage. The worm seemed to be neutralized.

  Huss’s tweet included a snippet of WannaCry’s code that he’d reverse engineered. The code’s logic showed that before encrypting any files, the malware first checked if it could reach Hutchins’s web address. If not, it went ahead with corrupting the computer’s contents. If it did reach that address, it simply stopped in its tracks.
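  Reconstructed in rough form, the logic Huss described amounts to a single reachability check before any damage is done. The sketch below illustrates that behavior in Python; it is not the worm's own code, only a reconstruction of the check it performed.

```python
# Illustrative reconstruction of the kill-switch logic Huss's snippet showed:
# if the hard-coded domain answers, stop; if it doesn't, proceed to encrypt.
# A sketch of the behavior, not the worm's actual code.
import urllib.request

KILL_SWITCH_URL = "http://iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com"

def domain_is_reachable(url: str, timeout: float = 5.0) -> bool:
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True   # any response at all counts as "reachable"
    except Exception:
        return False  # DNS failure, timeout, refused connection...

if domain_is_reachable(KILL_SWITCH_URL):
    raise SystemExit(0)   # the domain exists: stop in its tracks
# ...otherwise the real malware would go on to encrypt the machine's files here
```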

  Hutchins hadn’t found the malware’s command-and-control address. He’d found its kill switch. The domain he’d registered was a way to simply, instantly turn off WannaCry’s mayhem around the world. It was as if he had fired his proton torpedoes through the Death Star’s exhaust port and into its reactor core, blown it up, and saved the galaxy, but without understanding what he was doing or even noticing his action’s effects for four hours.

  When he saw Huss’s tweet, Hutchins’s heart started racing. Could it be true? He needed to try his own test for confirmation. He ran a simulation on his server of a WannaCry infection and allowed it to reach out to his domain. Sure enough, it ceased its evil behavior the instant it connected. Then he ran the test again, this time blocking the malware’s connection to his sinkhole. In that second test, the computer’s files were immediately encrypted, and WannaCry’s menacing ransom message popped up on his screen. The test had confirmed that his kill switch worked.

  Hutchins reacted in a way that perhaps no one ever before in history has reacted to seeing his computer paralyzed with ransomware: He leaped up from his chair and jumped around his bedroom, overtaken with joy.

  * * *

  ■

  The goal of WannaCry’s creators remains a mystery. Were they seeking to make as much money as possible from their supercharged ransomware scheme? Or merely to inflict maximal global chaos? Either way, building a kill switch into their malware seemed like a strangely sloppy act of self-sabotage.*1

  The WannaCry programmers had been careless in other ways, too. The payment mechanism built into their code was, effectively, useless: Unlike better-designed ransomware, WannaCry had no automated system for distributing decryption keys to victims who had paid, or even keeping track of who had paid and who hadn’t. When that became clear to victims, they stopped paying. The entire scheme generated a total of less than $200,000, a smaller sum than the annual salary of many of the individual malware analysts tracking it.

  Some researchers came to the conclusion that WannaCry must have been released prematurely: Perhaps its creators had been testing their worm, and then, as worms tend to do—as Stuxnet had done seven years earlier—it spread beyond its creators’ control, before it was truly ready.

  Finally, in another critical act of carelessness, WannaCry’s coders had left clues about their identity, too. Within days, security researchers at Google and the Russian cybersecurity firm Kaspersky had noticed that code used in WannaCry overlapped with a favorite backdoor program of a group of North Korean government hackers known as Lazarus. By December 2017, the Trump White House would announce that it had determined North Korea was behind the attack. The same group of hackers who had devastated Sony three years earlier had now unleashed that same destruction on every network in the world, and only an accidental kill switch had prevented utter disaster.*2

  * * *

  ■

  By the end of 2017, theories of how the Shadow Brokers had pulled off their shocking theft of NSA secrets would begin to come to light, too. In December of that year, a sixty-seven-year-old former NSA staffer and developer for the agency’s Tailored Access Operations hacking team named Nghia Hoang Pho pleaded guilty to violating his security clearances. He’d taken home enormous troves of classified materials. He’d later tell a Maryland court that after bad performance reviews he’d merely sought to study the materials as a way to get ahead in his work. Pho was sentenced to sixty-six months in prison.

  That case connected with another piece of the narrative reported by The Wall Street Journal from months earlier, claiming that Russian government hackers had used their access to the antivirus software of Moscow-based Kaspersky Lab to steal a vast collection of NSA files from the home computer of a contract employee of the agency. The contractor, the report stated, had been foolish enough to not only violate his clearances and bring the top secret material home but also to run Kaspersky’s software, which—like most antivirus programs—included a capability that allowed the program to upload files to the company’s remote servers for analysis.

  Kaspersky responded in a statement, denying that it had any “inappropriate ties” to the Russian government that might have let Kremlin hackers exploit its antivirus code. A few weeks later, the company followed up with the results of an internal investigation: It had, the company admitted, uploaded a collection of NSA hacking tools in 2014. But it claimed to have immediately deleted them upon discovering what the files represented.*3

  Even as those clues added to the circumstantial evidence of Russia’s responsibility for the leak of the NSA’s secret armory, nothing suggested that either the Shadow Brokers or WannaCry was connected to Sandworm. But just as artists inspire one another, Sandworm was no doubt watching and learning from its hacker peers. The Shadow Brokers had made available a powerful hacking tool that a team of hyper-bellicose cyberwarriors could hardly ignore.

  The WannaCry worm that followed offered Sandworm a chance to observe a weapon of mass disruption in action—and, it would turn out, a few ideas about how to build an even more explosive one.

  *1 Just why that kill switch existed is another mystery. But as Hutchins and other malware researchers puzzled over that flaw, they came to believe that it might have been intended as a sort of anti-forensic technique, designed to make it harder for defenders to decipher WannaCry’s behavior. In that theory, the malware’s attempt to communicate out to a nonexistent domain was a test of whether the malware was running on a real victim’s machine or on some security researcher’s simulated one.

  In the sort of virtual machine Hutchins ran on his server for malware observation, the researcher wants the malware to think it’s running in the wild, but without ever letting it interact with the actual internet; otherwise it might start doing nasty things like sending spam or attacking other computers. So every attempt to connect with a web domain is answered with some arbitrary response, even if the website doesn’t actually exist. If the malware reaches out to an address its author knows doesn’t exist and still gets a response, it can cleverly deduce that it’s running in a simulation, like Neo in The Matrix after he’s taken the red pill. In that case, when the malware realizes it’s under the researcher’s microscope, it turns off its malicious features and behaves entirely innocently.
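  Under that theory, the check itself would be trivial to write. A minimal sketch, assuming the intent described above, might look like the following; it is a guess at the logic, not recovered WannaCry code.

```python
# Minimal sketch of the sandbox-detection theory: if a domain the author
# knows is unregistered nevertheless resolves, the environment is probably
# faking network responses. An assumption about intent, not recovered code.
import socket

def probably_in_sandbox(nonexistent_domain: str) -> bool:
    try:
        socket.gethostbyname(nonexistent_domain)
        return True            # a made-up name "resolved": responses are being faked
    except socket.gaierror:
        return False           # normal behavior: the name really doesn't exist

if probably_in_sandbox("iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com"):
    pass  # behave innocently; a researcher is probably watching
```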

  Of course, if that was in fact the ransomware programmers’ thinking, they’d been far too clever for their own good. The result was that the mechanism designed to make their malware appear harmless in a researcher’s sandbox could instead be used to render it ineffectual across the real internet, as Hutchins did.

  *2 Hutchins’s role as the hero of the WannaCry story would be complicated just three months later, when he was arrested by the FBI after attending the DEF CON hacker conference. Hutchins was charged with computer fraud and abuse related to his alleged creation and sale of banking malware years earlier. In July 2019, however, a judge sentenced him to no jail time, in part due to his WannaCry work.

  *3 Aside from Nghia Hoang Pho, another NSA staffer named Hal Martin remains a suspect in the Shadow Brokers case as of this writing. Martin, a contractor for the agency’s TAO group, was arrested in late 2016 for taking home terabytes of classified materials from the agency, much like Pho. In a court filing two years later, the judge presiding over his case revealed that Martin had sent suspicious private Twitter messages to two security researchers at Kaspersky in August 2016 asking for a meeting, which investigators believed might have been intended to sell or share classified information. Martin’s messages were sent just hours before the Shadow Brokers’ first leaks were announced. Kaspersky reported Martin to U.S. government contacts, leading to a subsequent raid on his house and his arrest.

  23

  MIMIKATZ

  In May 2012, Benjamin Delpy walked into his room at the President Hotel in Moscow and found a man dressed in a dark suit with his hands on Delpy’s laptop.

  Just a few minutes earlier, the twenty-five-year-old French programmer had made a quick trip to the front desk to complain about the room’s internet connection. He had arrived two days ahead of a talk he was scheduled to give at the nearby security conference Positive Hack Days, only to discover that his room had no Wi-Fi connection. Nor was the ethernet jack working. Downstairs, one of the hotel’s staff insisted he sit in the lobby while a technician was sent up to fix it. Delpy refused and went back to wait in the room instead.

  When he returned, as Delpy tells it, he was shocked to find the stranger standing at the room’s desk, a small black roller-board suitcase by his side, his fingers hurriedly retracting from Delpy’s keyboard. The laptop still showed a locked Windows log-in screen.

  The man mumbled an apology about his key card working on the wrong room, brushed past Delpy, and was out the door before Delpy could even react. “It was all very strange for me,” Delpy said. “Like being in a spy film.”

  It didn’t take Delpy long to guess why his laptop had been the target of a literal black-bag job. That computer contained the subject of his presentation at the Moscow conference, an early version of a program he’d written called Mimikatz.