“The message was, ‘I’m going to make you feel this everywhere.’ Boom boom boom boom boom boom boom,” Assante says, imagining the attack from the perspective of a bewildered grid operator. “These attackers must have seemed like they were gods.”
That night, for the next leg of their trip, the team boarded a flight to the western Ukrainian city of Ivano-Frankivsk, at the foot of the Carpathian Mountains, arriving at its tiny Soviet-era airport in the midst of a snowstorm. The next morning they visited the headquarters of Prykarpattyaoblenergo, the power company that had taken the brunt of the pre-Christmas attack.
The power company executives politely welcomed the Americans into their modern building, which sat under the looming smokestacks of the abandoned coal power plant in the same complex. Then they invited them into their boardroom, seating them at a long wooden table beneath an oil painting of the aftermath of a medieval battle.
The attack the Prykarpattyaoblenergo executives described was almost identical to the one that hit Kyivoblenergo: BlackEnergy, corrupted firmware, disrupted backup power systems, KillDisk. But in this operation, the attackers had taken another step, bombarding the company’s call centers with fake phone calls—either to obscure customers’ warnings of the power outage or simply to add another layer of chaos and humiliation. It was as if the hackers were determined to impress an audience with the full array of their capabilities or to test the range of their arsenal.
There was another difference from the other utility attacks, too. When the Americans asked whether, as in the Kiev region, cloned control software had sent the commands that shut off the power, the Prykarpattyaoblenergo engineers said no, that their circuit breakers had been opened by another method.
At this point in the meeting, the company’s technical director, a tall, serious man with black hair and ice-blue eyes, cut in. Rather than try to explain the hackers’ methods to the Americans through a translator, he offered to show them. He clicked “play” on a video he’d recorded himself on his battered iPhone 5s.
The fifty-six-second clip shows a cursor moving around the screen of one of the computers in the company’s control room. The pointer glides to the icon for one of the breakers and clicks a command to open it. The video pans from the computer’s Samsung monitor to its mouse, which hasn’t budged. Then it shows the cursor moving again, seemingly of its own accord, hovering over a breaker and attempting again to cut its flow of power as the engineers in the room ask one another who’s controlling it.
The hackers hadn’t sent their blackout commands from automated malware, or even a cloned machine, as they’d done at Kyivoblenergo. Instead, they’d exploited the company’s IT help-desk tool to take direct control of the mouse movements of the stations’ operators. They’d locked the operators out of their own user interface. And before their eyes, phantom hands had clicked through dozens of breakers—each serving power to a different swath of the region—and one by one by one, turned them cold.
PART II
ORIGINS
Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.
FRANK HERBERT, DUNE
10
FLASHBACK: AURORA
Nine years before his Ukraine trip, on a piercingly cold and windy morning in March 2007, Mike Assante arrived at an Idaho National Laboratory facility thirty-two miles west of Idaho Falls, a building in the middle of a vast, high desert landscape covered with snow and sagebrush. He walked into an auditorium inside the visitor center, where a small crowd was gathering. The group included officials from the Department of Homeland Security, the Department of Energy, and the North American Electric Reliability Corporation (NERC), executives from a handful of electric utilities across the country, and other researchers and engineers who, like Assante, were tasked by the national lab to spend their days imagining catastrophic threats to American critical infrastructure.
At the front of the room was an array of video monitors and data feeds, set up to face the room’s stadium seating, like mission control at a rocket launch. The screens showed live footage from several angles of a massive diesel generator. The machine was the size of a school bus, a mint green, gargantuan mass of steel weighing twenty-seven tons, about as much as an M3 Bradley tank. It sat a mile away from its audience in an electrical substation, producing enough electricity to power a hospital or a navy ship and emitting a steady roar. Waves of heat coming off its surface rippled the horizon in the video feed’s image.
Assante and his fellow INL researchers had bought the generator for $300,000 from an oil field in Alaska. They’d shipped it thousands of miles to the Idaho test site, an 890-square-mile piece of land where the national lab maintained a sizable power grid for testing purposes, complete with sixty-one miles of transmission lines and seven electrical substations.
Now, if Assante had done his job properly, they were going to destroy it. And the assembled researchers planned to kill that very expensive and resilient piece of machinery not with any physical tool or weapon but with about 140 kilobytes of data, a file smaller than the average cat GIF shared today on Twitter.
* * *
Three years earlier, Assante had been the chief security officer at American Electric Power, a utility with millions of customers in eleven states from Texas to Kentucky. A former navy officer turned cybersecurity engineer, Assante had long been keenly aware of the potential for hackers to attack the power grid. But he was dismayed to see that most of his peers in the electric utility industry had a relatively simplistic view of that still-theoretical and distant threat. If hackers did somehow get deep enough into a utility’s network to start opening circuit breakers, the industry’s common wisdom at the time was that staff could simply kick the intruders out of the network and flip the power back on. “We could manage it like a storm,” Assante remembers his colleagues saying. “The way it was imagined, it would be like an outage and we’d recover from the outage, and that was the limit of thinking around the risk model.”
But Assante, who had a rare level of crossover expertise between the architecture of power grids and computer security, was nagged by a more devious thought. What if attackers didn’t merely hijack the control systems of grid operators to flip switches and cause short-term blackouts, but instead reprogrammed the automated elements of the grid, components that made their own decisions about grid operations without checking with any human?
In particular, Assante had been thinking about a piece of equipment called a protective relay. Protective relays are designed to function as a safety mechanism to guard against dangerous physical conditions in electric systems. If lines overheat or a generator goes out of sync, it’s those protective relays that detect the anomaly and open a circuit breaker, disconnecting the trouble spot, saving precious hardware, even preventing fires. A protective relay functions as a kind of lifeguard for the grid.
But what if that protective relay could be paralyzed—or worse, corrupted so that it became the vehicle for an attacker’s payload?
That disturbing question was one Assante had carried over to Idaho National Laboratory from his time at the electric utility. Now, in the visitor center of the lab’s test range, he and his fellow engineers were about to put his most malicious idea into practice. The secret experiment was given a code name that would come to be synonymous with the potential for digital attacks to inflict physical consequences: Aurora.
* * *
The test director read out the time: 11:33 a.m. He checked with a safety engineer that the area around the lab’s diesel generator was clear of bystanders. Then he sent a go-ahead to one of the cybersecurity researchers at the national lab’s office in Idaho Falls to begin the attack. Like any real digital sabotage, this one would be performed from miles away, over the internet. The test’s simulated hacker responded by pushing roughly thirty lines of code from his machine to the protective relay connected to the bus-sized diesel generator.
The inside of that generator, until that exact moment of its sabotage, had been performing a kind of invisible, perfectly harmonized dance with the electric grid to which it was connected. Diesel fuel in its chambers was aerosolized and detonated with inhuman timing to move pistons that rotated a steel rod inside the generator’s engine—the full assembly was known as the “prime mover”—roughly 600 times a minute. That rotation was carried through a rubber grommet, designed to reduce any vibration, and then into the electricity-generating components: a rod with arms wrapped in copper wiring, housed between two massive magnets so that each rotation induced electrical current in the wires. Spin that mass of wound copper fast enough, and it produced 60 hertz of alternating current, feeding its power into the vastly larger grid to which it was connected.
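For readers who want the arithmetic behind that last sentence, the link between shaft speed and output frequency follows the standard relation for synchronous machines. The pole count below is inferred from the figures in the text, not stated in it:

```latex
% Synchronous machine frequency: f = P * N / 120,
% where P is the number of poles and N the shaft speed in RPM.
% With N = 600 RPM and f = 60 Hz, solving for P gives
% P = 120 f / N = 12 poles (an inference, not a figure from the text).
\[
  f \;=\; \frac{P \cdot N}{120}
  \qquad\Longrightarrow\qquad
  \frac{12 \times 600}{120} \;=\; 60~\text{Hz}
\]
```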
A protective relay attached to that generator was designed to prevent it from connecting to the rest of the power system without first syncing to that exact rhythm: 60 hertz. But Assante’s hacker in Idaho Falls had just reprogrammed that safeguard device, flipping its logic on its head.
At 11:33 a.m. and 23 seconds, the protective relay observed that the generator was perfectly synced. But then its corrupted brain did the opposite of what it was meant to do: It opened a circuit breaker to disconnect the machine.
When the generator was detached from the larger circuit of Idaho National Laboratory’s electrical grid and relieved of the burden of sharing its energy with that vast system, it instantly began to accelerate, spinning faster, like a pack of horses that had been let loose from its carriage. As soon as the protective relay observed that the generator’s rotation had sped up to be fully out of sync with the rest of the grid, its maliciously flipped logic immediately reconnected it to the grid’s machinery.
The moment the diesel generator was again linked to the larger system, it was hit with the wrenching force of every other rotating generator on the grid. All of that equipment pulled the relatively small mass of the diesel generator’s own spinning components back to its original, slower speed to match its neighbors’ frequencies.
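To make the relay inversion concrete, here is a minimal sketch in Python of the safeguard’s intended behavior versus its corrupted one. Everything in it is an illustrative assumption: the function names, the sync tolerance, and the breaker interface are invented for explanation, and the actual relay firmware and the roughly thirty lines of attack code have never been published.

```python
# Illustrative sketch only: names, threshold, and breaker API are invented;
# the real relay logic and the Aurora attack code are not public.

GRID_FREQ_HZ = 60.0
SYNC_TOLERANCE_HZ = 0.1  # assumed tolerance for "in sync"


class Breaker:
    """Trivial stand-in for a circuit breaker actuator."""

    def __init__(self) -> None:
        self.closed = True

    def open(self) -> None:
        self.closed = False

    def close(self) -> None:
        self.closed = True


def in_sync(generator_freq_hz: float) -> bool:
    """True when the generator's output matches grid frequency."""
    return abs(generator_freq_hz - GRID_FREQ_HZ) <= SYNC_TOLERANCE_HZ


def intended_relay(generator_freq_hz: float, breaker: Breaker) -> None:
    # The safeguard as designed: stay connected only while synced,
    # and isolate the machine the moment it drifts out of sync.
    if in_sync(generator_freq_hz):
        breaker.close()  # safe to share power with the grid
    else:
        breaker.open()   # disconnect to protect the hardware


def corrupted_relay(generator_freq_hz: float, breaker: Breaker) -> None:
    # The Aurora inversion: the same check with its branches swapped.
    # A synced generator is cut loose (and spins up unloaded); an
    # out-of-sync one is slammed back onto the grid.
    if in_sync(generator_freq_hz):
        breaker.open()
    else:
        breaker.close()


if __name__ == "__main__":
    breaker = Breaker()
    corrupted_relay(60.0, breaker)   # synced -> wrongly disconnected
    print("breaker closed?", breaker.closed)  # False: generator cut loose
    corrupted_relay(61.5, breaker)   # out of sync -> wrongly reconnected
    print("breaker closed?", breaker.closed)  # True: slammed back onto grid
```

Each pass through that corrupted check is one cycle of the sabotage the audience watched: disconnect while synced, let the unloaded machine speed up, then reconnect it against the full inertia of the grid.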
On the visitor center’s screens, the assembled audience watched the giant machine shake with sudden, terrible violence, emitting a sound like a deep crack of a whip. The entire process from the moment the malicious code had been triggered to that first shudder had spanned only a fraction of a second.
Black chunks began to fly out of an access panel on the generator, which the researchers had left open to watch its internals. Inside, the black rubber grommet that linked the two halves of the generator’s shaft was tearing itself apart.
A few seconds later, the machine shook again as the protective relay code repeated its sabotage cycle, disconnecting the machine and reconnecting it out of sync. This time a cloud of gray smoke began to spill out of the generator, perhaps the result of the rubber debris burning inside it.
Assante, despite the months of effort and millions of dollars in federal funds he’d spent developing the attack they were witnessing, somehow felt a kind of sympathy for the machine as it was being torn apart from within. “You find yourself rooting for it, like the little engine that could,” Assante remembered. “I was thinking, ‘You can make it!’ ”
The machine did not make it. After a third hit, it released a larger cloud of gray smoke. “That prime mover is toast,” an engineer standing next to Assante said. After a fourth blow, a plume of black smoke rose from the machine thirty feet into the air in a final death rattle.
The test director ended the experiment and disconnected the ruined generator from the grid one final time, leaving it deathly still. In the forensic analysis that followed, the lab’s researchers would find that the engine shaft had collided with the engine’s internal wall, leaving deep gouges in both and filling the inside of the machine with metal shavings. On the other side of the generator, its wiring and insulation had melted and burned. The machine was totaled.
In the wake of the demonstration, a silence fell over the visitor center. “It was a sober moment,” Assante remembers. The engineers had just proven without a doubt that hackers who attacked an electric utility could go beyond a temporary disruption of the victim’s operations: They could damage its most critical equipment beyond repair. “It was so vivid. You could imagine it happening to a machine in an actual plant, and it would be terrible,” Assante says. “The implication was that with just a few lines of code, you can create conditions that were physically going to be very damaging to the machines we rely on.”
But Assante also remembers feeling something weightier in the moments after the Aurora experiment. It was a sense that, like Robert Oppenheimer watching the first atomic bomb test at another U.S. national lab six decades earlier, he was witnessing the birth of something historic and immensely powerful.
“I had a very real pit in my stomach,” Assante says. “It was like a glimpse of the future.”
11
FLASHBACK: MOONLIGHT MAZE
The known history of state-sponsored hacking stretches back three decades before Russia’s hackers would switch off the power to hundreds of thousands of people and two decades before experiments like the Aurora Generator Test would prove how destructive those attacks could be. It began with a seventy-five-cent accounting error.
In 1986, Cliff Stoll, a thirty-six-year-old astronomer working as the IT administrator at Lawrence Berkeley National Laboratory, was assigned to investigate that financial anomaly: Somehow, someone had remotely used one of the lab’s shared machines without paying the per-minute fee that was typical for online computers at the time. He quickly realized the unauthorized user was a uniquely sophisticated hacker going by the name “Hunter” who had exploited a zero-day vulnerability in the lab’s software. Stoll would spend the next year hunting the hunter, painstakingly tracking the intruder’s movements as he stole reams of files from the lab’s network.
Eventually, Stoll and his girlfriend, Martha Matthews, created an entire fake collection of files to lure the thief while watching him use the lab’s computers as a staging point to attempt to penetrate targets including the Department of Defense’s MILNET systems, an Alabama army base, the White Sands Missile Range, a navy data center, air force bases, NASA’s Jet Propulsion Laboratory, defense contractors like SRI and BBN, and even the CIA. Meanwhile, Stoll was also tracing the hacker back to his origin: a university in Hannover, Germany.
Thanks in part to Stoll’s detective work, which he captured in his seminal cybersecurity book, The Cuckoo’s Egg, German police arrested Stoll’s hacker along with four of his West German associates. Together, they had approached East German agents with an offer to steal secrets from Western government networks and sell them to the KGB.
All five men in the crew were charged with espionage. “Hunter,” whose real name was Markus Hess, was given twenty months in prison. Two of the men agreed to cooperate with prosecutors to avoid prison time. The body of one of those cooperators, thirty-year-old Karl Koch, was later found in a forest outside Hannover, burned beyond recognition, a can of gasoline nearby.
* * *
Ten years after those intrusions, Russia’s hackers returned. This time, they were no longer foreign freelancers but organized, professional, and highly persistent spies. They would pillage the secrets of the American government and military for years.
Starting in October 1996, the U.S. Navy, the U.S. Air Force, and agencies including NASA, the Department of Energy, the Environmental Protection Agency, and the National Oceanic and Atmospheric Administration began detecting sporadic intrusions on their networks. Though the interlopers routed their attacks through compromised machines from Colorado to Toronto to London, the first victims of the hacking campaign nonetheless managed to trace the hackers to a Moscow-based internet service provider, Cityline.
By June 1998, the Pentagon’s Defense Information Systems Agency was investigating the breaches, along with the FBI and London’s Metropolitan Police. They determined that the hackers were stealing an enormous volume of data from U.S. government and military agencies: By one estimate, the total haul was equivalent to a stack of paper files as high as the Washington Monument. As the investigators came to grips with the size of the unprecedented cyberespionage operation they were facing, they gave it a name: Moonlight Maze.
By then, it was clear that the Moonlight Maze hackers were almost certainly Russian. The timing of their operations showed that the intruders were working during Moscow daylight hours. Investigators went digging through the records of academic conferences and found that Russian scientists had attended conferences on topics that closely matched the subjects of the files they’d stolen from the U.S. agencies. One former air force forensics expert, Kevin Mandia, even reverse engineered the hackers’ tools, stripping away the code’s layers of obfuscation and pulling out strings of Russian language. (Decades later, Mandia would be John Hultquist’s boss at FireEye, the company that acquired iSight Partners following its similar discovery of Sandworm’s Russian origins.)
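That timing inference is simple enough to sketch. The short Python example below, with invented timestamps rather than actual Moonlight Maze log data, shows how intrusion times recorded in UTC can be bucketed into Moscow local hours to test whether the activity clusters in a workday:

```python
# Hypothetical illustration of the investigators' timing analysis.
# The timestamps below are invented; real Moonlight Maze logs are not public.
from collections import Counter
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

MOSCOW = ZoneInfo("Europe/Moscow")

# Invented intrusion times, as a victim's server would log them in UTC.
intrusions_utc = [
    datetime(1998, 3, 2, 7, 14, tzinfo=timezone.utc),
    datetime(1998, 3, 2, 11, 40, tzinfo=timezone.utc),
    datetime(1998, 3, 5, 6, 55, tzinfo=timezone.utc),
    datetime(1998, 3, 9, 13, 5, tzinfo=timezone.utc),
]

# Bucket each event by its local hour in Moscow.
hours = Counter(t.astimezone(MOSCOW).hour for t in intrusions_utc)
workday = sum(n for h, n in hours.items() if 9 <= h < 18)
print(f"{workday}/{len(intrusions_utc)} events fall in Moscow business hours")
```

Weak on its own, such a clustering becomes persuasive when, as here, it lines up with independent evidence like the conference records and the Russian-language strings in the tools.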
By all appearances, it looked like a Russian intelligence agency was pilfering the secrets of the U.S. government, in what would soon be recognized as the first state-on-state cyberspying campaign of its kind. But proving that the spies were working for the Russian government itself was far more difficult than proving they were merely located in Russia. It was a fundamental problem that would plague hacker investigations for decades to come. Unless detectives could perform the nearly impossible feat of following the footprints of an intrusion back to an actual building or identify the individuals by name, governments could easily deny all responsibility for their spying, pinning the blame on bored teenagers or criminal gangs.