Microsoft responded quietly to an internal database hack four years ago
Microsoft tightened up security after the breach, walling the database off from the corporate network and requiring two authentications for access.
Microsoft Corp’s secret internal database for tracking bugs in its own software was broken into by a highly sophisticated hacking group more than four years ago, according to five former employees, in only the second known breach of such a corporate database.
The company did not disclose the extent of the attack to the public or its customers after its discovery in 2013, but the five former employees described it to Reuters in separate interviews. Microsoft declined to discuss the incident.
The database contained descriptions of critical and unfixed vulnerabilities in some of the most widely used software in the world, including the Windows operating system. Spies for governments around the globe and other hackers covet such information because it shows them how to create tools for electronic break-ins.
The Microsoft flaws were likely fixed within months of the hack, according to the former employees. Yet speaking out for the first time, these former employees, as well as U.S. officials informed of the breach by Reuters, said it alarmed them because the hackers could have used the data at the time to mount attacks elsewhere, spreading their reach into government and corporate networks.
“Bad guys with inside access to that information would literally have a ‘skeleton key’ for hundreds of millions of computers around the world,” said Eric Rosenbach, who was U.S. deputy assistant secretary of defense for cyber at the time.
Companies of all stripes now are ramping up efforts to find and fix bugs in their software amid a wave of damaging hacking attacks. Many firms, including Microsoft, pay security researchers and hackers “bounties” for information about flaws – increasing the flow of bug data and rendering efforts to secure the material more urgent than ever.
In an email responding to questions from Reuters, Microsoft said: “Our security teams actively monitor cyber threats to help us prioritize and take appropriate action to keep customers protected.”
Sometime after learning of the attack, Microsoft went back and looked at breaches of other organizations around that time, the five ex-employees said. It found no evidence that the stolen information had been used in those breaches.
Two current employees said the company stands by that assessment. Three of the former employees said the study had too little data to be conclusive.
Microsoft tightened up security after the breach, the former employees said, walling the database off from the corporate network and requiring two authentications for access.
The dangers posed by information on such software vulnerabilities became a matter of broad public debate this year, after a National Security Agency stockpile of hacking tools was stolen, published and then used in the destructive “WannaCry” attacks against U.K. hospitals and other facilities.
After WannaCry, Microsoft President Brad Smith compared the NSA’s loss to “the U.S. military having some of its Tomahawk missiles stolen,” and cited “the damage to civilians that comes from hoarding these vulnerabilities.”
Only one other breach of such a database at a major software maker has been disclosed. In 2015, the nonprofit Mozilla Foundation — which develops the Firefox web browser — said an attacker had gotten access to a database that included 10 severe and unpatched flaws. One of those flaws was then leveraged in an attack on Firefox users, Mozilla disclosed at the time.
In contrast to Microsoft’s approach, Mozilla provided extensive details of the breach and urged its customers to take action.
Mozilla Chief Business and Legal Officer Denelle Dixon said the foundation told the public about what it knew in 2015 “not only to inform and help protect our users, but also to help ourselves and other companies learn, and finally because openness and transparency are core to our mission.”
The Microsoft matter should remind companies to treat accurate bug reports as the “keys to the kingdom,” said Mark Weatherford, who was deputy undersecretary for cybersecurity at the U.S. Department of Homeland Security when Microsoft learned of the breach.
Like the Pentagon’s Rosenbach, Weatherford said he had not known of the Microsoft attack. Weatherford noted that most companies have strict security procedures around intellectual property and other sensitive corporate information. “Your bug repository should be equally important,” he said.
Alarm Spreads After Internal Probe
Microsoft discovered the database breach in early 2013 after a highly skilled hacking group broke into computers at a number of major tech companies, including Apple Inc, Facebook Inc and Twitter Inc.
The group, variously called Morpho, Butterfly and Wild Neutron by security researchers elsewhere, exploited a flaw in the Java programming language to penetrate employees’ Apple Macintosh computers and then move to company networks.
The group remains active as one of the most proficient and mysterious hacking groups known to be in operation, according to security researchers. Experts can’t agree about whether it is backed by a national government, let alone which one.
More than a week after stories about the breaches first appeared in 2013, Microsoft published a brief statement that portrayed its own break-in as limited and made no reference to the bug database.
“As reported by Facebook and Apple, Microsoft can confirm that we also recently experienced a similar security intrusion,” the company said on February 22, 2013.
“We found a small number of computers, including some in our Mac business unit, that were infected by malicious software using techniques similar to those documented by other organizations. We have no evidence of customer data being affected, and our investigation is ongoing.”
Inside the company, alarm spread as officials realized the database for tracking patches had been compromised, according to the five former security employees. They said the database was poorly protected, with access possible via little more than a password.
Concerns that hackers were using stolen bugs to conduct new attacks prompted Microsoft to compare the timing of those breaches with when the flaws had entered the database and when they were patched, according to the five former employees.
These people said the study concluded that even though the bugs in the database were used in ensuing hacking attacks, the perpetrators could have gotten the information elsewhere.
That finding helped justify Microsoft’s decision not to disclose the breach, the former employees said; in many cases, patches had already been released to its customers.
Three of the five former employees Reuters spoke with said the study could not rule out stolen bugs having been used in follow-on attacks.
“They absolutely discovered that bugs had been taken,” said one. “Whether or not those bugs were in use, I don’t think they did a very thorough job of discovering.”
That’s partly because Microsoft relied on automated reports from software crashes to tell when attacks started showing up. The problem with this approach, some security experts say, is that most sophisticated attacks do not cause crashes, and the most targeted machines — such as those with sensitive government information — are the least likely to allow automated reporting.