The City of Baltimore has yet to fully recover from a ransomware attack that hit the city on May 7th. Important city services have been unavailable for weeks, and it is still not clear when all systems will be fully restored. The city estimates that the direct and indirect costs of the attack will total around $18.2 million. But wait, there’s more: the attackers apparently took advantage of a two-year-old flaw, which means the entire debacle could have been prevented if someone, anyone, on the city payroll had bothered to deploy a free security fix at any point in the past two years. This story is about as ugly as they get, which makes it a perfect example of why it is crucial to follow security best practices at all times.
So what exactly happened in Baltimore?
The ransomware infection forced the city to shut down most of its servers. Luckily, critical services remained operational, unlike a year earlier, when a cyberattack had knocked out Baltimore’s 911 emergency systems. However, important services like 311 experienced disruptions, while many other systems, including the email systems of the city council and the police department, were rendered unavailable. It quickly became clear that the ransomware crippling the city was RobbinHood, a strain of file-encrypting malware that had been around for less than two months. Infected computers displayed a note demanding a ransom of 3 Bitcoins per system (about $17,600 at the time). Alternatively, the city could pay 13 Bitcoins (about $76,280) to regain access to all impacted systems. Payment had to be completed within four days to avoid a price hike and no later than ten days after the infection.
The city refused to pay. Instead, it “shifted operations into manual mode and established other workarounds to facilitate the continued delivery of services to the public,” Mayor Bernard C. “Jack” Young stated ten days later, adding that the city was cooperating with an FBI investigation into the attack. Meanwhile, many systems remained offline. With the city’s recovery efforts well into their third week, the New York Times added an unexpected twist to the story by reporting that “a key component of the malware that cybercriminals used in the attack was developed at taxpayer expense […] at the National Security Agency.” The threat actors supposedly gained access to Baltimore’s network by taking advantage of EternalBlue, a leaked NSA exploit that has been used in a number of major cyberattacks, including the global WannaCry ransomware outbreak of May 2017.1 A day after the NYT story was published, Baltimore’s city council president announced that he was seeking a federal emergency and disaster declaration. If granted, the Federal Emergency Management Agency (FEMA) would use its budget to reimburse the city for the costs it has incurred as a result of the incident.
In short, as a result of Baltimore’s failure to install a critical security patch within two years after it was issued, the city suffered a cyberattack so devastating that it may soon be considered a federal disaster.
The tragic tale of Baltimore touches upon a larger problem that goes way beyond a single unpatched flaw at a single organization. Industry research shows that 27% of organizations have experienced a security breach that was made possible by an unpatched vulnerability in their environment. The Baltimore incident thus highlights structural shortcomings in vulnerability management and patch management that are putting many companies at risk.
The poor state of vulnerability and patch management
According to an Edgescan report (PDF), organizations take an average of 77.5 days to patch or otherwise mitigate newly discovered application-layer vulnerabilities; for infrastructure flaws, the figure is 81.75 days. The average mitigation times for critical vulnerabilities are slightly lower, but still well over two months, giving attackers ample time to develop and use exploits for these flaws. Recent research by Kenna paints an even bleaker picture. It shows that the average organization needs four weeks to address 25% of the vulnerabilities affecting its systems and close to 100 days to fix just half of them, while at least 25% of vulnerabilities remain unremediated a full year after discovery. Flaws in Microsoft products tend to get fixed a little faster, but organizations still need 37 days on average to install patches for just half of these vulnerabilities and 134 days (about 4.5 months) to address 75%.
A SANS survey (PDF) shows that these delays often result from inadequate patch management strategies. Most organizations (57.5%) issue patches only once a month. In other words, for the majority of firms it is standard practice to give threat actors up to a month to exploit newly discovered vulnerabilities in their environment. And if a specific fix misses the monthly patch cycle for whatever reason, the window of exposure is extended by another month. Barely one in four organizations (24.9%) applies patches on a weekly basis, while 7.7% of companies actually work with quarterly patch cycles. From a security perspective, patching vulnerable systems only four times a year is nothing short of absolute madness.
But patch management is only part of the story, because in order to fix vulnerabilities, organizations need a complete picture of their digital assets and of the security flaws affecting those assets. This is where vulnerability management comes in. Unfortunately, only 54.7% of companies have a formal vulnerability management program (SANS). Nearly three in ten firms (29.1%) rely on an informal program, which is risky, while 16% don’t have a program at all, which is also absolute madness. Furthermore, a recent Tripwire report (PDF) indicates that many firms struggle to maintain visibility into their attack surface by keeping track of the devices and applications on their network. While 22% of firms can detect changes to connected hardware and software within minutes and 37% achieve this within hours, one in three firms (33%) needs longer than that, and 11% are not able to keep track of these changes at all, which is, you guessed it, absolute and utter madness.
What you can do to stay safe
Monitor your attack surface – A crucial first step toward protecting your organization is keeping track of the systems that you are exposing to the Internet, since these can provide hackers with an initial foothold on your network if they are vulnerable. A great tool for this is Vonahi’s ExposureScout. This attack surface intelligence solution is available for FREE or as a monthly service. For more information see: https://www.vonahi.io/resources/internet-exposure-scan
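As a rough illustration of the idea (not a substitute for a dedicated service like ExposureScout), the minimal Python sketch below checks a hypothetical list of public-facing hosts for a handful of ports that rarely belong on the open Internet. The host list and port selection are placeholders; adapt them to your own environment and only scan systems you own or are authorized to test.

```python
# Minimal sketch: flag risky ports that are reachable on your public-facing hosts.
# The host list and port selection below are illustrative placeholders.
import socket

HOSTS = ["203.0.113.10", "203.0.113.11"]                 # placeholder public IPs
RISKY_PORTS = {445: "SMB", 3389: "RDP", 23: "Telnet", 21: "FTP"}

def is_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in HOSTS:
    exposed = [f"{port}/{name}" for port, name in RISKY_PORTS.items() if is_open(host, port)]
    if exposed:
        print(f"[!] {host} exposes: {', '.join(exposed)}")
    else:
        print(f"[+] {host}: none of the checked ports are reachable")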
Follow best security practices, including a proper patch management strategy – Vulnerable systems need to be patched in a responsible and timely manner to avoid scenarios like the one currently playing out in Baltimore. For more information, check out our recent whitepaper on cyber security best practices for small to mid-sized businesses: https://www.vonahi.io/resources/whitepapers/top-10-cyber-security-best-practices-for-smbs/
Conduct regular vulnerability assessments – Networks are like living organisms in that they are constantly changing: a new device is added here, new software is installed there, while outdated assets are removed (or should be, at least). Vulnerability scans allow you to keep track of your environment and of the weak spots that could put you at risk, and you can use the findings to fix these issues and improve your patch management program. In Baltimore’s case, a simple vulnerability scan would have uncovered the city’s EternalBlue Achilles heel.
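To make that last point concrete, here is a minimal sketch of how such a check could be scripted, assuming Nmap is installed and on the PATH. It wraps Nmap’s smb-vuln-ms17-010 script, which tests whether a host is missing the MS17-010 patch that blocks EternalBlue. The host list is a placeholder and the output parsing is deliberately naive; as always, only scan systems you are authorized to assess.

```python
# Minimal sketch: check hosts for MS17-010 (EternalBlue) with Nmap's
# smb-vuln-ms17-010 NSE script. Requires nmap to be installed and on PATH.
import subprocess

HOSTS = ["10.0.0.5", "10.0.0.6"]   # placeholder internal addresses

for host in HOSTS:
    result = subprocess.run(
        ["nmap", "-p", "445", "--script", "smb-vuln-ms17-010", host],
        capture_output=True, text=True, check=False,
    )
    # Naive check of the script output; a real scanner would parse XML (-oX) instead.
    if "VULNERABLE" in result.stdout:
        print(f"[!] {host} appears vulnerable to MS17-010 - patch immediately")
    else:
        print(f"[+] {host}: no MS17-010 finding reported")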
Consider penetration testing – By simulating realistic attack scenarios, penetration tests expose weak links in your cyber defenses that would not be uncovered during a vulnerability scan because they lie below the surface. Penetration tests also provide valuable insights into the actual damage that threat actors can inflict by exploiting certain flaws. Vonahi's vPenTest is an automated network penetration testing platform that makes penetration testing faster, more affordable, and more accurate, allowing companies to schedule a pentest with the click of a button. For more information see: https://www.vonahi.io/services/vpentest
Final note: Patch BlueKeep to prevent another WannaCry
In recent weeks, Microsoft has issued two warnings about BlueKeep (CVE-2019-0708), a critical security flaw in the Remote Desktop Protocol (RDP) implementations of Windows 7, Windows Server 2008, and older Windows operating systems. Because BlueKeep is a "wormable" flaw, threat actors could use it to launch a massive, self-propagating attack similar to the 2017 WannaCry outbreak. Even though Microsoft issued a patch for the flaw on May 14, a recent analysis found that close to a million devices may still be vulnerable. It is therefore crucial that you patch this flaw on any and all vulnerable systems in your organization.
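As a starting point, the sketch below (a simple exposure check, not a vulnerability test) sweeps a placeholder subnet for hosts with RDP reachable on TCP/3389, so you know where to verify that the May 14 patch has been applied or that RDP has been disabled. Replace the subnet with your own ranges and confirm patch status per machine through your normal patch management tooling.

```python
# Minimal sketch: find hosts with RDP (TCP/3389) reachable so BlueKeep patching
# can be prioritized. Reachability alone does not prove a host is vulnerable.
import ipaddress
import socket

SUBNET = "192.0.2.0/28"   # placeholder range; replace with your own network

for ip in ipaddress.ip_network(SUBNET).hosts():
    try:
        with socket.create_connection((str(ip), 3389), timeout=1.0):
            print(f"[!] RDP reachable on {ip} - confirm the May 14 patch is installed")
    except OSError:
        pass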
1 The NYT article has been the subject of intense debate across the security and intelligence communities because it argues that by designing a highly potent cyberweapon, and subsequently failing to keep the tool out of the hands of threat actors, the NSA is at least partially responsible for what happened in Baltimore. While we agree that this is an important issue, it’s beyond the scope and point of this article.
About Vonahi Security
Vonahi Security is a cybersecurity consulting firm that offers modern consulting services to help organizations achieve both compliance and security best practices. With over 30 years of combined industry experience in both offensive and defensive security operations, our team of certified consultants has experience working with a significant number of organizations, industries, networks, and technologies. Our service expertise includes Managed Security, Adversary Simulations, Strategy & Review, and User Education & Awareness. Vonahi Security is headquartered in Atlanta, GA. To learn more, visit www.vonahi.io