GAO’s Report on Equifax: Lessons Learned From the Data Protection Report on Equifax’s Actions


Everyone knows about the Equifax event, but unless you know what GAO actually stands for—and what it does—you probably have not paid any attention to the Government Accountability Office’s report to Congressional Requesters. My new role as cybersecurity counsel (and my curiosity) drove me to read the entire report and to understand why Congress asked GAO to produce it. Here are three lessons worth noting.

Background

First, let’s state the obvious: there are only three major credit reporting bureaus in the United States, and to live successfully here you must use them. Even if you never order a credit report on your own, credit reporting is an unavoidable necessity of modern life, from turning on water at your residence to getting a cell phone to buying a car or house. These for-profit companies have contracts with federal agencies and countless corporations to provide user identification services, e.g., for the IRS, SSA, and US Postal Service. This is precisely why Congress requested the report: we all have to use these companies in some form or fashion, and the Equifax breach affected 145.5 million people.

While no one is suggesting that credit reporting bureaus be federalized and controlled by the government, they should be required to comply with federal rules, such as the Privacy Act of 1974, and applicable financial regulations. These private companies hold more information on each of us than the IRS does, yet they are currently not required to abide by the same laws and rules as the government! Though not foolproof, those laws and regulations exist to help keep data safe. So if you have been on the fence about the importance of cybersecurity in your organization, keep reading for three valuable lessons you can take from the GAO’s report on the Equifax breach.

Lesson One: Subscribe to and act on US-CERT alerts, because if you don’t, hackers will.

The US Computer Emergency Readiness Team (US-CERT) is part of the Department of Homeland Security (DHS) and is responsible for much of that agency’s cybersecurity work. For example, US-CERT publishes vulnerabilities on its website and issues alerts to anyone who subscribes to its email list. Do you know if your IT department receives US-CERT alerts? In Equifax’s case, US-CERT had published a vulnerability in the Apache Struts web framework (CVE-2017-5638) in March 2017. Just two days after the vulnerability was published, hackers scanned Equifax’s systems and found it unpatched. What’s the lesson? Hackers can use published vulnerabilities to scan your systems and immediately verify whether that “door” is open. The government publishes these vulnerabilities because they are likely already known to hostile nation-states and advanced hacker gangs, which means it is entirely the responsibility of potential targets to patch these holes before they are exploited. If US-CERT publishes something, you should immediately check your own systems and patch them as soon as practically possible. What you should not do is sit on the alert and do nothing…
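To make “immediately check your own systems” concrete, here is a minimal sketch of what such a check could look like for this particular vulnerability. It assumes a Java shop that pins its dependencies in Maven pom.xml files; the fixed versions (2.3.32 and 2.5.10.1) come from the Apache Struts advisory for CVE-2017-5638, and everything else (paths, the naive regex) is illustrative only. A real program would rely on a dedicated dependency scanner rather than an ad hoc script.

```python
#!/usr/bin/env python3
"""Illustrative sketch: flag Maven projects that pin a Struts 2 version
predating the CVE-2017-5638 fixes. Not a substitute for a real
dependency scanner, which would also catch transitive dependencies
that this naive regex misses."""

import re
import sys
from pathlib import Path

# First fixed releases per the Apache Struts advisory for CVE-2017-5638.
FIXED = {"2.3": (2, 3, 32), "2.5": (2, 5, 10, 1)}

VERSION_RE = re.compile(
    r"<artifactId>struts2-core</artifactId>\s*<version>([\d.]+)</version>"
)

def parse(version: str) -> tuple:
    """Turn a version string like '2.5.10.1' into (2, 5, 10, 1)."""
    return tuple(int(part) for part in version.split(".") if part.isdigit())

def is_vulnerable(version: str) -> bool:
    parsed = parse(version)
    branch = ".".join(str(p) for p in parsed[:2])  # e.g., "2.5"
    fixed = FIXED.get(branch)
    return fixed is not None and parsed < fixed

def scan(root: Path) -> None:
    for pom in root.rglob("pom.xml"):
        for version in VERSION_RE.findall(pom.read_text(errors="ignore")):
            if is_vulnerable(version):
                print(f"VULNERABLE: {pom} pins struts2-core {version}")

if __name__ == "__main__":
    scan(Path(sys.argv[1]) if len(sys.argv) > 1 else Path("."))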

Lesson Two: Be reasonable with your response times to published US-CERT alerts.

But that’s exactly what Equifax did. The US-CERT alert went out, yet two months later the door to Equifax’s servers was still open. The question to ask is: was this reasonable? Put another way, how much time can reasonably pass before an organization can be held liable for failing to know about a vulnerability and apply the patch? The Federal Trade Commission (FTC) requires companies to act reasonably when it comes to cybersecurity, considering (1) the size and sophistication of the company, (2) the type, sensitivity, and amount of data the company holds, and (3) the cost of controls. Is taking two months to notice and patch a vulnerability that the US government is widely reporting “reasonable”? The FTC has not yet answered this question, but we can anticipate that it, or other agencies, will do so in the near future. And the behavior of organizations holding any kind of sensitive data will shape the FTC’s actions.

It’s difficult to see how Equifax’s actions (or lack thereof) could satisfy any kind of reasonableness standard. Equifax left the proverbial door open for more than two months after US-CERT announced the vulnerability, essentially inviting hackers inside. In fact, those hackers easily gained access thanks to the overlooked, unpatched vulnerability. And once inside, they were able to move laterally within Equifax’s network without anyone noticing. According to the GAO report, hackers moved from the three databases they initially accessed into 48 other databases! They found unencrypted usernames and passwords, which gave them credentials and further access, and then they ran approximately 9,000 queries to pull PII (personally identifiable information) from these databases. Using standard encrypted web protocols, they extracted data for seventy-six days before anyone noticed. According to the company, an Equifax network administrator was performing routine checks, but a misconfiguration allowed encrypted traffic to pass through uninspected. How? A digital certificate, a component of the system that decrypts traffic for inspection, had been expired for ten months before the breach! Once the certificate was renewed and the misconfiguration fixed, investigators inspected network traffic and discovered months of unusual system commands executed entirely outside normal business operations.
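Expired certificates are cheap to catch. Below is a minimal sketch, assuming a scheduled job that reports how many days remain on the TLS certificates of hosts you care about; the hostnames are placeholders, and a production version would feed a monitoring system rather than print. Note that under default verification, a certificate that has already expired fails the handshake, so it surfaces here as a check failure rather than a negative number, which is still a loud signal.

```python
#!/usr/bin/env python3
"""Illustrative sketch: report how many days remain on the TLS
certificates of a list of hosts. Hostnames are placeholders."""

import socket
import ssl
from datetime import datetime, timezone

HOSTS = ["example.com", "portal.example.com"]  # placeholders

def days_until_expiry(host: str, port: int = 443) -> int:
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  1 12:00:00 2025 GMT'.
    not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (not_after.replace(tzinfo=timezone.utc)
            - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    for host in HOSTS:
        try:
            days = days_until_expiry(host)
            status = "RENEW SOON" if days < 30 else "OK"
            print(f"{host}: {days} days remaining [{status}]")
        except (OSError, ssl.SSLError) as exc:
            # An already-expired certificate fails the handshake under
            # default verification, so it shows up here as a failure.
            print(f"{host}: certificate check FAILED ({exc})")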

Lesson Three: Use Equifax’s experience to inform your own cybersecurity and cyber risk management program.

A key component of any reasonable risk management program is identifying and acting on “lessons learned” from events like this. Here are some (but by no means all!) of the “lessons learned” from the Equifax breach:

  • Identification and Notification. Not everyone who needed to know about the US-CERT vulnerability was notified. Why? Because the recipient list for system administrators was out of date, so not all necessary employees received notice. A week after the vulnerability was identified, a routine scan failed to detect it in the online customer dispute portal, which is where the hack originated. Lesson learned: update your distribution lists and make sure you have a process for critical updates to (1) verify whether the update applies (or should apply), (2) verify the update was actually applied, and (3) verify that the update was successful. Though this may be time-consuming, using a two-person system to double-check your critical patches and updates is unquestionably worth the additional effort when the safety of all of your data (and your consumers’ or third parties’ data) is at stake.
  • Detection. The expired digital certificate prevented the network inspection tool from inspecting traffic and detecting the malicious activity. Since the certificate expired before the hack began, traffic was not inspected during the entire breach. Lesson learned: deal with expired certificates, especially when the expiration renders your network inspection tool utterly worthless. Trust but verify—your certificates and your network inspection tools.
  • Segmentation. Individual databases were not separated from one another. The hackers’ initial entry was limited to three databases in the online dispute portal. Connecting databases that probably did not need to be connected allowed the hackers to gain entry to 48 databases—sixteen times as many as they originally accessed! And the expired certificate allowed them to pull PII from ALL 48 databases unnoticed. Lesson learned: just as the principle of least privilege should be applied to employees, you should only connect databases and systems that have a business purpose for being connected. And put a limit on the number of queries—or at least a tripwire to set off an alert—on all databases containing sensitive or valuable data (see the sketch after this list). If someone really needs to run 9,000 queries, then more than one person should be aware of this need. Another option is to build in an auditing mechanism based on the number or subject of the queries.
  • Data Governance. In addition to restricting queries, Equifax failed to limit access to sensitive information, including credentials such as usernames and passwords. One database the hackers reached contained unencrypted credentials, which they leveraged to gain access to other databases. Lesson learned: encrypt credentials, or at least store them in a separate location where they cannot be easily found. At a minimum, this is a great opportunity to practice segmentation. A great case study is the NotPetya virus that shut down Maersk’s operations worldwide (along with many other organizations). What saved the company? An isolated computer in Ghana that was NOT connected to the network.
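The query “tripwire” mentioned in the segmentation lesson is simple enough to sketch. The following snippet is illustrative only: it counts queries per account in a sliding one-hour window and fires an alert when a placeholder threshold is crossed. In practice this logic would live in a database proxy, audit pipeline, or SIEM rule rather than in application code, and the threshold would be tuned to the normal workload.

```python
#!/usr/bin/env python3
"""Illustrative sketch of a query 'tripwire': count queries per account
in a sliding window and alert when a placeholder threshold is crossed."""

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600   # sliding window: the last hour of activity
QUERY_THRESHOLD = 500   # placeholder; tune to the normal workload

_history = defaultdict(deque)  # account -> timestamps of recent queries

def alert(account: str, count: int) -> None:
    # Placeholder hook: page the on-call, open a ticket, block the account.
    print(f"ALERT: {account} ran {count} queries in the last hour")

def record_query(account: str) -> None:
    """Call once per query; fires alert() when the threshold is crossed."""
    now = time.time()
    timestamps = _history[account]
    timestamps.append(now)
    # Drop timestamps that have aged out of the window.
    while timestamps and timestamps[0] < now - WINDOW_SECONDS:
        timestamps.popleft()
    if len(timestamps) == QUERY_THRESHOLD + 1:  # fire once per crossing
        alert(account, len(timestamps))

if __name__ == "__main__":
    for _ in range(600):  # simulate a burst of 600 queries
        record_query("svc-dispute-portal")
```

Against activity on the scale of the roughly 9,000 attacker queries the GAO report describes, even a crude counter like this would likely have tripped long before seventy-six days of quiet extraction.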

Bonus Lesson: Additional fallout from the Equifax hack.

Loss of Contracts. The IRS, SSA, and USPS were all using Equifax services in their own operations for identity proofing. They each responded individually to the Equifax breach because it was unclear to them whether any single federal agency had responsibility for coordinating their actions. That’s frightening. The IRS later canceled a short-term contract it had with Equifax based on the breach (it had already awarded a long-term contract to Experian), and Equifax responded by filing a protest. Equifax lost the protest, but what if the IRS had not already awarded the contract to Experian? It might have continued on without knowledge of the breach, because the previous contract had no breach notification requirement. Agencies like the IRS were using Equifax to verify identity and run credit checks. How much of the PII provided to Equifax by the IRS, SSA, and USPS is Equifax still storing on its servers? And how is it being protected? Given that the credit reporting agencies hold more PII about individuals than the IRS and SSA, shouldn’t they at least follow minimum privacy standards under the Privacy Act of 1974? What about HIPAA? Or GLBA? Or even the self-regulatory PCI data security standard?

SLA Changes. Before this event, Equifax was required to notify the federal agencies of a security breach only if the breach directly involved a system providing services to the federal government, and then within 24 hours. Now the IRS and SSA require notification within one hour of any breach, regardless of the system(s) affected. These contractual changes represent a sea change in how the federal government looks at data security from contractors like Equifax.

DHS. So which federal agency should be helping? DHS is the center for federal information security incident prevention and response. It also assists private and public sector critical infrastructure owners and operators in preventing and responding to cyber incidents. In September 2017, after Equifax went public with its breach, DHS offered forensic analysis and breach response services (Equifax declined and used a private cybersecurity consultant instead). But don’t think this will always be optional! Companies are already answering to Congress about major breaches (Facebook, Google+, British Airways), and it’s likely just a matter of time before Congress starts mandating that federal agencies conduct their own investigations of private companies’ cybersecurity practices (see, e.g., the GDPR and European data protection authorities). In some ways, it’s already happening: the Securities and Exchange Commission took a major step in late September 2018 by fining Voya Financial $1 million over a data breach that exposed the personal data of only 5,600 clients.

About the Author

Melissa Van Buhler

Melissa advises clients in the areas of data breach and reporting, data privacy, intellectual property, online marketing, and commercial/technology transactions.
