From high school grades to thermonuclear crises, Hollywood would have us believe that there is instant access to anyone’s computer system. We grumble at the persistent reports of our personally identifiable information (PII) being hacked, while we blithely give it away in order to get free mobile apps. In truth, a great deal of effort and expense goes into keeping data from being exploited. In spite of firewalls, complex passwords, and other preventive practices, break-ins occur on systems that were thought to be adequately safeguarded. Any system that is maintained (patched) regularly stands a reasonable chance of keeping up with exploits as they evolve. Once support is discontinued, as in the case of Windows XP, Hollywood’s premise becomes more and more likely. Such is the case for a legacy system.
What Is a Legacy System?
Legacy applications, databases, and systems are those that have been carried over from languages, platforms, and techniques developed with earlier (now obsolete) technology. For example, mainframe-era data was physically limited to 80-byte records (on punch cards). To maximize the space, all date fields used a two-digit year rather than a four-digit one. This was not a problem until the “Y2K” scare in the late 1990s, when applications and databases had to be updated to accommodate a four-digit year.
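By way of illustration, here is a minimal sketch of the “windowing” technique used in many Y2K fixes, where a pivot value decides which century a two-digit year belongs to. The pivot of 50 is an assumption for illustration, not a universal standard; real systems chose pivots to suit their own data.

```python
# Minimal sketch of a typical "windowing" fix for two-digit years.
# The pivot of 50 is an illustrative assumption: values 00-49 map to
# 2000-2049, and values 50-99 map to 1950-1999.
PIVOT = 50

def expand_year(two_digit_year: int) -> int:
    """Expand a two-digit year to four digits using a fixed pivot."""
    if not 0 <= two_digit_year <= 99:
        raise ValueError(f"not a two-digit year: {two_digit_year}")
    century = 2000 if two_digit_year < PIVOT else 1900
    return century + two_digit_year

assert expand_year(99) == 1999
assert expand_year(5) == 2005
```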
Many organizations have legacy applications and databases that serve critical business needs. Such a niche need might be a repository of historical data, such as email, or an obsolete cross-platform interface. At some point in the future they will be subsumed by “new and improved” technology. Until then, these old, static systems, with known vulnerabilities, may be broken into and exploited.
Assessing Your Legacy System for Vulnerability
Certain steps can be taken to reduce the vulnerability of any legacy system. First, methodically evaluate all known risks; second, remediate what you can; and third, question the potential points of failure throughout the data flow.
Whether your organization is in the public or private sector, the best methodology for evaluating known risks is the Security Technical Implementation Guide (STIG), provided by the Defense Information Systems Agency (DISA). For example, a database team can set up and execute a package of STIG check scripts. The results are ranked by severity category, which the team can use to prioritize which risks to address first.
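To make that triage concrete, here is a hedged sketch of how a team might group scan output by severity. It assumes a simple CSV export with rule_id, category, and status columns, which is an illustrative format; real scan tools produce their own layouts. The CAT I through CAT III labels are the standard DISA severity categories, with CAT I being the most severe.

```python
# Hedged sketch: prioritize STIG check results by severity category.
# The CSV layout (rule_id, category, status) is an assumption for
# illustration -- actual scan output formats vary by tool.
import csv
from collections import defaultdict

SEVERITY_ORDER = {"CAT I": 0, "CAT II": 1, "CAT III": 2}

def open_findings(path: str) -> dict:
    """Group open (failed) findings by severity category for triage."""
    findings = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["status"].strip().lower() == "open":
                findings[row["category"]].append(row["rule_id"])
    return findings

if __name__ == "__main__":
    grouped = open_findings("stig_results.csv")  # hypothetical file name
    for cat in sorted(grouped, key=lambda c: SEVERITY_ORDER.get(c, 99)):
        print(f"{cat}: {len(grouped[cat])} open finding(s)")
        for rule in grouped[cat]:
            print(f"  {rule}")
```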
Make sure to remediate what you can. Some issues may be corrected without risk to the application or database, such as how frequently a privileged user’s password must be changed. Another important practice is to restrict access privileges to the least required. Perhaps the data at rest and the data in transit can be encrypted. If your site copies production data to a non-production environment for testing, consider Oracle’s Data Redaction feature. The customer, of course, has to decide how much risk they can tolerate. Within the DoD, STIG remediation is a mandatory and high-visibility process.
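As a sketch of what scripted remediation might look like against an Oracle database, the example below uses the python-oracledb driver to apply a password-lifetime limit and revoke an over-broad grant. The profile name, the 60-day lifetime, the revoked grant, and the connection details are all illustrative assumptions; the actual limits should come from your STIG findings.

```python
# Hedged sketch of scripted remediation against an Oracle database
# via the python-oracledb driver. The profile name, 60-day lifetime,
# and revoked grant are illustrative assumptions, not prescriptions.
import oracledb

REMEDIATIONS = [
    # Force privileged-account passwords to expire every 60 days.
    "ALTER PROFILE app_admin LIMIT PASSWORD_LIFE_TIME 60",
    # Least privilege: drop a blanket grant the report user never needed.
    "REVOKE SELECT ANY TABLE FROM report_user",
]

def apply_remediations(dsn: str, user: str, password: str) -> None:
    """Run each remediation statement, logging as it goes."""
    with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
        with conn.cursor() as cur:
            for stmt in REMEDIATIONS:
                print(f"applying: {stmt}")
                cur.execute(stmt)
```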
My third step is simply due diligence: detect and minimize the risk of internal failures. For instance, the team may have a backup procedure that has gone unchanged for years. The log may be checked daily for completion, and we assume it is correct; but when was a recovery last tested? Is there any control over the library of authenticated scripts? Are there any inflows or outflows of data that are not well understood? Wherever feasible, vendors’ security patches should be applied in a timely manner. Oracle, for example, releases a quarterly patch set. Once the changes have been evaluated for potential impact, the patches should be applied at the first opportunity.
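To make the due-diligence idea concrete, here is a minimal sketch of two such checks: confirming that the backup log is fresh and reports completion, and comparing the script library against a manifest of known-good checksums. The paths, the “completed” marker, and the JSON manifest format are all assumptions for illustration.

```python
# Hedged sketch of two due-diligence checks: backup-log freshness and
# script-library integrity. The "completed" marker and the JSON
# manifest format ({name: sha256}) are assumptions for illustration.
import hashlib
import json
import time
from pathlib import Path

MAX_LOG_AGE_HOURS = 26  # a daily job should have run within ~a day

def backup_log_ok(log_path: Path) -> bool:
    """Check that the backup log is fresh and its last line reports completion."""
    age_hours = (time.time() - log_path.stat().st_mtime) / 3600
    last_line = log_path.read_text().strip().splitlines()[-1]
    return age_hours <= MAX_LOG_AGE_HOURS and "completed" in last_line.lower()

def drifted_scripts(script_dir: Path, manifest_path: Path) -> list[str]:
    """Return the names of scripts whose SHA-256 differs from the manifest."""
    manifest = json.loads(manifest_path.read_text())
    drifted = []
    for name, expected in manifest.items():
        actual = hashlib.sha256((script_dir / name).read_bytes()).hexdigest()
        if actual != expected:
            drifted.append(name)
    return drifted
```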
In some circumstances, such initiative may be inappropriate. Until the legacy system is no longer needed, however, such proactive efforts demonstrate to the customers that we are guarding their assets as if they were our own.