It’s a funny thing, all this emphasis on risk assessments and risk management. Listen to some parties and you’d think this is some magical whiz-bang marvel of engineering and decision analysis that will save us all. And yet, when you get right down to it, very few people seem to truly understand what it is they are talking about. If I had a nickel for every time someone asked about risk assessments and yet couldn’t describe a basic risk management process, I’d probably have enough money for a bottle of Eagle Rare 10.
Why is it that we seem inclined to overcomplicate what should be downright simple for most daily circumstances? First, there’s a lack of general understanding, education, and awareness about what is really involved in risk management. Second, because of our general ignorance on the subject (within the IT universe), risk management takes on an almost mythical quality. And finally, given our lack of understanding and our almost ideological worship of those who seem to know what they’re talking about, we end up with a combination of analysis paralysis (being unable to make a decision because we can’t face the basic decision-making objective given the dearth of data) and hopeless despair.
What’s an IT or infosec professional to do? And, heaven forbid we push this topic up the food chain!
To me it seems like there are three basic components we can use to make a reasonable assessment, and we need not even escalate too far into formality for most everyday decisions so long as we can articulate a foundational rubric based on these three pieces. Specifically, it is my belief that data classification and relative importance, evaluation of relative environmental exposure, and estimation of relative vulnerability severity can provide a quick shorthand for determining whether a concern needs to be addressed or a tool deployed.
Allow me to elaborate…
Data Classification and Relative Importance
At its roots, this topic equates to what might formally be described as a business impact analysis (BIA). In a BIA, which is a major component of the “context” stage of the risk management process (if you subscribe to the ISO 31000 model), you’re establishing a baseline relative importance and sensitivity around a system or environment.
For example, if you’re dealing with regulated data (PII, financial details, payment card data, etc.), then you’re automatically going to amplify your attention to the environment (or you should), because a loss of any sort in this space will have a material impact on your business (through lost business, fines/judgements, reputation damage, or various other secondary effects).
On the flip side, if you are looking at a system that’s merely brochureware and not particularly connected to anything, then you may not place too high of a general value on the data or environment, even if a compromise or defacement could result in minor reputation damage.
It’s interesting, in speaking with people about risk management and risk assessments, that they often jump right over this all-important context-setting step. If you don’t know the relative importance of an environment to the business, then how can you even begin to evaluate the appropriate level of security controls or practices or tools to deploy against it?
On the flip side, starting with a look at the relative importance of an environment can be highly enlightening for areas where we might assume a nominal “risk” to the business. For example, while we might naturally expect that the point-of-sale (POS) systems for a retailer are particularly important, do we also include the network ports as part of that evaluated system/environment? What if someone plugged into that network port? Oops… now you’re allowing access to what may otherwise be a dedicated, segregated payment card environment.
Defining, understanding, and evaluating context is very important. Data classification policies can provide a key starting point, but a rubric should be developed overall to help quickly baseline a BIA for a given environment to help make better, more defensible decisions.
Relative Environmental Exposure
No, this is not a comment on nudity or the potential inappropriateness of trade show marketing booth personnel. What we’re talking about is understanding where a system and/or data resides within the environment, how it moves around, how it’s accessed, and what sort of controls might exist on that access. For example, a database that holds very sensitive data, but that exists within an isolated enclave on RFC 1918 IP space, may be less vexing for your organization than a web app based on Tomcat and Java that sits on a public-facing DMZ.
One of the great annoyances I’ve experienced over the years is reviewing audit or “security assessment” reports that are thinly veiled vulnerability scan reports with no environment context whatsoever. Really, auditor, you think that old NT4 box that’s sitting on a cart in a lab with no ingress or egress access represents a “critical risk” to my environment? I submit you don’t actually understand what a “risk” really is if your conclusion of “high” or “critical” “risk” is based purely on a vulnerability score (or CVSS rating) and doesn’t include understanding the environment exposure (which is, btw, a major component of CVSS, if you do it properly).
Once you understand the relative importance of the data and/or environment, then understanding where that is placed within your overall IT architecture is key to helping estimate “risk,” which means looking at how exposed that environment is, either directly or indirectly. Note the last part there (“indirectly”), because it’s a very important nuance to this component. Just because an environment isn’t directly exposed on the internet does not mean that it isn’t still exposed. Poorly written apps can open the door to various types of attacks, such as SQL injection and cross-site scripting (XSS). If calls can be forwarded or chained through to the back-end system/environment that holds the important information, then you must still account for that degree of exposure.
Relative Vulnerability Severity
Last, but not least, is everybody’s favorite game: How bad a bug is it, really?
We’re constantly buried under a deluge of the vulnerability of the week/day/hour/minute (Adobe Flash, anybody???), yet within context, just how bad is it, really? For example, if you’ve banished Flash from your environment completely, then do you care about the latest Flash vulnerability? Obviously not (so long as you know that nobody has installed it on their own).
The same general rules apply to your production environments. If you know the relative sensitivity and importance, and you know the relative exposure (direct and indirect), then vulnerability (or weakness) severity provides the final piece in the risk assessment puzzle. Is this a low-grade data enumeration weakness, or are we concerned about instant, deep compromise of a system (or data)? Are we talking about the crown jewels, on display for the whole world, there for the taking? That’s perhaps the right time to move double-time. Added complexity and steps required to execute the compromise? Then you likely have some breathing room.
These three general concepts – data classification and relative importance, relative environmental exposure, and relative vulnerability severity – work together to form a functional concept of “risk” within your IT environment. At the end of the day, this is a very important objective to achieve, as it will improve decision quality and the overall defensibility of decisions.
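To make the shorthand concrete, here’s a minimal sketch of how the three relative ratings might combine into a rough tier. Everything here – the three-point scales, the multiplicative score, the cutoffs, and the tier names – is an illustrative assumption for this sketch, not a standard; the point is only that all three factors feed the answer, and any one factor near the bottom drags the whole thing down.

```python
# Illustrative sketch of the three-component rubric described above.
# Scales, weights, and thresholds are assumptions -- tune to your environment.

IMPORTANCE = {"low": 1, "medium": 2, "high": 3}          # data classification / BIA
EXPOSURE = {"isolated": 1, "internal": 2, "public": 3}   # direct or indirect reachability
SEVERITY = {"low": 1, "medium": 2, "high": 3}            # vulnerability severity

def risk_tier(importance: str, exposure: str, severity: str) -> str:
    """Combine the three relative ratings into a rough action tier."""
    score = IMPORTANCE[importance] * EXPOSURE[exposure] * SEVERITY[severity]
    if score >= 18:                    # crown jewels, exposed, serious bug
        return "act now"
    if score >= 6:
        return "schedule remediation"
    return "monitor"

# The NT4 box on a cart: "critical" scan finding, but isolated and low-value.
print(risk_tier("low", "isolated", "high"))   # -> monitor
# Sensitive data on a public-facing app with a serious flaw.
print(risk_tier("high", "public", "high"))    # -> act now
```

The multiplicative form is one deliberate choice here: it means a critical vulnerability on an isolated, low-value box scores low, which is exactly the pushback to the “NT4 on a cart” audit finding above.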
Note, by the way, that none of this discussion really touches on risk remediation, selection and implementation of tools, etc. Those steps should come later, and should naturally follow discussion of the key components discussed above. As a general rule, discussion of security controls of any sort should come after context-setting and risk assessment, when a reasonably complete picture has been formed.
For information on how you can prevent your organization from being breached, visit www.miltonsecurity.com or call 714-515-4011.