Friday, March 20, 2009

Mapping the Unknown Unknowns

There comes a time in an InfoSec professional’s career when they’re forced to do a risk assessment. I know, they’re a big pain in the butt and no one ever reads them, but some people seem to think they’re kind of important1. I say if you’re going to do it, you might as well get some use out of the thing.

First of all, I’m not going to explain some formal risk assessment methodology. There are far too many other sources out there for that. What I am going to talk about is the general stance you bring to an analysis. As the poet Rumsfeld said, how do we deal with the unknown unknowns? This is where your prejudices can color an analysis and you could miss something important. Hopefully, by better defining the known unknowns, we can shrink the size of the unknown unknowns. Here’s where I start:

Who is qualified to be working on this?
1. You? Do you really understand what is going on here? Were you paying careful attention to what was presented? One way to check yourself is to paraphrase things back. Seriously, I can’t tell you how many times I’ve started solving the wrong problem simply because I misunderstood what I was being told.

2. Are the people giving you data qualified to give you what they’re giving you? Nothing seems complicated to the person who doesn’t know what they’re talking about.

How are people involved?
1. Generally, the more people are involved, the greater the chance of error. And hastily implemented automation can magnify that.

2. Will people have the opportunity to take reckless actions? Recklessness boils down to knowing what a reasonable person would do, knowing the possible outcomes, and then doing the dangerous thing anyway. I’m willing to say this is somewhat uncommon in infosec, because people rarely understand what a reasonable person should be doing, or the real probability of a bad outcome.

3. Speaking of reckless, how can someone’s personal misjudgment compromise the entire operation? For example, one guy surfing porn could bring down a costly lawsuit. You need to be aware of whether those kinds of situations exist in whatever you’re examining.

4. Can you truly say you understand all of the user’s intentions, all of the time? Unless you’re Professor Charles Xavier, this is another unknown that should be considered.

How is technology involved?
1. Software will always be buggy; hardware will always eventually fail; and operational and project emergencies will always occur. What happens when they do?

2. If you’ve got a track record of the technology involved, it’s helpful to look not just at the failures but also at the “near misses”. How many close calls were there with that tech, and what could have happened if it had gone pear-shaped? Just because it has worked up to now doesn’t mean it will keep working.

3. How polluted is the technology? Is it well-maintained and well-understood? What are the dependent linkages? How many moving parts, software or hardware? How resilient is the system to delays or failures? How many outside parties have their fingers in the system? Are you sure you’re aware of all the outside parties and linkages?

Some specific known unknowns about technology
1. The systems you don’t know about
2. The data that you didn’t know existed
3. The systems storing data that shouldn’t be on that system
4. The connections you don’t know about
5. The services on those systems that you don’t know about
6. The accounts, privileges or firewall rules that you don’t know about
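You can chip away at a couple of these (the unknown services and connections, anyway) by diffing what a host says it's actually running against whatever inventory you think you have. Here's a minimal sketch of the idea in Python, assuming a Linux box with the ss utility and a hand-maintained inventory file; the file name and its one-line-per-service format are made up for illustration.

#!/usr/bin/env python3
# Diff the services actually listening on this host against a hand-maintained
# inventory. Assumes Linux with the 'ss' utility; the inventory file name and
# its "port/proto description" format are placeholders, not a standard.
import subprocess

INVENTORY_FILE = "known_services.txt"   # e.g. a line like "443/tcp  web frontend"

def known_ports(path):
    # Parse the inventory into a set of "port/proto" strings.
    ports = set()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                ports.add(line.split()[0])      # first field is port/proto
    return ports

def listening_ports():
    # Ask the kernel (via ss) what is actually listening right now.
    out = subprocess.run(["ss", "-lntu"], capture_output=True, text=True).stdout
    found = set()
    for line in out.splitlines()[1:]:           # skip the header row
        fields = line.split()
        if len(fields) < 5:
            continue
        proto, local = fields[0], fields[4]     # e.g. "tcp", "0.0.0.0:8080"
        port = local.rsplit(":", 1)[-1]
        found.add(port + "/" + proto)
    return found

if __name__ == "__main__":
    for svc in sorted(listening_ports() - known_ports(INVENTORY_FILE)):
        print("UNKNOWN SERVICE: %s is listening but isn't in %s" % (svc, INVENTORY_FILE))

Anything that script coughs up goes straight onto the known-unknowns list, and the same diff-against-inventory trick works for accounts, firewall rules, and scheduled jobs.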

Conclusions
These are all things that you will need to account for when you’re doing a risk analysis and filling out those worksheets or forms. And hopefully the solution deals with these things in one way or another – if nothing else at least accepting the risk that these things exist and crossing your fingers.

All of this stuff can be a lot to keep in your head, but I’ve extracted a few insights from the process to keep me on track:

o It will not always be obvious which technologies or processes are relevant to the security of a system. Follow the money (or data, or control).

o It is difficult to maintain a secure, operational system in a changing environment. Assume things will get broken and be prepared to deal.

o Listen to complaints. Make sure there is a way for complaints to get to you, from both the people and the systems. Even if the complaints are wrong, they’re complaining for a reason. Figure out the reason.

o There will always be people in positions of trust who can hurt you occasionally

o Security policies should allow for workarounds

o Workarounds will create vulnerabilities

o There will always be residual risks

o Assume everything is insecure until proved otherwise (see name of blog)


1 Okay, I’m kidding and you know it. You can probably get through your entire career without doing risk assessments. Just keep buying firewalls and hope for the best.

Wednesday, March 18, 2009

Build vs Buy - the auditor's perspective

Sat through a comprehensive demo of IBM's Tivoli Compliance Insight Manager. Overall, the product is another SIEM, which means it aggregates logs from a wide variety of servers and lets you write queries against the data. In short, if your servers are configured to see something and log it, then you can alert and report on it. That's all well and good.

Here's my problem - my requirements include pretty tight change control oversight. I need to be able to confidently tell auditors that I am aware of all unauthorized changes to my systems. Now here's where the rubber meets the road: our team developed a customized change control monitoring system that's part log scraper, part file watcher (a la Tripwire) with some dashes of config dumping-&-diffing. It's laser-focused on our environment, our apps, and the types of work (and mistakes) our Operations team does. It produces a mostly readable daily report that gives me a very accurate answer to the question "what changed yesterday". Even when the system has problems, the data is still captured and flags about the errors are usually thrown.
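(If you're curious, the file-watcher piece of something like that doesn't have to be exotic. Below is a bare-bones Python sketch of the Tripwire-ish part: hash the files you care about, keep yesterday's baseline, report what changed. The watch list and baseline path are placeholders, and the real system does plenty more - log scraping, config dumping-&-diffing - so treat this as an illustration of the approach, not our actual tool.)

#!/usr/bin/env python3
# Bare-bones Tripwire-style watcher: hash watched files, diff against
# yesterday's baseline, print a "what changed" report, save a new baseline.
# WATCH_DIRS and BASELINE are illustrative placeholders.
import hashlib, json, os

WATCH_DIRS = ["/etc", "/usr/local/bin"]
BASELINE = "/var/lib/filewatch/baseline.json"

def hash_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot():
    # Walk the watch list and hash everything; flag unreadable files instead
    # of silently skipping them.
    state = {}
    for top in WATCH_DIRS:
        for root, _, files in os.walk(top):
            for name in files:
                path = os.path.join(root, name)
                try:
                    state[path] = hash_file(path)
                except OSError:
                    state[path] = "UNREADABLE"
    return state

if __name__ == "__main__":
    old = {}
    if os.path.exists(BASELINE):
        with open(BASELINE) as f:
            old = json.load(f)
    new = snapshot()

    for path in sorted(set(old) | set(new)):
        if path not in old:
            print("ADDED    " + path)
        elif path not in new:
            print("REMOVED  " + path)
        elif old[path] != new[path]:
            print("CHANGED  " + path)

    os.makedirs(os.path.dirname(BASELINE), exist_ok=True)
    with open(BASELINE, "w") as f:
        json.dump(new, f)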

But, and this is a big BUT - when auditors see the report and see that we developed this system in-house, they suddenly become very inquisitive. "Oh, it's home-grown. Well, we need to test it." To them, it's not trustworthy. Every piece of the system is in question. Okay, that's understandable and we do our best to deal.

However, if I were to buy this IBM system (or any professional system), would the auditors feel the same way? One would hope they would have some doubts about how the system was implemented and how accurately it monitors. So far in my survey of the vendor landscape for these types of products, I've found that no single product has the monitoring coverage we need. So if I were to buy one system (and I really could only afford a single system of this magnitude), I know for a fact that I'll be missing about 20% of the changes being made on my network.

What I wonder is this: what is the real value of one of these professional change management tools? I suspect it's the trustworthiness of the brand name. I know I've been through this argument before with open-source homemade firewalls versus professional products, but at least those products go through some kind of testing (Common Criteria, ICSA, etc.). Moreover, that still doesn't address the concept of "best fit." We all know that in-house works better (but can be more costly to maintain) than COTS products.

For the matter of change control, I felt that best fit was more important, since I needed (according to the auditors) to be able to confidently assert that I was aware of all changes. If I bought something off the shelf, I wouldn't be able to assert that (it would only catch 80%). I could buy something and then implement some homegrown stuff for the remaining 20%, but frankly, the effort on our part is about the same as just writing the whole thing ourselves. Plus we have the added bonus of being able to adapt to infrastructure changes better than a canned product.

I wonder how many auditors out there will see the product with its fancy dashboards and professional reports, check the box "monitoring - compliant", and never question how well the system fits the environment? I bet a whole lot more than those who will needle me relentlessly on the effectiveness of our internally-developed system. So the real question becomes: is the cost of a canned product worth the cost of making the dimmer auditors leave me alone?

Tuesday, March 3, 2009

Snappy answers to vendor bullwash.

I hate dealing with slippery vendors, especially the ones who will be handling our confidential data. Here are some snappy answers to their weaselly questions.

Q1) "No one has ever asked these questions before."

A1) "Either you haven't been as clear with me as you've been with others, or no one else has been as thorough in their investigation as we are. Now can you please answer the question?"


Q2) "Look, BIG-COMPANY-NAME does business with us and they don't have any problems, so why do you?"

A2) See A1


Q3) "Why are you asking for that? Legally, we're only obligated to do half of that."

A3) "Because my requirements exceed the general legal obligations and fall under tighter compliance regimes such as HIPAA, PCI, etc."


Q4) "Sure, we do that all the time. But look, we can't modify our agreements to show that. It's too much legal overhead, especially since we use the same contract for everyone. But I promise, we'll actually do that."

A4) "How about we don't sign any agreement at all? But don't worry, we promise to pay you on time."


Q5) "Here is our SAS-70 management report. And we get quarterly pen-tests too. Aren't we great?"

A5) "I'm very impressed by all your certifications and audits. Can I see the actual reports instead of just the executive tear-off? Can I share the reports with my external auditors?"


Q6) "Oh, we don't have any third-party risk management practices simply because we don't use any third-parties. Why would we trust a third-party ever?"

A6) "Who cleans your offices? Do you run your own Internet and phone cables? Do you manufacture all your own software and hardware?"


Q7) "Oh, that item in the agreement? That's just in there because legal made us put it in. We've never had to invoke it."

A7) "If it's not going to be invoked, then remove it. Otherwise my legal team will insist that we treat that requirement as if it will be invoked. So we need to clarify what is going on here a lot more."


That's all I could come up with off the top of my head. I'm sure I'm missing some classics. Feel free to leave your own snappy answers in the comments.