Thursday, August 2, 2012

Hofstadter's Corollary on Remediation


Any long-time reader of this blog knows that when I get a pen-test or vulnerability scan report, I am usually displeased.  Among the things that get under my skin are the apparently wild-guess remediation estimates they put in for discovered problems.*


These estimates are usually done by a third party (consultant, auditor, industry busy-body) who has not asked any detailed questions about my operational, development, or business processes. They rarely even understand the business value of the service they're evaluating. They have no clue about the current workload or pipeline. Yet somehow they can pop off in a report that goes to my boss, my customers, and my regulators, proclaiming that it shan't be any trouble 'tall to fix.

Now any of us in the security business for more than a year knows to take anything an outsider puts in a report with a grain of salt or two.  Unfortunately, our boss, our customers and our regulators tend to take these reports as gospel.

And then I hafta splain why it take so darned long to fix those things. I swear, sometimes I feel like Admiral Hopper explaining nanoseconds.

So here I present "Hofstadter's Corollary on Remediation".

Let's start with Hofstadter's Law, which is:

"It always takes longer than you expect, even when you take into account Hofstadter's Law."

Hofstadter's Corollary on Remediation states that remediation efforts always take longer than an outsider estimates, even when the outsider takes Hofstadter's Corollary on Remediation into account.

But why is this? Let's unpack. Here's an example:

"Finally, let's estimate that each SQL injection vulnerability will require 40 developer hours x $100 per hour to fix, or about $4,000 total in labor costs." - Well Respected Industry Smart Person

Variable and fuzzy costs

Is that all it takes?  Well, no.  Let's put aside opportunity cost - which means you stall the development pipeline of requested customer features - a large but kinda fuzzy cost.

And let's also put aside the flippant assumption that fixing a SQL injection vulnerability is just that simple and that there are no major underlying software infrastructure components that need to be reworked. It happens, but let's leave that aside because it's also hard to quantify until you dig into the particulars.

And let's also not include the possibility that an organization has vulnerabilities because the whole development process is fubar - which is actually likely in places with lots of discovered vulnerabilities, thanks to Boulding's Backward Basis - but that's also pretty variable, so let's leave that aside too.

Known costs

So, let's focus on the tangible costs. For one, most organizations that care enough about security in their products usually also have well-defined operational processes.

And some of us in highly regulated industries can't take advantage of new-fangled rapid deployment models.

So any code change means it goes through release planning, functional requirements capture, use case development, specification development, specification review, test plan development, test case development, coding (the only part covered by the quoted estimate above), code review, feature testing, regression testing, integration testing (sometimes against KLOCs of code and third-party libraries), readiness review, release management, package development, documentation of the release, site deployment to user acceptance testing, site deployment to production in an acceptable change window, and site deployment to recovery sites.  Each one of these steps is at least a few hours of someone's time. And most painful of all, this whole wheel takes several months to turn... even for a seemingly minor code change.
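
To see how the quoted arithmetic falls apart, here's a rough back-of-the-envelope sketch. Every number in it is an invented placeholder (not a measurement from any real shop), but even soft guesses show the 40-hour coding line item is a small slice of the total:

```python
# Rough sketch of the full cost of the "40 developer hours" fix.
# Every hour figure below is an invented placeholder, not a measurement.
HOURLY_RATE = 100  # dollars per hour, from the quoted estimate

steps = {
    "release planning": 4,
    "requirements, use cases, specification, and reviews": 16,
    "test plan and test case development": 12,
    "coding (the only part the quoted estimate covers)": 40,
    "code review": 4,
    "feature, regression, and integration testing": 40,
    "readiness review, release management, packaging, docs": 12,
    "deployment to UAT, production window, and recovery sites": 12,
}

total_hours = sum(steps.values())
print(f"Quoted estimate: 40 hours / ${40 * HOURLY_RATE:,}")
print(f"Sketch total:    {total_hours} hours / ${total_hours * HOURLY_RATE:,}")
# Even with these made-up numbers the total is several times the
# quoted $4,000 -- and it says nothing about calendar time.
```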

In other words, remediation will slam right up into the immovable object of business requirements. 

A simpler example of this - you find a vulnerability on your mail site - do you fix it immediately, but in so doing shut off your CEO's email while he's at a customer site in a tense negotiation?  Ummm. It's not an easy call, nor should it be.

But wait, there’s more

Suppose you find something that appears minor, like a single XSS on a help screen.  Well, fine and dandy, a quick fix. But wait, your developers just certified their product as 100% OWURST compliant with the Brand Spanking Impenetrable Cross-Site Scripting Defense Force-field.  So how did this XSS bug slip through?  You may have an endemic problem you weren't aware of.  And it's likely that there are more of them to be found.   So now a minor remediation effort becomes a major bug hunt. Or it should, if you're Doing The Right Thing(tm).  And some bugs, like XSS, live in the overall I/O engine of an application, so fixing them properly may entail a major overhaul.
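
To make that last point concrete, here's a minimal, hypothetical sketch (the function and template names are made up). The point fix escapes the one field the scanner flagged; the endemic fix pushes escaping into a shared rendering path, which means touching every screen that emits user-supplied data:

```python
import html

# The "quick fix": escape the one field the scanner flagged on the help screen.
def render_help_search(query: str) -> str:
    return f"<p>Results for {html.escape(query)}</p>"

# The endemic fix: route all user-supplied output through one escaping
# layer, so a missed call site elsewhere can't reintroduce the same bug.
def render(template: str, **values: str) -> str:
    safe = {name: html.escape(text) for name, text in values.items()}
    return template.format(**safe)

# Every screen now has to render through the shared helper -- which is
# exactly how a "minor" XSS fix turns into an I/O overhaul.
print(render("<p>Results for {query}</p>", query="<script>alert(1)</script>"))
```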



Now why would an outsider stamp low remediation estimates on things they know we have to hand-craft a solution for?  I'll just slice it with Hanlon's razor.  But...

What about vendor-patched vulnerabilities?

So far I've just focused on the big and nasty vulnerabilities found in apps that you're responsible for fixing yourself. 

Suppose you have a minor vulnerability in your Pythia database server.  Well, you can just download the fix. Remediation effort: low.  Just apply the patch. 

Whoa boy, can't just do that.  You're talking about a production database behind a high-volume financial transaction system.  We gots a procedure here. A procedure that's checked and audited, with paperwork up the wazoo. Can't do squat without six managers' sign-offs, two DBAs to test queries to make sure nothing done broke (regardless of how banal the change actually is), at least a full month of regression testing, and then finally waiting for the appropriate change window, which comes once a month at 3am on a Sunday.  When the moon is blue.
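
A back-of-the-envelope sketch of the calendar time, with invented durations and an assumed "first Sunday of the month" change window, shows why "just apply the patch" stretches into months of elapsed time:

```python
from datetime import date, timedelta

# Hypothetical lead-time sketch for a "low effort" vendor patch.
# All durations are invented; the change window is assumed to be the
# first Sunday of each month.
def next_change_window(after: date) -> date:
    """First Sunday of the month after `after`."""
    first = date(after.year + after.month // 12, after.month % 12 + 1, 1)
    return first + timedelta(days=(6 - first.weekday()) % 7)

patch_released = date(2012, 8, 2)
signoffs_done = patch_released + timedelta(weeks=1)    # six sign-offs, give or take
regression_done = signoffs_done + timedelta(days=30)   # "at least a full month"
deploy_date = next_change_window(regression_done)

print(f"Patch released:  {patch_released}")
print(f"Earliest deploy: {deploy_date}")
print(f"Elapsed days:    {(deploy_date - patch_released).days}")
```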

Remediation effort low, my ass.


* Remediation guesstimation is the second most annoying thing in assessor reports.  This is the first.



2 comments:

Anonymous said...

While I agree with nearly everything you've said, the fact of the matter is the consultant needs to (should) convey some idea of the relative effort to remediate the vulnerabilities found. Not every vulnerability is as important, or as difficult to fix, as the others.

As a consultant, I can never know all the things that might influence the effort required (and you're certainly not going to pay me to find out), so I have to go with some sort of scale. "Low, med, high" may not be accurate, but "High, very high, very very high" isn't any better.

Planet Heidi said...

Agreed that as a consultant (and I was an infosec consultant for 7 years, so I am familiar), it's very useful to scale the vulnerabilities. Things I'd be happy with: rating the severity of the vulnerability and not necessarily adding an estimate of remediation effort.
Or caveat-ing the remediation effort - "Relatively easy remediation based on average organizational resources, etc."
Or gathering SOME information from the client before answering... as opposed to guessing completely blind (which has been my experience with all vendors so far).