Thursday, May 29, 2008

A word about assumptions

This morning I was reading a chapter in Jerry Weinberg’s
Becoming a Technical Leader
. In the chapter on innovation, he poses the following puzzle:

A man hires a worker to do seven days of work on the condition
that the worker will be paid at the end of each day. The man has
a seven-inch bar of gold, and the worker must be paid exactly one
inch of the gold bar each day. In paying the worker, the man
makes only two straight cuts in the bar. How did he do it?


Stop reading now if you want to try to solve this.

The story goes on to explain the solution. The man cuts his gold into three pieces of the following lengths: 1 inch, 2 inches, and 4 inches. Very clever, because the idea is that the worker can now “make change” when getting paid.
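
If you want to see how the change-making plays out, here's a quick Python sketch of the day-by-day transactions (the piece sizes come from Weinberg's solution; the binary trick is just my shorthand for picking the right pieces):

```python
# Day-by-day settlement with the 1", 2", and 4" pieces. At the end of day N,
# the worker should be holding exactly N inches of gold; the employer hands
# over, or takes back, whatever pieces make that true.
pieces = [1, 2, 4]

for day in range(1, 8):
    held = [p for p in pieces if day & p]  # 1, 2, 4 act as binary place values
    print(f"Day {day}: worker holds {held} = {sum(held)} inch(es)")
```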

I thought for a minute about how clever this was but then dug deeper. Why didn't I get it? This whole puzzle hinges on an assumption. An assumption that we can foist an unusual set of requirements upon the "user" (the worker). The assumption is that the worker will retain his wages every day and have them available to make change. Therefore, the burden of making the employer pay with exact change is removed. Nice.

Now why didn't I think of this? Because it's counter-intuitive for me to introduce unexpected (and possibly contract-breaking) conditions into a solution. But in a wave of innovation, Mr. Weinberg did. We can't blame him, though. He's a programmer, and this is the kind of stunt programmers are wont to do.

But back to assumptions. The lesson learned is: every problem drags along a set of assumptions. Sometimes the assumptions are as simple as “the default conditions” that we take for granted. And every solution also brings along a set of assumptions. It’s always a prudent idea to keep an eye on the assumptions. You never know what they’re going to tell you about the problem and the problem solver.

Thursday, May 22, 2008

The problem with our defense technology Part 2, “Advanced” technical controls

The next level up from basic controls are what I'm calling the more advanced technical controls. These are the things usually used by organizations that would be sued if their security were breached. Again, this is the low-water-mark list. And as before, most of these security controls are overrated, overly relied upon, or implemented narrowly.

Strong authentication
Strong authentication, by which we mean two-factor, by which we usually mean carrying a token thing. These are a great replacement for passwords, but that's about it. It all gets very interesting when you use a token to authenticate to a box that has significant vulnerabilities (see patch management). And for most strong authentication systems in place, I've found several work-arounds implemented by the system administrators just in case we get locked out. Thus begins the whack-a-mole game between the auditors and the operations staff. And don't think strong authentication will be helpful against man-in-the-middle attacks or phishes. I'm not saying throw the baby out with the bathwater, but just remember that strong authentication is only an upgrade for a password.

Storage encryption
If your organization hasn't encrypted all its laptops and backup tapes, someone in IT is probably working her butt off trying to get it done. If you're really advanced, you're encrypting all your database servers and anything else that's Internet reachable. Here's a wonderful case of doing something so we don't look stupid. Is there a problem with cold boot RAM attacks against laptop encryption keys? Sure, but the law says if someone steals a laptop and it's encrypted, I don't have to disclose. And yes ma'am, the database is encrypted - but the password is in a script on an even more exposed web server in the DMZ. Whatever; the auditors demand the database be encrypted, so shall it be done. In any case, it's safest to assume the breach - if an adversary has physical access, they are going to get in eventually.

Vulnerability scanners
Take patch management and now repeat with vulnerability scanning. It goes like this: scan your machines, analyze the results, find a hole (and you always will), request that IT patch the hole, request that IT patch the hole, request that IT patch the hole, insist that IT patch the hole, raise a major fuss about IT not patching the hole, IT patches the hole. And then repeat. And this doesn't count the zillions of false positives because your vulnerability-scanning tool is banner grabbing instead of actually testing. No, vulnerability scanning isn't worthless. Heck, anything that gives you some visibility into your enterprise is a good thing. But will it truly give us battle-hardened servers ready to take on the deadly sploits of the Intarnetz? No, not really. And depending on who you ask, it's more trouble than it's worth.
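
For what I mean by banner grabbing: the scanner never pokes at the hole at all, it just reads whatever version string the service announces and matches it against a list. A rough sketch of that logic (the address and version string below are placeholders, not taken from any real scanner):

```python
import socket

# Connect, read the version string the service volunteers, and match it
# against a list of "known vulnerable" versions. Nothing here actually tests
# the hole, which is why backported patches show up as false positives.
def grab_banner(host, port, timeout=3):
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            return s.recv(1024).decode(errors="replace").strip()
        except socket.timeout:
            return ""

# Hypothetical example: an SSH banner checked against an illustrative version.
banner = grab_banner("192.0.2.10", 22)   # placeholder address
if "OpenSSH_4.3" in banner:              # illustrative version string only
    print("FLAGGED as vulnerable:", banner)
```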

Logging
The vendor's cha-ching. This is security information management (SIM), security event management (SEM), etc. It's the big box o' log data. Essentially, it's syslog on the front, a database on the back, with some basic rules in between. If you've spent a decent amount of money and/or time on those rules, then you're only trying to drink from a lawn sprinkler instead of the fire hydrant. In any case, getting useful real-time information out of your logging system is a part-time job in and of itself. Now, there are intelligent log analyzers out there, but usually they cost around 80K a year plus benefits. Can automation do it? Get serious. There is simply too much data to make a decision in a timely manner. And remember, you are facing intelligent adversaries. The most useful automated intelligence you're going to get out of a logging system is a measure of the background radiation of the worms and bots. Now, again, visibility is a good thing. I use my logging system for forensic detail after suspicious events. I also use it for trending and for showing management just how dirty the Internet is. But as an actual alarm system? Only if I'm lucky. And producing actionable intelligence? Not so much.
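
To be clear about what "some basic rules in between" amounts to, here's a toy sketch - not anybody's product, just the general shape of it. The regex, threshold, and log file path are invented for illustration:

```python
import re
from collections import Counter

# Toy version of the rules layer: read syslog-style lines, count failed
# logins per source address, and flag anything past a threshold.
FAILED_LOGIN = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 10

def scan(lines):
    failures = Counter()
    for line in lines:
        m = FAILED_LOGIN.search(line)
        if m:
            failures[m.group(1)] += 1
    # Only the sources that crossed the threshold get surfaced as "alerts".
    return {ip: n for ip, n in failures.items() if n >= THRESHOLD}

# Usage (hypothetical log file path):
# with open("/var/log/auth.log") as f:
#     print(scan(f))
```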

Like I was saying the other day...

Tapping a trend, or is it now so painfully obvious that it's safe for anybody to say?

Antivirus is 'completely wasted money': Cisco CSO

In any case, I really didn't want to turn this into a ranty blog about all the problems with infosec. There's sure enough of that to go around.

I promise to wrap up these "problem with" posts and get on to the meat of how to defend ourselves.

Tuesday, May 20, 2008

No Sith, Sherlock

U.S. corporations massively read employee e-mail:

41% of the largest companies surveyed (those with 20,000 or more employees) reported that they employ staff to read or otherwise analyze the contents of outbound e-mail.

Yeah, yeah... this has been going on for years. Heck, when I wrote Heidi Book 1, five years ago, this was old hat.

It's funny tho, people still seem shocked by this. Not security people, of course. Usually it's the business folk and sales-critters. Y'know, the ones with the iPhones and bluetooth headsets... just basically screaming "Please snoop away!"

These are also the same people who don't care so much about protecting corporate secrets and claim not to care much about their own. Of course, they would squeal a different tune if I were to do a Powerpoint preso on the personal ickiness I've seen fly across the corporate firewalls. Talk about Hawt mail.

So yeah, the trick is to show these people the link between protecting their sticky lurid personal data traces and PII. Some of the stuff I've seen is far more damaging to some people's careers than mere identity theft.

UPDATE: Great Minds Think Alike or a different spin on the same topic.

Tuesday, May 13, 2008

The problem with our defense technology, part 1

At best, our defensive technical controls do nothing but scrape off the chunky foam of crud floating on the surface of the Internet. At worst, they represent exercises in futility we do primarily so we don't look stupid for not doing them. Consider the tsk-tsking that goes on if an organization gets hacked and it's revealed they don't have adequate encryption or haven't patched some workstations. That's what I mean by stupid. Of course, if anyone gets hacked, there will be tsk-tsking anyway. Anyway, what have we got?

Basic technical controls
I am going to start with basic security technology, which represents the universal low-water mark for security controls. Basic security tools are what everyone implements to achieve "acceptable security" because that's what Management and the auditors expect. Usually, when you want a tool that isn't on this list, you have to fight for resources because it's an unusual control that wasn't budgeted for or, worse, doesn't directly satisfy an audit requirement. Many of these tools have a low entry cost, but often entail a burdensome maintenance cost. In some organizations, those maintenance burdens outweigh the defensive value of the control.


Passwords
If there's any universal, ubiquitous security control, it's the use of passwords. In fact, passwords are a decent, cheap way to provide basic access control. Manufacturers build passwords into nearly everything, so it's a safe bet you'll have them available to protect your systems. Where passwords veer off into something stupid we have to do is in the area of frequent password changing. The reasoning around password changes is out of date, resting on an old fallacy about the time it takes to crack a password. Gene Spafford explains it better than I can: "any reasonable analysis shows that a monthly password change has little or no end impact on improving security!" Passwords can give some utility in exchange for relatively little overhead, provided you aren't mired in an audit-checklist organization.
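
If you want to see the arithmetic behind that fallacy, here's a back-of-the-envelope sketch. The guess rate and password lengths are assumptions I picked for illustration, nothing more:

```python
# Rough "time to crack" arithmetic. Rates and lengths are assumed values.
SECONDS_PER_DAY = 86400
guesses_per_second = 1e9        # assumed offline cracking rate
change_interval_days = 30       # the classic monthly rotation

for length in (6, 8, 10):
    keyspace = 95 ** length     # printable ASCII characters
    days_to_exhaust = keyspace / guesses_per_second / SECONDS_PER_DAY
    print(f"{length}-char password: ~{days_to_exhaust:,.2f} days to exhaust")

# A short password falls well inside the 30-day window; a long one never gets
# there. Either way, the monthly rotation isn't doing the heavy lifting.
```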


Network firewalls
In the past, the interchange most commonly heard regarding security went along the lines of: "Are you secure?" "Yes, we have a firewall." "Great to hear." Luckily, we've progressed a little beyond this, but not far. Most firewalls I examined as an auditor were configured to allow all protocols outbound to all destinations. Add to that the numerous B2B connections, VPNs, and distributed applications. Then there are the gaping holes allowing unfiltered port 80 inbound to the web servers.
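
To make the "allow all outbound" complaint concrete, here's a minimal sketch of the kind of rule review I'm describing, assuming the ruleset has been exported into simple (source, destination, port, action) tuples - a made-up simplification, not any vendor's format:

```python
# Flag overly broad "allow" rules in a simplified ruleset export.
rules = [
    ("inside", "any", "any", "allow"),   # the classic "all outbound" rule
    ("any", "web-dmz", "80", "allow"),   # unfiltered port 80 to the web servers
    ("any", "any", "any", "deny"),
]

for src, dst, port, action in rules:
    if action == "allow" and "any" in (dst, port):
        print(f"Overly broad rule: {src} -> {dst} port {port}")
```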

When I was a kid, my family lived in Western Samoa. At the time, the local water system was pretty third world. My mom would tie a handkerchief around the kitchen water spigot. Once a day or so, she'd dump out a big lump of mud and silt, and then put on a clean hanky. After being filtered, she boiled the water so it would be safe for us to drink. That handkerchief? That's how I feel about firewalls. And people rarely boil what passes through their firewalls.

So, I'll have to agree with Marcus Ranum and the folks at the Jericho Forum that firewalls are commonly over-valued as defensive tools.


Blacklisting Filters
Anti-virus, intrusion prevention, anti-spyware, web content filters... I lump all of these into the category of blacklisting filters. These types of controls are useful for fighting yesterday's battle, as they're tuned to block what we already know is evil. In the end, we know it's a losing battle. In his "Six Dumbest Ideas in Computer Security", Marcus Ranum calls this "enumerating badness." Now, I think there is some utility in blacklisting filters. But at what cost? All of these controls require constant upkeep to be useful, usually in the form of licensed subscriptions to signature lists. These subscriptions are such moneymakers that many security vendors practically give away their hardware just so they can sell you the subscriptions. Annual fees aside, there's the additional burden of dealing with false positives and the general computing overhead these controls demand.
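
Here's the "enumerating badness" problem in miniature - the signature names below are invented, but the shape of the logic is the whole story:

```python
# A blacklist filter is only as good as yesterday's signature list.
signatures = {"EvilToolbar.A", "FakeAV.Downloader", "IRCBot.Variant42"}

def is_blocked(sample_name):
    return sample_name in signatures

print(is_blocked("IRCBot.Variant42"))   # True  - yesterday's battle, won
print(is_blocked("IRCBot.Variant43"))   # False - today's variant walks right past
```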

Hey, raise your hand if you've ever had your AV software crash a computer. Uh huh. Now keep it up if it was a server. A vital server. Yes, my hand is raised too. But of course, you wouldn't dare run any system, much less a Windows system, without anti-virus. You'd just end up looking stupid, regardless of how effective it was.


Patch Management
Best Practices force most of us to pay lip service to performing patch management. Why do I say lip service? Because organizations rarely patch every box they should be patching. Mostly, by patch management we mean we're patching workstations - smaller organizations just turn on auto-update and leave it at that. But servers? Well, probably, if the server is vanilla enough. But no one is patching that Win2K box that's running the accounting system. And what about those point-of-sale systems running on some unknown flavor of Linux? Heck, what if you've got kludged-together apps tied together with some integration gateway software from a company that went out of business five years ago? What about all those invisible little apps that have been installed all over the enterprise by users and departments that you don't even know about? Are they getting patched within 30 days of the release of a critical vulnerability? Bet that firewall and IPS are looking real durn good right now.

My favorite part of Best Practices is to watch the patch management zealots duke it out with the change management zealots. "We need this service pack applied to all workstations by Friday!" "No, we need to wait for the change window and only after we've regression tested the patch." (To tell the truth, I'm on the change management side, but more on that later)


Transmission encryption
Everyone knows if you see the lock on a website, it must be safe. We've been drilling that into lay people's heads for years. Yes, we need to encrypt anytime we send something over the big bad Internet. But what is the threat there really? We're encrypting something in transit for a few microseconds, a very unlikely exposure since the bad guy has to be waiting somewhere on the line to sniff the packets and read our secrets. Consider how much trouble the American government has to go thru just to snoop on our email. If the bad guy isn't at the ISP (which I'm not saying is unreasonable), then it's difficult to intercept.

Now consider this bizarre situation - you come across a web site with a form to put in your credit card number and hit submit. Wait, there's no lock on the site; I'd be sending the card number in the open! Oh dear. No, actually, the website has put SSL encryption on the submission itself so that only the card number gets encrypted. Of course, your browser can't show you a lock for this. Now consider the reverse - an SSL website, showing the lock and everything, where the submission button activates an unencrypted HTTP post. So now you have exactly the opposite: something that looks safe that isn't. And yes, as a web app tester, I've seen this before.
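
Here's a rough sketch of the kind of check that would catch that second case - fetch the page and see where its forms actually post to. The URL is hypothetical, and a real check would use a proper HTML parser instead of a regex:

```python
import re
from urllib.request import urlopen

# Pull the form action targets out of a fetched page.
def form_targets(url):
    html = urlopen(url).read().decode(errors="replace")
    return re.findall(r'<form[^>]+action="([^"]+)"', html, re.IGNORECASE)

page = "https://shop.example.com/checkout"   # hypothetical HTTPS page
for action in form_targets(page):
    if action.startswith("http://"):
        print("Lock on the page, but the card number posts in the clear:", action)
```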

My last word on transmission encryption - I'd rather encrypt on my own network than on the Internet. Why? Because if someone has breached me (what was the title of this blog again?), it'd be very easy for them to be in a position to sniff all my confidential traffic. Especially the big batches of it, as things move around between database servers and document shares. So yes, if I were able to ignore the fear of looking stupid, I'd encrypt locally first before dealing with Internet encryption.


Next up: The problem with our defense technology Part 2, “Advanced” technical controls

Introduction

Over a long series of posts, I plan to explore thoughts around the next generation of information security. The title of the blog comes from discussions with many of my InfoSec mentors, who have implored security professionals to "assume the breach" when managing their enterprise security. Eventually, all defenses are breached. What do we do then?
I'm going to start with a quick overview of the problems. Nothing original here, just a breakdown of what's going wrong. I'm usually the first one to tire of all the curmudgeons tossing bricks at our glass houses of best practices. My response is along the lines of "yes, I know. But tell me how to fix it." Well, I do intend to propose some solutions.

Wednesday, May 7, 2008

Why I don't go to most security conferences

First, let me define security conference. By this I mean the conferences that either have a hax0ry name or are simply an acronym. Okay, I gotta pay for a ticket, expend travel resources, and then lodging. Even if I can convince my employer to pay, I still have to burn political capital and then finagle time away from the office. TANSTAAFL. So, when I see that announcement for Plopc0n 5 fly across my e-mail, I do my cost-benefit analysis and usually decide to skip it.

Why? Let's set aside the vendor hype-fests. They're too easy to bash. Besides, I can get all the vendor love I want by simply answering my constantly ringing phone.

What is at a typical security conference? Well, there's usually some forensics stuff. Cool, but that's really not my bag. And honestly, most of what speakers present as "forensics" wouldn't stand up under a halfway-technical defense attorney's cross-examination. Pass.

All right, there's a mixed bag of privacy and legal talks, which are mildly interesting but highly dependent on the speaker. Most of the time, the speaker's book or blog gives me the same basic information.

But what else are conferences full of? It seems that a good third of the content is "Hacking XYZ" or "New way to exploit" or some attack against physical security. BFD. I already know there are holes in my network. Most of these "new" attacks are just new variants of old attacks. Attacks that you can figure out are there just from looking at the basic design. I've read enough Ross Anderson to grok the basic idea of how things can be exploited and how they should be engineered. At best, the hacks they demonstrate are proofs of concept for something I'd already assumed I had to deal with. Thanks for that, but I don't need to attend just to see a proof of concept. I'll just grab the press release, usually issued within hours of the conference demo.

I guess the biggest reason I might be inclined to go is to network. But at the last few conferences I've been to, I felt I was the only "adult" in the room. So yeah, except for a few Internet blogger friends, I'm really not compelled to spend the time away from work and family. I do hit a couple of local quarterly security conferences for the networking.

What am I interested in seeing? Radically new defensive technologies, "game changing" strategies, and thoughtful analysis of cyber-criminal operations. If I'm lucky, I'll see one or two of these kinds of pearls in several days worth of chaff. Nice, but I'm staying home for now.

BTW, if you haven't read With Microscope and Tweezers: An Analysis of the Internet Virus of November 1988, then I suggest checking it out. I bet you get a lot more out of it than the average hacking demo.