Sunday, September 30, 2007
Wednesday, September 26, 2007
Uh, yeah... While I concur that wireless is being used inappropriately in some areas (see my comment on his page), that statement didn't help Dale's argument much. (heh)
I'm not saying that I don't believe that the condition exists. People (and therefore organizations) tend to take the path of least resistance, so if the penetration testers don't ask, the customer is not going to offer up the information.
My surprise is that the question just doesn't come up. It may be because I'm the type to take a packet sniffer to a CTF contest. (Yeah, one of those that thinks that CTF is a spectator sport.)(I have Don M. at ODU and S-14 (hiya Pete!) to thank for that "bad habit".) To me, the "What did you see?" question is just so obvious that it's a "must ask".
I can also see how organizations fall into the practice of not participating in their own penetration testing. It may have something to do with that other form of security testing called the vulnerability scan. It's usually performed more often and requires no input from the customer, except during the remediation phase, and that is usually an internal process (e.g., the CISO may have some "'splaining to do" to the CIO).
The Hansen/Ranum/McGraw reference to the "badness-o-meter" is a good one. If your pen-testers have anything other than "we don't know" at the top end of the scale, the data they're providing about your level of security may be suspect. Pen-testing is an inverted business model. The best you can hope for is: "We don't know. We failed." A few things to keep in mind:
- This doesn't mean that someone else doesn't already know
- It also doesn't mean that they won't know tomorrow or the day after
- To quote a semi-cliche: "Security is a process, not an end state." (Dr. M. E. Kabay, 1998)
- By extension, a pen-test is a snapshot of that process, not of an end state
Sunday, September 23, 2007
My initial thought was "somebody is selling something". Upon reading the article (follow it to the daily blog to see the link), I discovered that I wasn't wrong. The reason for the article's existence was to make you overly paranoid about your users and get you to buy something to counteract the threat. If that purchase just happened to be the product mentioned in the article, so much the better!
My second thought was that this was another in a long line of "security by fashion statement" (bowel) movements. Think about it. We have a number of firms where "analysts" (those that aren't practitioners but are somehow (mysteriously) more knowledgeable) declare that one security method is "auld schoole" and there are much better, more modern, methods of performing such and such a function.
It's quite annoying. In the past five years, we've been told:
- IDSes are dead, IPSes are better (thank you, Gartner)
- Anomaly detection is better than IDS/IPS
- the firewall is dead
- the perimeter is dead
- SSL VPNs are the best VPNs
- stateful inspection is better than application proxies
- deep packet inspection is better than application proxies
- application proxies are better than stateful inspection, packet filters, and deep packet inspection (What? You missed the resurrection of proxies by Gartner?)
And now you need to be so paranoid that your users' every keystroke needs to be monitored and analyzed for intent (yeah, that works well), to the degree that you must come up with "termination plans"? Oh, and by the way, we just happen to have this nice product that'll automate this process and make your life much easier.
A much better approach would be to have a realistic security policy and to use the tools you already have, especially the one behind your eyeballs. Most "insider threat" incidents are considered corporate embarrassments not because the incident occurred but rather because they weren't detected until after the fact. The majority of insider abuse is readily apparent, either in the virtual world (in log files) or out in the real world (people tend to talk about what so-and-so is getting away with).
Attempting to totally automate the process, in either the virtual or the real world, is just a way of abstracting yourself further away from the problem. Network monitoring and management of people have at least one thing in common: they "automate" poorly, in that an automated process can only handle "known" issues. Unique issues can always crash automated processes. (It's why we have web-based time sheets but still have entire HR departments.)
You want to properly deal with the "insider threat"? It's easy. Show "trust" in your users. It's okay to "verify" with a certain degree of monitoring, but it has to be at a level that your users are comfortable with.
Also, use the tools that you already have. Automated log file reduction is fine, but you still need human review of the remaining entries.
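The log-reduction idea is simple enough to sketch. This is a hypothetical minimal example (the "known-benign" patterns are made up; you'd tune them to your own logs): throw away the entries you've already decided are noise, and everything that survives goes to a human.

```python
import re

# Hypothetical known-benign patterns -- tune these to your own environment.
BENIGN = [
    re.compile(r"CRON\[\d+\]: .* CMD"),
    re.compile(r"session opened for user backup"),
]

def reduce_log(lines):
    """Drop entries matching known-benign patterns.
    Whatever is left still gets eyeballed by a human."""
    return [ln for ln in lines if not any(p.search(ln) for p in BENIGN)]

sample = [
    "Sep 20 03:10:01 host CRON[123]: (root) CMD (run-parts /etc/cron.hourly)",
    "Sep 20 03:12:44 host sshd[456]: Failed password for root from 10.0.0.5",
]
print(reduce_log(sample))
```

The point isn't the code, it's the division of labor: automation strips the known, people judge the unknown.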
The firewall, the IDS, and security boundaries are still valuable. So are enforceable policies, deep packet inspection, stateful firewalls, and anomaly analysis. They each have their place in your toolset.
Companies such as Gartner like to bank on the fact that you've forgotten that none of these technologies are mutually exclusive. While "layered defenses" may be an offensive term to some, the existence of multiple protections which co-support an overall security policy is still a good idea. Just don't take the human factor out of it.
I've got news for you: If you run a totalitarian environment (AKA micro-managed, micro-monitored), every single one of your users will be evil and you'll end up wondering why your organization has such a high turn-over rate.
Save your cash. Also, keep in mind that the less flexible a system is (the less tolerance it has), the more brittle it is and the more spectacular the failure will be when it does go. This goes for machine systems as well as for people.
Thursday, September 20, 2007
Not any more. I've needed to install Fedora for a few toolsets that I've wanted to play with and finally had the time (I took a day off) to install Fedora and figure out how to get the video configured properly (usually it'd come up with bars on the side and no mouse cursor).
Fixing both of those problems was pretty straightforward. The mouse involved turning off the hardware-driven cursor. The video involved trashing the Fedora drivers and grabbing the binary off of NVidia's site and letting it compile new modules.
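For the curious, the relevant bit of xorg.conf ended up looking something like the fragment below. This is a sketch from memory, not a copy of my actual config (the "HWCursor" option is what disables the hardware-drawn cursor; check the driver's README for your card):

```
Section "Device"
    Identifier "Videocard0"
    Driver     "nvidia"           # NVidia's binary driver, not the stock one
    Option     "HWCursor" "off"   # fall back to a software-drawn cursor
EndSection
```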
I've stuck my notes in the wiki.
Sunday, September 16, 2007
Saturday, September 15, 2007
Wednesday, September 12, 2007
Monday, September 10, 2007
The IPv6 work would be more directly related to the "Attacks" class. Rob suggested it knowing that I'm one of the few students with IPv6 at home.
I'm interested in the FastFlux problem but I'm wary of where it might lead (remember, the problem is based on problems within the domain registration infrastructure). Then, too, it may also run into one of any number of dead ends, as there is a massive bureaucracy between ICANN and the hosting providers, with the registrars in the middle. Without the ability to subpoena a number of people, investigation is limited to what you can extract via the local terminal window. Corruption at the hosting provider or registrar makes it that much more difficult.
I'm a bit discouraged but not yet put off by that. Initial investigation of two FastFlux domains shows a massive number of systems attached to the Storm Worm (amazing since, for most of those boxes, someone had to click on "Click here" to get infected).
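What you can extract from the local terminal window isn't much, but it's enough for a crude first test. Here's an assumed sketch (the five-record threshold and the wait interval are arbitrary, and a real check would also look at TTLs via a proper DNS library): resolve a domain twice a short time apart and see whether the advertised addresses churn.

```python
import socket
import time

def a_records(domain):
    """Return the set of IPv4 addresses currently advertised for a domain."""
    infos = socket.getaddrinfo(domain, 80, socket.AF_INET, socket.SOCK_STREAM)
    return {info[4][0] for info in infos}

def looks_fluxy(domain, wait=60):
    """Crude heuristic: many A records that change between two
    closely spaced lookups suggests a fast-flux front end."""
    first = a_records(domain)
    time.sleep(wait)
    second = a_records(domain)
    return len(first) >= 5 and first != second
```

Against a fast-flux domain you'd expect a large, constantly rotating record set; against a normal site, a stable handful.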
In any case, I've got to choose soon. Rob's deadline is coming up fast.
So far the install has included:
All this before even compiling pvrusb2, MythTV and Zoneminder. Luckily, most of the above could be done by sitting down at the console every 20 minutes or so. It is a bit tedious though. Makes me think that I should have tried one of the Zoneminder LiveCDs first. (I didn't because there are a number of things I want to do that probably aren't in the LiveCD.)
Thursday, September 6, 2007
How about a thousand?
Seven hundred fifty thousand?
It's actually very easy to do. Remember Gnutella? Google does. Sheesh! And you thought the RIAA had to do something sneaky to get its target IP addresses.
Hint: If you must view those links, I recommend clicking on the "Cached" link as most of those entries are offline at the moment.
Wednesday, September 5, 2007
Any help (or pointers to documents other than the ALSA wiki) would be greatly appreciated.
Sunday, September 2, 2007
Saturday, September 1, 2007
I learned about all this via the installation of Google Analytics. It adds a number of behind-the-scenes accounting features that have confirmed a number of suspicions about visitors to the site and have pointed out a few other new data bits (such as SpraakService).
Looks like the wiki may have picked something up in the translation... (heh)
In any case, the CTF was today. I captured two of the team flags. We didn't take first (or even second) but we had a very good time while we were doing it (translation: the rules didn't prohibit adding content to the web pages). To whoever it was that left the ptrace-kmod exploit laying around in one of the user accounts, thank you. I was able to repair the bug in the source code and use it.
In any case, my son is fine (if you don't count him being a 200 pound asinine eating machine when he's on steroids) and I have roughly three months to recert GSEC and six months to do my GCIH.
I also picked up quite a few topics for research during the SANS class (tracking FastFlux, tracking browser header alteration by spamware, etc.). I'll need them as I decided to crash Rob's Attacks class since we couldn't get enough participants for the Continuing Case Studies in Forensics. Maybe next year?
Thanks to the others in the fourth row/left side of Ed Skoudis's class this year. I enjoyed the class/exercise.