My initial thought was "somebody is selling something". Upon reading the article (follow it to the daily blog to see the link), I discovered that I wasn't wrong. The reason for the article's existence was to make you overly paranoid about your users and to get you to buy something to counteract the threat. If that purchase just happened to be the product mentioned in the article, so much the better!
My second thought was that this was another in a long line of "security by fashion statement" (bowel) movements. Think about it: we have a number of firms where "analysts" (people who aren't practitioners but are somehow, mysteriously, more knowledgeable) declare that one security method is "auld schoole" and that there are much better, more modern methods of performing such-and-such a function.
It's quite annoying. In the past five years, we've been told:
- IDSes are dead; IPSes are better (thank you, Gartner)
- Anomaly detection is better than IDS/IPS
- The firewall is dead
- The perimeter is dead
- SSL makes the best VPNs
- Stateful inspection is better than application proxies
- Deep packet inspection is better than application proxies
- Application proxies are better than stateful inspection, packet filters, and deep packet inspection (What? You missed Gartner's resurrection of proxies?)
And now you need to be so paranoid that your users' every keystroke needs to be monitored and analyzed for intent (yeah, that works well), to the degree that you must come up with "termination plans"? Oh, and by the way, we just happen to have this nice product that'll automate the process and make your life much easier.
A much better approach would be to have a realistic security policy and to use the tools you already have, especially the one behind your eyeballs. Most "insider threat" incidents are considered corporate embarrassments not because the incident occurred but because it wasn't detected until after the fact. The majority of insider abuse is readily apparent, either in the virtual world (in log files) or out in the real world (people tend to talk about what so-and-so is getting away with).
Attempting to totally automate the process, in either the virtual or the real world, is just a way of abstracting yourself further away from the problem. Network monitoring and the management of people have at least one thing in common: they "automate" poorly, in that an automated process can only handle "known" issues. Unique issues can always crash automated processes. (It's why we have web-based time sheets but still have entire HR departments.)
You want to properly deal with the "insider threat"? It's easy. Show "trust" in your users. It's okay to "verify" with a certain degree of monitoring, but it has to be done at a level that your users are comfortable with.
Also, use the tools that you already have. Automated log file reduction is fine, but you still need human review of the remaining entries.
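To make the log-reduction point concrete, here's a minimal sketch of the idea: strip out entries matching known-benign patterns and leave whatever remains for a human to read. The patterns and sample syslog lines are hypothetical, purely for illustration — any real deployment would tune its own benign list.

```python
import re

# Hypothetical "known benign" patterns -- routine noise we don't need a
# human to look at. A real list would be built from your own logs.
BENIGN_PATTERNS = [
    re.compile(r"CRON\[\d+\]"),
    re.compile(r"session opened for user \w+ by \(uid=0\)"),
    re.compile(r"session closed for user \w+"),
]

def reduce_log(lines):
    """Return only the lines matching no known-benign pattern.

    These survivors are what still needs human review.
    """
    return [ln for ln in lines
            if not any(p.search(ln) for p in BENIGN_PATTERNS)]

# Hypothetical sample log lines
sample = [
    "Mar  1 02:00:01 host CRON[1234]: (root) CMD (logrotate)",
    "Mar  1 02:13:44 host sshd[999]: Failed password for admin from 10.0.0.5",
    "Mar  1 02:14:02 host su: session opened for user oracle by (uid=0)",
]

for line in reduce_log(sample):
    print(line)  # only the failed-password line survives the reduction
```

The point of the sketch is the division of labor: the machine throws away the boring 99%, but the judgment call on the remaining 1% stays with a person.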
The firewall, the IDS, and security boundaries are still valuable. So are enforceable policies, deep packet inspection, stateful firewalls, and anomaly analysis. They each have their place in your toolset.
Companies such as Gartner like to bank on the fact that you've forgotten that none of these technologies are mutually exclusive. While "layered defenses" may be an offensive term to some, the existence of multiple protections which co-support an overall security policy is still a good idea. Just don't take the human factor out of it.
I've got news for you: if you run a totalitarian environment (AKA micro-managed, micro-monitored), every single one of your users will be evil and you'll end up wondering why your organization has such a high turnover rate.
Save your cash. Also, keep in mind that the less flexible a system is (the less tolerance it has), the more brittle it is and the more spectacular the failure will be when it does go. This goes for machine systems as well as for people.