
Automated Penetration Testing – False Sense of Security

August 6th, 2004

The security industry has matured quickly over the past few years, with penetration testing becoming one of the norms for organisations adopting best-practice processes. Loosely defined as the process of actively assessing an organisation's security measures, penetration testing has until now been entirely reliant on consultancy services, and security manufacturers have been eager to bridge the gap between product and service and, more importantly, to reap the benefit of additional profits. Not surprisingly, we have seen the emergence of the automated penetration test, with a number of providers springing up to fill the sector.

The main advantages cited by these providers are that they are faster and significantly cheaper than traditional security assessments performed by consultants using a range of tools. With such promises, it has been little wonder that the security industry has seen a new trend evolving, and a movement away from the traditional approach towards the automated one has become apparent. However, although the benefits sound reasonable enough, it is arguable that those organisations pursuing this fashion have actually acquired a solution that provides only part of the penetration-testing process; in truth, they have bought into a false sense of security.

In these times of limited budgets and cost constraints, anything that reduces outlay has been welcomed, but obviously only if it actually fulfils the requirement. So when considering the merits of both automated and traditional penetration testing, organisations must begin by considering the range of activities available via either approach.

These days, penetration testing (or more accurately, security assessment) covers a range of activities, spanning the full spectrum of prior knowledge: from none (black-box) to complete (white-box), with every combination in-between. A thorough security assessment also includes elements of architectural review, security policy, firewall rulebase analysis, application testing, and general benchmarking against industry and manufacturer best practice. This will result in a comprehensive report that is tailored to the specific requirements of the organisation that has commissioned the project.

Automated testing, on the other hand, is generally limited to external penetration testing using the black-box approach (although some providers are introducing an internal testing option). Unlike non-automated tests involving a consultant, this method does not allow an organisation to profit in full from the exercise. As an automated process, there is no scope for any of the policy or architectural elements of the project, which means that organisations seeking to complete the exercise will still need to dip into their budgets and, most likely, purchase third-party expertise.

Martin O'Neal, Technical Director at Corsaire Limited, supports this view, giving a recent example to highlight the point. “We conducted an external penetration test on an organisation that was using two physically separate firewalls for additional security; both were CheckPoint FW-1. Although a common configuration, the architecture as it stood had a potential issue that could have impacted functionality.

As the two firewalls were of the same brand and ran on the same OS, the benefit of using two firewalls for added security had been defeated. The primary reason for using two firewalls is to ensure that if a flaw is found in one, the same flaw cannot be used to compromise the second (gaining access to the internal infrastructure beyond).

To gain the maximum benefit from the added administrative complexity of having two firewall hosts, one of the firewalls needed to be replaced with a firewall product from another vendor; a recommendation that a consultant performing a traditional penetration test would have picked up, but one that an automated test would not have.”

Automated penetration tests have also typically been sold on the promise of behaving “like a hacker.” This really is a marketing gaffe, and quite seriously the last thing any organisation would desire from a security assessment. Hackers and security consultants have entirely different goals in the vulnerability-scanning exercise: the hacker needs only to find a single flaw through which to launch an attack, whilst a security consultant must identify all of the potential routes through which a hacker could arrive. The method a hacker uses usually involves hunting for a few known services and then trying to exploit them once discovered, whilst the security consultant will check for all possible services, documenting every test performed (and its outcome). Furthermore, a hacker does not tell you how to mitigate an attack if a way in has been found, whereas a consultant does. Quite clearly, “like a hacker” is what you really don't want.
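The contrast can be sketched in a few lines of Python. This is a minimal, illustrative TCP connect check only (real assessments use far more sophisticated probing); the port list and function names are the author's own illustration, not any vendor's tool:

```python
import socket

def check_port(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds, else False."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# The "hacker" approach: probe only a handful of commonly exploited
# services, stopping as soon as one looks promising.
COMMON_TARGETS = [21, 23, 80, 139, 445]

# The "consultant" approach: enumerate the full port range and record
# the outcome of every single check, open or closed, for the report.
def full_audit(host, ports=range(1, 65536)):
    return {port: check_port(host, port) for port in ports}
```

The asymmetry is plain: the first loop can stop at its first success, while the second must run to completion and keep every result.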

Further to the salesman's pitch is the mention of additional performance: the benefit of increased speed. But how is it possible to run the same number of tests on the network faster, given that in theory the same amount of traffic is generated and the same amount of bandwidth consumed? The answer lies in taking one of two routes: either sending the same traffic in a shorter period of time, or generating less traffic volume.

The first route, where the same traffic is sent in a shorter period of time, obviously increases the bandwidth required. If the systems under scrutiny are production systems, there is undoubtedly an increased risk that their reliability will be impacted. Additionally, if the network becomes congested, packets may be discarded, potentially generating false-negative results and leaving an exposure that the testing did not highlight. As this in turn demands more investigation from the IT department, it defeats the purpose of the automated penetration test itself. Organisations are often left feeling short-changed, concluding that it is not only simpler but also cheaper to use the freely available vulnerability-scanning tools currently on the market.
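The trade-off is simple arithmetic. The figures below are assumptions for illustration (the 2 GB suite size is invented, not a measured number), but the ratio holds for any volume: compressing the same traffic into one eighth of the time demands eight times the average link rate.

```python
# Back-of-envelope sketch: assume a full test suite generates 2 GB of
# probe traffic against the target network (an illustrative figure).
traffic_gb = 2.0

def required_bandwidth_mbps(traffic_gb, hours):
    """Average link rate (Mbps) needed to send the traffic in the window."""
    bits = traffic_gb * 8 * 1e9          # gigabytes -> bits
    return bits / (hours * 3600) / 1e6   # bits per second -> Mbps

# Spreading the traffic over 8 hours vs. compressing it into 1 hour:
overnight = required_bandwidth_mbps(traffic_gb, 8)   # ~0.56 Mbps
rushed = required_bandwidth_mbps(traffic_gb, 1)      # ~4.44 Mbps
```

Eight times the sustained load on a production link is exactly the kind of pressure that congests the network and drops probe packets.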

The second route (and incidentally the one that most automated penetration tests have opted for), where less traffic volume is generated, means one thing: fewer tests are performed. A quick glance through the marketing material will find references to artificial intelligence and inference engines that adapt the testing process by streamlining the number of vulnerability checks performed against an organisation's hosts. By first identifying which products are installed on a host, the automated process can ignore any irrelevant checks, so improving performance.

This all sounds reasonable, as long as the scanning system can actually identify your products with 100% accuracy. However, when you stop to consider that the only way products are detected is via protocol cribs (such as timing and formatting), it doesn't look so good. In fact, these cribs are the very things that system administrators are advised to hide or obfuscate as part of security best practice.
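A hypothetical sketch makes the fragility concrete. The banner strings, product name and regular expression below are invented for illustration (they mimic the style of an SMTP greeting, not any real scanner's logic): the moment an administrator rewrites the greeting, the inference engine has nothing to key on.

```python
import re

# A hypothetical pattern an inference engine might use to decide which
# vulnerability checks are "relevant" for a mail server.
BANNER_PATTERN = re.compile(r"220 .* (Sendmail) (\d+\.\d+\.\d+)")

def fingerprint(banner):
    """Guess (product, version) from a greeting banner, or None if hidden."""
    match = BANNER_PATTERN.search(banner)
    return (match.group(1), match.group(2)) if match else None

# A default banner leaks product and version, so checks can be streamlined:
verbose = fingerprint("220 mail.example.com ESMTP Sendmail 8.12.11 ready")

# But an administrator following best practice rewrites the greeting,
# and the streamlined scan now skips checks it should have run:
obfuscated = fingerprint("220 mail.example.com ESMTP ready")
```

When `fingerprint` returns `None` (or worse, a wrong guess against a decoy banner), the engine's pruned check list is built on sand, and flaws go untested.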

So if the only way for organisations to achieve the required thoroughness is to take the time to check for all potential flaws, how does the automated process fare when hosts are incorrectly identified and systems are left untested, or tested for irrelevant flaws? The answer is that it doesn't; the only real way is through human intervention, by conducting a penetration test with a security consultant.

Evidently, when it comes to penetration testing, performance does have a benefit, but not to the customer. By reducing the traffic volume required for each test, the automated penetration-testing providers have reduced their own bandwidth requirements in order to service more customers concurrently on the same overhead structure. Whether this has been a smart move is debatable.

Whilst all of this damns the automated penetration test, it would be unfair to conclude that it has no useful place in the industry. The problem is that it has been positioned as an automated substitute for penetration testing, rather than as the effective vulnerability scanner it actually is. The point that has regrettably been missed is that the automated test is only one part of the jigsaw, existing merely to support the consultants performing the test itself.

To conclude, for those organisations with insufficient budget to conduct the exercise thoroughly, automated testing might have some worth (being preferable to no testing at all) but for any organisation that genuinely seeks to assess their security exposure, it provides at best only part of the process.

A non-automated security assessment will always be more flexible to an organisation's requirements, more cost-effective in that it takes into account other areas such as security architecture and policy, and most likely more thorough, leaving the organisation more secure. Moreover, the testing will be performed regularly, and by consultants who can explain to both management and technical audiences what they found, the processes they undertook and the repercussions of each recommendation. As an independent party, they can also convey these findings in person, helping the IT security department to gain the budgets it so often needs.

Although on the face of it an automated penetration test might appear less costly, closer examination makes it plain that the true value of conducting a penetration test still lies in the realm of the traditional approach: the consultant. Organisations must begin to question the merit of the automated penetration test; if in-house expertise exists, consider training in the freely available tools, or alternatively outsource the exercise in its entirety to specialists using traditional approaches.

Andrew Rose, Senior Security Consultant at one of London's top international legal firms, Allen & Overy, puts it all into perspective. “The best way to achieve real value out of a penetration test is to use external consultants to baseline the infrastructure whilst ensuring that knowledge transfer takes place to the organisation's internal security team. With educated staff, many of the more obvious mistakes can be avoided, and the organisation is able to reduce costs and gain the flexibility required to assess its security at a moment's notice.”

He continues, “Overall, automated penetration testing has a place in the armoury of security assurance. It has the ability to provide a firm with a quick ‘sense check’ to show that they are on the right path. However, it cannot yet replace a fully trained consultant with the right tools, and to think it can would be reckless.

An automated test appears to be a little like an overnight security guard who is too keen on following stated process. Each time he does his rounds he checks that the same doors and windows are locked and secure but, as it's not on his list, he walks right past the large hole in the wall.”
