Web App Scanners Miss Half of Vulnerabilities

seek3r sends news of a recent test of six web application security scanning products, in which the scanners missed an average of 49% of the vulnerabilities known to be on the test sites. Here is a PDF of the report. Ironically, the test pitted each scanner against the public test files of all the scanners. The submitter adds, "Is it any wonder that being PCI compliant is meaningless from a security point of view? You can perform a Web app scan, check the box on your PCI audit, and still have the security posture of Swiss cheese on your Web app!" From the report: "NTOSpider found over twice as many vulnerabilities as the average competitor, with a 94% accuracy rating. Hailstorm had the second-best rating at 62%, but only after extensive training by an expert. Appscan had the second-best 'Point and Shoot' rating at 55%, and the rest averaged 39%."
  • by JWSmythe ( 446288 ) <jwsmytheNO@SPAMjwsmythe.com> on Saturday February 06, 2010 @05:52PM (#31048166) Homepage Journal

        From what I recall of doing this for sites that handled credit card processing (I was on the tested side), those tests are pretty much worthless.

        If you had one vulnerability, you'd get pages of false positives or irrelevant information. I recall a particular 10-page report we got back, full of items we were advised to fix or we'd fail. The only actual item to fix was that the web server version was one release behind current, and the changelog indicated the update fixed a vulnerability on a different platform, so it was completely unrelated to us. We'd frequently have points marked off because we couldn't be pinged or port scanned; I'd have to open the firewall to them just to be scanned. Our security would identify an attempted port scan as a hostile action and react by dropping all traffic from the source. Sorry my security stopped your scanning, but that's the intention of it. {sigh}

        After opening the firewall to them, and changing the version number on the web server (there were reasons we couldn't do the trivial upgrade), we passed with flying colors.

        They were only interested in the version numbers handed off by the server, not what was actually running. For example, since it was Apache, we could have had it report Apache version 9.9.9, and that would have made us pass that part without fail for years.
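Banner-based checks like this are trivially fooled. As a minimal sketch (a hypothetical toy server, not the actual setup described above), Python's standard library is enough to advertise any Server string a scanner might ask for:

```python
# Sketch: a server that reports a fabricated "Apache/9.9.9" banner.
# A scanner that grades security by the advertised version would
# happily accept this, regardless of what is actually running.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

class FakeBannerHandler(BaseHTTPRequestHandler):
    def version_string(self):
        # The entire "security-relevant" fact the scanner sees:
        return "Apache/9.9.9"

    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), FakeBannerHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

resp = urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/")
print(resp.headers["Server"])  # the "version" a banner check would record
server.shutdown()
```

The point is only that the Server header is an arbitrary string under the operator's control, so any audit that stops at the banner proves nothing about the software behind it.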

  • by julesh ( 229690 ) on Saturday February 06, 2010 @06:10PM (#31048258)

    Where's that quote from? I can't find it either on the page or in the PDF...

    It's the submitter's opinion. And it's quite accurate: no standardized set of requirements can guarantee security, because security is much more complicated than the simple kinds of rules such standards can include. PCI compliance gives the illusion of security where it may well not exist at all.

  • by Hero Zzyzzx ( 525153 ) <dan&geekuprising,com> on Saturday February 06, 2010 @07:20PM (#31048722) Homepage

    My favorite from a past employer - one of these PCI scanning companies asked us to take down our iptables rules for a set time period while they scanned us. That's right, they wanted us to be less secure while they checked how secure we were.

    We were eventually able to get an IP range from them, but not until we fought them a bit. They *would not* do the scan unless we took down our firewall. I wanted to just REJECT everything but 80 and 443 and not tell them, but the higher-ups told me to play along.

    Anyway - the whole idea felt really ... wrong. And they didn't point out anything useful, either.

  • by francium de neobie ( 590783 ) on Saturday February 06, 2010 @08:51PM (#31049328)

    Basically everything about the web today is just one dirty hack upon another bunch of dirty hacks. SSL and TLS are a good example. JavaScript is another. Everything built on top of JavaScript, such as AJAX, is a huge hack. So it's no wonder that it's so damn easy to write insecure web apps.

    <Sarcasm>
    Basically everything about the Internet today is just one dirty hack upon another bunch of dirty hacks. Ethernet and IP are good examples. TCP is another. Everything that does not limit itself inside a single OSI layer, such as PPPoE, all kinds of VPN and NAT, are huge hacks. So it's no wonder it's so damn easy to exploit remote machines over the Internet.

    ...

    We need to throw it all away. Everyone routinely does this with their plastic bags; we now need to extend that practice to all Internet protocols. We need to start again. But will we? Probably not, and that's quite unfortunate.
    </Sarcasm>

    No. The reality of our current computing technology is that, for anything non-trivial, there is most probably no complete mathematical proof that the system is perfectly secure, and thus most probably at least one exploit that breaks it. Even the most basic of protocols, like TCP, has been shown to have numerous flaws over the years - most of them specific to implementation details or to things that aren't clearly defined in the protocol itself. Even if your theory and system design are perfect, you can still have plenty of errors in the implementation. Build a system with a million lines of code, and it takes only one mistaken line for someone to exploit it. That one mistaken line can happen in any programming language and any computing environment, no matter how rigorous. All it takes is three seconds of carelessness in the few years you took to implement the system.

    Making the system architecture simpler can and does reduce the number of vulnerabilities - although it does not eliminate them. However, throwing everything away is usually not a good idea unless the current solution is shown to be totally unworkable. The problem with throwing it all away and starting from scratch is that you throw away all the fixes embedded in the previous system, and humans are remarkably bad at making sure ALL the previous mistakes do not recur in the redesign. Remember, from the tiniest integer read from the database to the most grandiose thing you see on the UI, it takes only one single mistake among the billions of operations between your client and the server for a security vulnerability to happen. If you think you can write code professionally for three years without one single mistake, without one single typo, ever - fine. But I don't think I've ever seen such a person.
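The "one mistaken line" point is easy to make concrete. A minimal sketch (hypothetical schema and login functions, not from the article): a single line that interpolates user input into SQL, instead of using a parameterized query, is enough to bypass authentication entirely:

```python
# Sketch: the classic one-line SQL injection mistake, using an
# in-memory SQLite database with a made-up users table.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_buggy(name, password):
    # The single mistaken line: attacker-controlled input is pasted
    # directly into the SQL string.
    query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return db.execute(query).fetchone() is not None

def login_safe(name, password):
    # The correct version: placeholders keep input as data, not SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return db.execute(query, (name, password)).fetchone() is not None

payload = "' OR '1'='1"
print(login_buggy("alice", payload))  # True: authentication bypassed
print(login_safe("alice", payload))   # False: injection attempt fails
```

Everything else in the program can be flawless; that one line of string formatting is the whole vulnerability, which is exactly why scanning tools that miss half of such flaws are cold comfort.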
