Security Through Obscurity A GOOD Thing? (329 comments)
twrayinma writes: "In this story Marcus Ranum, CTO for Network Flight Recorder, claims that 'Full disclosure is creating armies and armies of script kiddies' and that 'grey hat' hackers aren't really interested in better security."
Re:That's funny (Score:2)
Wouldn't you find it more hypocritical if Slashdot, the NEWS site, decided not to post a story they knew their readers would be interested in, just to obscure their source and "maintain security"? People would call them hypocrites for that too.
Exploits for Dummies (Score:2)
It seems like the issue he _should_ have been focusing on is the problem of a clever minority creating easy-to-use pre-packaged cracking tools, thus empowering masses of dumb angry kids who would otherwise be completely harmless.
Clearly one motivation for widely distributing a cracking tool that any idiot could use would be to force an unresponsive & incompetent software company to fix obviously dangerous problems that it might otherwise continue to ignore. This could be seen as legitimate activism.
While a penetration tool that is easy to use facilitates legitimate security auditing, it seems reasonable to question the judgement of lowering the threshold of competence required to wage an effective attack...
I say, Release the hounds! (Score:2)
It was a security issue (DoS). It was obscure (MS sure as hell didn't tell me). I got nailed. Security through obscurity failed in this particular instance. It would be interesting to do a comparison of various exploits to see how they work out, rather than us all shouting opinions, ambiguous logic, and, in my case, lousy anecdotal evidence.
Script kiddie competence level (Score:2)
Then some 5kr1p7 k1dd13 stumbles on an exploit and suddenly the bulk of the world's computer users is vulnerable.
I think you're rather overestimating the typical level of skill and competence possessed by the average script kiddie.
Most anecdotal evidence (and certainly my server logs) points to the fact that most attacks consist of running whatever tools they have over whatever hosts they 'like the look of'. If nothing cracks, they move on.
I don't honestly think that, if the tools were to dry up, these same kiddies would actually bother to learn about the theory and practice of security. I'd bet that most of them don't even understand how TCP/IP works, or know how to program beyond a trivial level in C.
Cheers, Nick.
Re:Bah.. (Score:2)
Just looking at the code isn't good enough either: you have to understand it before you can start seeing security problems. Only a very small percentage of the users of most pieces of software are going to bother examining the code in depth. Imagine how many users a product would have to have before 1 million people (2 million eyes) were actually looking at the source and understanding it sufficiently to spot problems.
Re:He's missing the point. (Score:2)
I also doubt your socially conscious kiddies do many public education activities or letter writing campaigns before causing damage.
We need full disclosure!! (Score:2)
One of the first things I do every morning is check the security sites to see what bugs may have popped up. Then I check the versions against the versions we have installed. Then I take action: replace, patch, whatever it takes.
Yes script kiddies give me headaches, but I would rather put up with them than to have my systems cracked and not even know it, or be able to track the problem down.
We were hit a while back by the DNS DoS attack; somehow I missed that report. But I was very happy to find the fix for the problem when I finally traced down what the symptoms were. Without full disclosure, I would still be getting hit with it. Duh!
Full disclosure is a two-edged sword, it can cut either way, but I would rather have it.
flipside (Score:3)
Consider the Alternative to Full Disclosure (Score:4)
Frankly, partial or non-disclosure keeps the information from the people who really need it. Academics need the information to keep up with and understand what a vulnerability really is. Things like CERT [cert.org] advisories are useless for this. They don't have the information needed to figure out what the vulnerability really is and how to classify it. Another group hurt by partial or non-disclosure is sysadmins. If a sysadmin scans bugtraq even weekly, he can often have a patch or workaround for a vulnerability in his systems long before the vendor releases anything. Open source really rules here where there are usually alternatives such as fixing the code or getting a different free package put up instead.
Even if there exists some cabal of fully informed individuals, they are always going to leave out many of the folks that need the info. Face it, most vulnerability information is useless without enough info to exploit it.
Slashdot mislabelling Article (Score:2)
What I read is that he thinks it is a bad plan for people who find vulnerabilities in software to release no-brains tools to exploit them and to do it because it is profitable to them.
He didn't say, "Don't tell everyone about the security problem"; he said tell the appropriate people first, don't do it for your own gain, and finally don't put up a website with a set of tools to exploit the vulnerability that script kiddies can use.
The bigger question is: why didn't Slashdot label this right? Is Slashdot being run by script kiddies?
Definitely not cut & dried... (Score:3)
I am a subscriber to bugtraq (isn't everyone?), and typically, when a vulnerability is found, one of three things happens:
You'll notice a common element in my list: All of them contain the phrase "working exploit". Many, many of the "I found this bug" postings to bugtraq contain a fully functional script to demonstrate the problem -- A remote root exploit includes a script to (yes, that's right!) give you root on a box, remotely. All a cracker really needs to do is subscribe to bugtraq and wait, and the tools he needs to do his job show up in his lap. Sometimes, these are tools and exploits already found "in the wild," but just as often, they are not.
This, in particular, I have a problem with. In the vast majority of cases, it is possible to explain and demonstrate a security bug without having to ever make an exploit that actually works. One author, recently, posted a "proof of concept" exploit that required, among other things, a good working knowledge of PPC assembly to actually turn into an exploit. He demonstrated the security problem quite well, without giving "script kiddies" a tool they could use to break systems.
Now, granted, there are plenty of people who can take information about a vulnerability, and turn it into working code, and distribute it. These are the real hackers amongst the cracker crowd. But I don't think we need to be making the script kiddies' jobs easier by handing them working exploits on a silver platter.
Then again, these same "real hackers" are perfectly capable of finding these bugs on their own, so hiding an exploit from them (working or non) doesn't really gain you all that much.
I think that, overall, full disclosure is a very important thing -- That's "full disclosure" as in "give everyone the information they need to identify, demonstrate (if feasible), and fix security problems", not full disclosure as in "give away the farm by posting perfectly functional exploit code before you even tell the vendor". Disclosure of their dirty laundry to the world has goaded a number of vendors into fixing long-standing problems with their software. Without forums like Bugtraq, these problems would persist, with only the bad guys knowing anything about them.
The other advantage that full disclosure gives is the ability to discuss and learn from the mistakes of others. For example, there is currently a discussion happening on Bugtraq regarding user-supplied (or otherwise variable) format strings for *printf-style functions and how they can be abused to give visibility into a (privileged or otherwise) process. Though a true solution may never be reached, I've seen more discussion on the topic in the past few days than I've seen on that topic in the entirety of the rest of my life, and that can't be bad. Discussions of this type pop up from time to time on bugtraq, and I'd dare say that anyone who cares to listen to them can find themselves writing more secure code very quickly.
Of course, there's also the downer: Most of the issues I see discussed on bugtraq nowadays are the same types of problems ... that I saw discussed on bugtraq 5 years ago ... Which are the same issues as those brought up by the Morris worm more than 10 years ago. Pity that we'll never learn. *sigh*
Bah.. (Score:5)
And then you have companies like Microsoft, who when notified of an exploit by say, USSR Labs, on June 11th, don't get a fix out, and instead wait until it goes public, and then say "we'll have a fix out this afternoon!"
The only way to get some things fixed is kick companies in the ass, and making holes public is a great way of doing it.
He's missing the point. (Score:4)
Nader had to wait years for Congress to pass laws forcing the auto industry to tighten up. I think hackers are a bit more effective. They're forcing companies to tighten up at "Internet speed".
Re:Yes, source code for exploits should be release (Score:3)
Well, if there is no manufacturer (Linux, for example) you really have to post your code to the kernel mailing list, which is publicly available. I agree that ready-to-go exploit code isn't very ethical to provide, though. There's a difference between a five-line code snippet that demonstrates the problem, and a nice GUI client that anyone could use to unleash an attack. Of course, some admins would use the nice GUI tool to test their own networks, but there's a limit to how far you can extend that justification.
...and it's flawed, too. (Score:2)
They might not be too happy with Dick, but their house is now secure.
Better REAL security than false illusions of security.
Security is a process, not a state (Score:2)
I believe the solution here is to create a standard methodology for this kind of stuff, which would go something like this:
1) Exploit is discovered and announced in very generic terms. No tools or detailed exploit instructions are released. This could be an "announcement" on bugtraq.
2) 30-day clock starts ticking. Release the tool to the vendor but no one else.
3) If at end of 30 days the vendor has not provided an effective patch, release the tool and detailed exploit info.
4) If the vendor has provided a patch, don't release the tool. At all. Ever.
The mystery here is why people listen to Ranum... (Score:5)
No wonder he's not fond of "gray hat arms dealers".
Of course, nothing he is saying is backed up by any real researchers. In cryptography, cryptanalysis is a foundation upon which theory is built. Analyzing and breaking algorithms is the respected, hard task. People like Bruce Schneier repeatedly publish papers disclosing flaws not only in cryptographic algorithms, but in protocols that use them!
MJR's nonsensical position is even more amusing given the people he consorts with and praises. NFR went through much effort to publicly associate themselves with the L0pht --- probably the most well-known active source of full-disclosure security information. He also sticks up for people like Dan Farmer and Wietse Venema, both of whom have published information and tools about new security flaws.
The message here is not that "full disclosure is evil". What Marcus longs for are the olden days of private security mailing lists, where only his friends got information about security flaws. Those were also the days in which literally every piece of software was riddled with stack overflows and the most common way of breaking into remote computers was by mounting public NFS shares.
I understand why MJR doesn't like people outside of his insular little clique publishing and discussing security information. But it would be silly to pretend that anything he says is motivated by a desire to secure the Internet.
Re:Well, of course. (Score:2)
The ideal response would be for the company to publicly announce a recall due to security problems with the lock (just as car manufacturers do with recalls). They would repair/replace free of charge.
However, they wouldn't give out explicit details on how to exploit this problem. That would be silly, and would obviously encourage the less scrupulous types to take advantage of it.
Thus, you have public exposure, a fix, and no unnecessary information given out to the baddies.
The real issue is what to do regarding companies that ignore security problems, even when brought to their attention.
Re:Why Script 'Kiddies'? (Score:4)
If you really want to know where "script kiddie" comes from, just look at this line from your own post: "But I'm sick of all this childish behavior...". That's exactly it. We call them "kiddies" because their behavior is childish. Immaturity below their years.
You, being responsible, are not a kid. You are a young adult. And yeah, it sucks that you're treated like crap by know-nothing adults because of the idiots in your age group. But unfortunately, what we call those idiots isn't going to change that. The only thing that will change it is to educate those know-nothings who are unfortunately in charge of stuff they know nothing about in too many places.
Now if only I knew how to do that...
Re:Some of the things that need to be done... (Score:2)
Give Grey Hats the Right Incentives (Score:5)
In other words, be businessmen.
It appears the corporate establishment types are so concerned about real money going into the hands of young guys with an attitude that they would rather subject the Internet community to unnecessary risks, and their stockholders to violations of their fiduciary trust than pay the grey hats what they are worth.
For example, Dan Brumleve, the developer of DBarter [dbarter.com] (which won the Hackers Conference prize for "best work in progress" last year [dbarter.com]), was quite young when he discovered his first Netscape exploit, "Tracker" [netscape.com]. Netscape subsequently gave credit for finding the "Tracker" hole to a guy from Bell Labs. Their excuse for doing this was that they already knew about the Tracker exploit, having been told of it by Bell Labs -- an act that might have been rational if the Bell Labs exploit had been the one posted to Dan's web site. The problem was, Dan's exploit still functioned under Netscape's fix for the Bell Labs exploit.
Dan has documented the behavior of corporate establishment types in this fiasco [shout.net].
Inspired by such corporate establishment wisdom, Dan went on to discover and publish other exploits [shout.net].
At no time was Dan offered more money by Netscape than he was making as an independent contractor hacking Perl scripts for e-commerce web sites, although Dan did ask for such compensation.
Each time Dan published one of his exploits, Netscape stock went down 5%, and some of Dan's friends made some money shorting Netscape on advance knowledge of these exploits before Netscape was finally bought out by AOL.
OK, Dan's exploits may not have caused the Netscape stock price drops (though, try telling that to the guys who made money assuming they did). But even so, this attitude of controlling grey hats by legislating against them is going to drive them underground. Society has "punkified" a lot of these young men already, so threatening them with prisoner gang rape isn't going to twist their heads around that much -- aside from being a morally reprehensible, not to mention unconstitutional [geocities.com], way of dealing with any problem.
Mafia (Score:2)
Oh c'mon... (Score:2)
I don't think that causing fatal auto accidents is the real world equivalent of crashing some corporation's webserver. It's more like keying a car or something. Yes, script kiddies are annoying and they can cause quite a bit of damage, but let's try to keep a little perspective on it instead of equating them with murderers, ok?
Script kiddies and non-profit web sites (Score:2)
No, the problem is sites like my amazing.com or Kuro5hin, which are run by an amateur or amateurs and simply don't have the resources to track down every security patch and close every hole.
I was blasted off the net for over a month because of one security hole in the Cobalt Raq server I bought. I had the money to buy the server, but not the time to keep it safe.
Kuro5hin had a staff, but when confronted with the type of attack Yahoo's sysadmins get paid big money to guard against, they weren't able to help. Part of the problem was that they were unpaid volunteers, and I'm sure they - like me - had a boss yelling at them for not doing the work they were hired to do while they attempted feverishly to deal with the problem.
This is why the script kiddie is such a big problem in my mind; it threatens what's left of the non-profit, more or less communal/genteel part of the net. I'm as much of a capitalist as anyone, but I still have a special spot in my mind for this kind of thing.
I am all for a policy that avoids any disclosure of security holes. I don't even care if my machine is secure; everything on it is public anyway. I don't want to have to care if my machine is secure, and neither should anyone else who sets up a volunteer or individually-run site for the joy of sharing interesting stuff with the world.
Unfortunately, revenge on the script kiddie, sweet as it might be, takes resources. Yahoo has 'em and can catch 'em if it wants to; I simply don't have the time and don't want to take the effort. So my machine is vulnerable, and I don't know what to do about it other than shutting the whole thing down.
A very depressing choice, let me tell you.
D
Re:Why Script 'Kiddies'? (Score:2)
I wouldn't exactly say that. Teenagers spray-paint and blow up mailboxes. By definition, they are immature. It just so happens that technology has empowered a lot of them to think that by performing mischief on the internet they are kewl haxxors. I'd say the above poster was mature *beyond* his age, as many smart and often techno-savvy kids are. But most kids his age are still tipping cows and giving wedgies.
Re:Bah.. (Score:2)
And why in hell should he be interested in helping you? And what do you care about his agenda?
He is doing you a service: he is telling you that a program you have is vulnerable. It's up to you to decide what to do with this information (standard reaction: ignore), but the guy who published the exploit owes you nothing and gives you useful information.
Kaa
Bah! Kids today have it good! (Score:2)
Bah!!
Kids today have it easy because we late-20-something GenXers suffered for the cause. And now hacking is très chic.
<voice="old-man">
Why, when I was a kid, from 6th to 11th grade (that would be circa 1983-1988), I was routinely beaten up and harassed because I spent my recess periods--and countless hours after school--programming the school's Commodore PET. OK, I guess I deserved it because I was skinny and wore argyle sox all of the time, and maybe I should have done book reports on other things besides Ada Lovelace, ENIAC and the transistor... but hey...
</voice>
Now it's ultrahip to be a hacker. Look at the exponential growth at DefCon (had to get a hotel in Barstow!!!)
Posers vs. Punks vs. Script Kiddies
Script kiddies are the equivalent of some poser who has 50 bucks and a few hours to kill, and thus becomes instantly hardcore by getting a tribal tattoo around his/her bicep... or maybe even a tongue piercing and bleached hair. Ooo. Bad-ass. If punks hadn't invented this culture in '75-'85, and pop culture hadn't then made punk culture a commodity, nerdy suburban kids would still be wearing Izods and ProKeds.
I hate being introduced by my friends as a hacker. And it's my fault, I should just lie when friends call me to go out on weekends and I'm too busy optimizing assembly code. I think the best thing real hackers can do is to help devolve the image of hackers back to being booger-eating social-skill-lacking losers so that we can have the quiet solitude of our underground handed back to us.
</rant>
Re:Publicly Announcing Bugs (Score:2)
Re:Yes, source code for exploits should be release (Score:2)
My problem is with the pre-written, ready to make/execute "demo" code. And if people won't believe that an exploit exists, send a copy to CERT or something, don't post it to USENET...
Yes, the information has to get out, but don't hand a gun to everyone to show them that your bulletproof vest has a hole in it...
Re:Some of the things that need to be done... (Score:2)
Exactly. Enough information should be given so that a security expert can find and protect/fix the hole, but code to exploit it should not be handed out. Without someone handing them the code, most script kiddies would be dead in the water. If they're smart enough to figure out how to write an exploit, then they're probably smart enough not to use it for evil.
Will this stop script kiddies? No, someone will make them a script at some point, but hopefully, it will slow them down and give us a little more time to patch the holes.
All this IMHO, of course.
On the other hand, how bad is it that the scripts are out there? As far as I can tell, a lot of sites don't start locking their doors until someone has come in through them. If all script kiddies dropped off the face of the earth today, security would probably go to hell. Would you feel more secure if the sites that kept your credit card info (as an example) didn't have to constantly worry about plugging every little hole that a script kiddie is going to use?
The internet is a hostile place. We are just going to have to adjust to that. Script kiddies and black hats are not going to go away. Ever. All the whining and finger pointing in the world is not going to increase security.
If you can't take the heat, stay off the net.
Re:Yes, source code for exploits should be release (Score:2)
Re:He's missing the point. (Score:2)
coding abilities (see this site [enteract.com] for what I mean).
But the reason they CAN do it is shoddy design or implementation of software by designers and sysadmins. I would rather have everyone know about a buffer overflow problem in sendmail or a DNS exploit than only the black hats. Sometimes sysadmins and designers aren't aware of problems, and a "grey hat" who creates a cut-and-paste exploit program makes them aware rather quickly. This gives impetus to fix it.
For instance, if I found out that every 3rd key in my town could open my back door, I would be concerned. I might have to fix that someday, in case anyone finds out. If I found out that someone published this information in the local paper and was giving away a machine to cut those keys, I would have my locks changed NOW.
And I would test my lock better.
And I would demand a new lock design so this could not happen again.
And I would make sure, possibly by lawsuit, that the lock maker doesn't continue selling that particular lock.
How long do you think MS, or some engineer that worked for them, knew something like Melissa or ILOVEYOU was possible but didn't bother to fix it until it happened? How long were the other holes around and used by black hats before they were uncovered and "published" by the grey hats?
Makes you wonder...
Re:That's funny (Score:2)
Now, repeat after me, slowly, until you hear a loud 'ding' (the sound of CLUE) -- Source code is not a right. If I don't like it, I will write my own and GPL it.
security is still lousy; disclosure is necessary (Score:2)
Without disclosure and actual attacks, security is not an economic incentive. It can't be demonstrated to customers, and it doesn't become a product differentiator. The occasional problems that might happen in a world without disclosure are not sufficient to affect the bottom line, particularly since software vendors are usually not liable.
We tried hushing up security holes for decades and it didn't work. In fact, it gave us the Morris worm--the exploits it used had all been well known for years without any vendor bothering to fix them--and VMS pseudo-security. Arguably, the current sorry state of computer security is still the aftermath of that approach.
I see nothing problematic about disclosure of security problems, whether by competitors or anybody else: it's the only policy that objectively makes sense from the point of view of the customers in order to create a market that supports secure products. "Script kiddies", far from being an annoying side effect, are the very mechanism that makes disclosure effective: without the economic threat from their activities, vendors would still have no incentive to fix their security problems.
In the long run, I hope that these kinds of economic pressures will get rid of the snake oil and tinkering around the edges (and that includes Ranum's own intrusion detection systems) and will force companies and developers to adopt tools, methodologies, languages, and cryptographic approaches that address security problems correctly. (Yes, this also means Linux needs to change.)
The canonical example of this. (Score:2)
A very amusing example of this is buried amidst the Jargon File pages:
http://www.tuxedo.org/~esr/jargon/html/The-Meanin
Regrettably, "killall" would probably stop this hack in its tracks now, but it's still very amusing reading.
Re:Script Kiddies Considered Helpful (Score:2)
And I guess shooting you is OK so it forces you to Get A Clue and wear a bullet proof jacket?
Crime is crime, and excuses of the form "it was only done to show that you haven't spent countless hours and gazillions of bucks securing yourself" don't hold in court for theft, rape or murder. Why on earth should computer crime be an exception?
-- Abigail
Re:He's missing the point. (Score:2)
Let's see. You know about a buffer overflow problem in sendmail. You have two options: public disclosure, or no public disclosure. According to you, there are two outcomes: everyone knows, or only the black hats know. Since with public disclosure everyone knows, no public disclosure leads to only the black hats knowing. Ergo, you must be a black hat, and even more, anyone you tell has to be a black hat too. Including (gasp) the people maintaining sendmail and CERT. (Or perhaps you wouldn't tell them, and willingly restrict yourself to black hats.)
Of course, if you aren't a black hat, your reasoning must be flawed. (And I think it is. There *is* something between everyone and only the black hats. But I leave that to you to figure out.)
How long do you think MS, or some engineer that worked for them, knew something like Melissa or ILOVEYOU was possible but didn't bother to fix it until it happened?
Ah, I guess you believe that sendmail is maintained by people using the same policies as MS. Just like everyone else...
-- Abigail
Either go public or they won't fix it. (Score:5)
I had to write an article in a (German) computer magazine under a pseudonym, then take that article to the local vendor's office and say "Look, now it is even in the papers" in order to get a reaction from them. IBM didn't care a shit about security back then, unless they were forced to by publicity.
This has thoroughly changed now, but only due to full disclosure.
And even now you need disclosure and publicity to get people to get their act together. A large German online bookshop had their server wide open for nine months after I informed them that I was able to connect to the Oracle database on their webserver using my own Oracle installation, and get all their credit card data. Only after they ended up in the same German computer magazine did they decide to firewall themselves shut.
With open source the situation is better, but only slightly. I was able to break out of safe_mode in PHP 3.0.13 and below using a bug in their popen() implementation, and fixed it in CVS. I then posted the bug on bugtraq, forcing the PHP team to release 3.0.14 with the fix immediately. Nice reaction, but the core team didn't like me publicizing on bugtraq.
When I found a similar bug to break out of safe_mode using the mail() function, I did not create a fix, and did not post to bugtraq, but informed them privately of my findings. The fix went into CVS in under three weeks, but 3.0.15 was released only three weeks after that.
I find this disappointing: even in open source you get an appropriate reaction to security issues only by forcing updates through full disclosure. Well, I for my part have learned my lesson: if I find a security-related bug, it goes to bugtraq - no delay, no mercy. The waiting ain't worth it.
© Copyright 2000 Kristian Köhntopp
The article fails to give 1 good reason. (Score:2)
I'm sorry, but it is easy to lock down a box. On the same note... if your web server is the same as your accounting database server... you deserve to lose everything when it gets nailed. Your Internet equipment has to be outside the protected zone... if it isn't, then your company is just plain stupid.
I can see problems like kuro5hin happening, but no script kiddie is gonna take down ibm.com .
We are currently at a crux of digital civilization... technology abounds, and very few actually know how to administer it. And we have a large number of people masquerading as those who know how to administer it (MCSEs, 60% of all the sysadmins out there, etc.).
In 10 years things will be different... I see more chaos coming, but more effective filters or private "internets" will rise to meet the demands.
If you want to gauge the chaotic levels of the internet in general... just spend one night as an op in any IRC channel.
Security by disclosure (Score:2)
With security by obscurity, usually a cracker discovers the defect and releases an exploit.
We discover the defect only after the cracker's efforts have resulted in many victims.
With security by exposure, the defect is discovered and made public. Now it becomes a race. Who will win, the patch or the cracker?
In the case of Linux, the patch. The cracker won't bother. His efforts are for the long term, not the short term. A sysadmin will patch his system and release the patch in a short time. The sysadmin is motivated not only to fix the defect but to make sure it stays fixed (so he doesn't need to fix it again on a future update).
In the case of Microsoft it WAS the cracker, as Microsoft didn't take it seriously. But now they do, so it's an even race.
Any time a company doesn't take the issue seriously, the cracker will win. The public also wins, as this provides a hot poker for anyone who would not take security seriously.
If the company perpetually fails to patch defects, the company becomes known for defects and professionals are discouraged from using the defective products.
It also helps in monopoly cases... to prove a lack of concern for the customer.
Security by disclosure wins out...
After all, the consumer can't know if a defect is being ignored if the defect isn't disclosed publicly. If a hacker doesn't expose it, sooner or later a cracker will exploit it... the exposure just gives the good guys a better chance and the customer a heads up.
Bogus article (Score:2)
Yeah, right.
The problem remains that systems are still being shipped with security holes you could drive an 18-wheeler through. This is unacceptable behavior on the part of manufacturers.
What we need is product liability for this sort of thing. A few billion-dollar lawsuits will make Microsoft, Red Hat, and VA Linux get their act together.
Re:MS discloses nothing so they must be unhackable (Score:2)
If your assertion is right, one of the biggest strengths of the open-source operating systems will cease to exist as their market share increases. The fact is, a huge proportion of computer users don't, and never will, keep track of security issues.
So, if/when Linux has an 80% market share, any bugs that are discovered are going to remain unpatched unless there is some sort of automated system (which, as you pointed out, is not necessarily very effective).
The problem seems to be with having lots of non-tech savvy users, not necessarily with open/closed source development.
Bruce Schneier says it very well.. (Score:2)
http://www.counterpane.com/crypto-gram-0002.htm
Re:he has a point - but it's misinterpreted (Score:2)
Amen to that. Script kiddies are just that -- ethically impaired children who might know just enough to install RedHat and launch a script from the commandline after having it explained to them ten or twelve times on some IRC channel. Very, very few of them have the knowledge necessary to build their own tools.
I'm all for full disclosure in much the same way that I am all for well-regulated gun ownership. Keeping the info flowing is one thing, but it's quite another to mass-distribute cracking software to script kiddies, just as there is a difference between licensing adults to own guns and just leaving a case of handguns in a high school locker room.
Ultrix and BSD (Score:2)
That may not have been true for the MIPS release, but the Ultrix at the time was BSD 4.x, I think 4.2...
The myth of many eyes (Score:3)
It's about time somebody stood up to the legions of open source zealots and told them that their cherished view of "many eyes makes bugs shallow" is little more than McCarthy-like jingoism rather than a solid foundation for security.
I'm not saying that obscurity is good for security either mind you, but the fact is that when you have the source code to a product at hand, it becomes a hell of a lot easier to find exploits with a debugger and a bit of patience than it would be with a raw binary. And thanks to the "efforts" of system administrators who would rather spend their time playing Quake than downloading the latest patches and bug-fixes these exploits put thousands of sites that rely on open source software at risk.
The many eyes mantra only applies when many eyes are actually looking at the code. In most cases there are about two people (the programmers) who actually look through the code to fix it; everyone else looking is a cracker hunting for their next backdoor.
This is an area in which there is so much FUD, from both sides, that a reasoned debate is next to impossible. Until the zealots stop and think, security is going to be something that is argued about rather than realised.
---
Jon E. Erikson
Re:Some of the things that need to be done... (Score:2)
Full disclosure helps, but in some cases goes too far: does source code for a particular exploit really need to be published? In reality, when an exploit surfaces it should be publicized, but not in detail. This would give reputable companies time to fix it (presuming the finder gave details to the company, and perhaps to a handful of reputable security experts who might be able to create a workaround plus IDS fingerprints).
The big question for me is: who are the reputable handful? When you limit it to some arbitrary number, decided by whoever finds the bug, you have different gradients of information in the field. That is, some know, and others don't. You leave it to be a judgement call, and everyone gets screwed over in the process.
Then you said...
DoS'ing, cracking, exploiting, rooting, sniffing should all be classified as illegal, and penalties must be established. Although the cost of tracking down perpetrators is high, the increasing number of these l337 scr1p7 k1dd13s is only going to cause more and more financial loss, especially as the Internet becomes more ingrained in society.
Fine, all well and good as long as you can adequately measure 'malicious'. Rooting, sniffing, and exploiting are not always malicious. Hell, security folks who find vulnerabilities would be out of work. The boys and girls who find the bugs in the first place would be incarcerated. (That would at least solve your security-holes-for-the-script-kiddies problem.) Malicious depends entirely on the act and whose view you're looking at it from. I may scan someone's box without malicious intent, but they may think it's terribly intrusive and serving only a sinister plan.
Software Vendors Created Full Disclosure (Score:2)
The first full-disclosure lists were founded by sysadmins who were frustrated at the lack of response by vendors, as a "self-help" resource. Some of them even started out as "partial disclosure" lists, thinking that vendors would wise up and fix their bugs. Did it work? Nope.
Heck, even in this day and age, vendors are still stupid. Every so often, a bug gets posted to BugTraq without an exploit, and the vendor gets all pissed, calls the submitter a liar, and threatens a slander lawsuit. The only way to shut them up? Publish the exploit.
That's when people like Ranum get all pissed because the poor submitter defended his honor after being slammed in a public forum. Well, cry me a river.
The current situation isn't ideal by any means; I think exploits shouldn't be posted except in cases of flagrant irresponsibility or hostility, just as one example. But let's not pretend that it's irresponsible, or even immoral. It's the lesser of two evils.
If Marcus Ranum wants us to stop publicizing cracks, then he had damn well better be ready to deliver a guarantee that vendor response will be better in an age of secrecy. Can I sue him if a vendor sits on a report and doesn't fix it for six months, and a cracker uses it to trash my system?
Until that happens, get used to full disclosure. It isn't going away as long as the USA has a First Amendment.
OTOH... (Score:3)
Re:He's missing the point. (Score:2)
I kind of agree that script kiddies are providing something of a benefit to consumers of software and information services by giving a compelling incentive to software makers and online businesses to pay much more attention to security.
Difference in the analogy is that Nader was intentionally focused on exposing the safety problems of the auto industry. Script kiddies probably mostly don't give a rat's hindquarters about the side effect that occurs from their activity.
Re:Why Script 'Kiddies'? (Score:2)
Wait til they find out I upgraded the RAM in a few other ones
A lot of these people understand very little. A friend of mine ran into problems at university because she copied a program used for her course from the university network. They found out, and despite large letters on the about box saying "Freeware - Please Distribute Freely" they told her that "you can't just copy a computer program" and asked her to return the CD.
Not even debateable (Score:2)
We aren't disclosing holes just to the kiddies, we're disclosing them to everybody willing to listen. That means people can know when there's a problem, to which they have every right. Otherwise, if one person found this hole, that information will get passed on and on until you get cracked and have no idea why. Security through obscurity is only appealing to lazy sysadmins who don't want to bother with actively keeping their system secure (by visiting BugTraq, etc.) and instead want it to be done passively (Microsoft releases a new SP). This is no way to maintain a server and thus no way to look at security.
Re:That's funny (Score:2)
No.
--
Re:What's the distinction between... (Score:2)
A conflict of interests (Score:2)
There is a conflict of interests at work here:
Full disclosure is suboptimal because people have better things to do with their time than upgrade and patch software. No disclosure is suboptimal because there is no pressure on software vendors to fix holes.
Full disclosure works well with stable software. Eventually bugs get fixed and the continuous public review will have created a secure product which can be used for years and years.
It is with rapidly changing software there is a problem. And in these days where "internet time" is a valid excuse for anything, rapidly changing software is what we have lots of. For this, we need a good idea for how to strike a balance between the two extremes, full disclosure and full secrecy.
One idea is to have a "waiting period". When someone discovers a security problem, they inform the vendor but wait some amount of time, say a month, before informing the public. That way not only will a fix be in place when the problem is publicized, but frequent upgraders may already have the patched software running. The software vendor, knowing that the problem will be publicized at the end of the waiting period, still has an incentive to get it fixed.
Of course this doesn't help the masses with years-old software. Someone please come up with an even better idea!
/A
you people are a big bag of contradictions. (Score:2)
open source this, free software that, I want to see the source or I refuse to use your software.
but then the minute someone starts distributing some dangerous source, holy shit, this is a terrible disservice to the community.
did you ever think that if you read bugtraq and you see a brand spanking new bug that someone discovered and exploited last night - if this bugtraq post contains complete working source for an exploit and complete instructions on how to use it - then you can turn on your machine, look at the bug, look at the source, and make your own fucking patch? if you're running a machine on the internet and you have the capability to defend it from attacks by fixing source, and someone is GIVING you, for FREE, all the knowledge necessary to fix this bug and assure yourself you're no longer susceptible, it's your responsibility to fix it.
don't bitch that vendors aren't fast to respond, don't bitch that these exploits are dangerous and should not be distributed. the fact is, they ARE distributed, and instead of complaining, you should be really, really obscenely happy that whoever writes these things is nice enough to tell you about them instead of just rooting you over and over.
Why Script 'Kiddies'? (Score:4)
Re:Well, of course. (Score:2)
Obviously, if *you* know you have a problem, you don't advertise it.
But what if the hacker knows you have a problem, and you don't know about it, and can't find out? *NOW* you have a real problem. At least when vulnerabilities are published, you can *expect* that people will try to haxx0r you.
a) Your lock is defective, and
b) only the criminal organization knows that the lock is defective, because someone from the lock company sold them the information.
You are far safer
Re:Some of the things that need to be done... (Score:2)
Documenting the exploit isn't the same as posting ready-to-build-and-run source code. Describe the exploit on a packet-by-packet level, and forward it to Cisco, CERT, and other appropriate security fora.
Once you've given good, solid information that anyone with good knowledge of TCP/IP can understand and verify with a moderate amount of effort, you've done your part. It isn't your job to "prove" anything; if you get told "put-up-or-shut-up" then just move on.
Re:Shifting the Blame (Score:2)
Army of script kiddies... (Score:4)
If the holes weren't published, you wouldn't be alerted to them, and then the only people who knew about them would be people who
Would you rather have a giant (but pitifully unskilled) army in front of you, or one very skilled assassin behind you?
he has a point - but it's misinterpreted (Score:5)
Who was cracking the LANManager password scheme included in Win9x before l0phtcrack was released? How many DDoS attacks had you heard of before the release of trinoo, etc? What about fragmented IP packets before teardrop?
The real problem with full disclosure is not that holes aren't patched - publicly announced bugs usually do get fixed sooner rather than later. The problem is that users don't always deploy the patches. In the meanwhile, well-meaning (or otherwise) "grey hats" who have coded exploits to holes they discovered - usually in order to enhance their media shebang and sell more of their own security "solutions" - have handed a tool to skript kidz who simply hunt the net until they find a box whose harassed admin hasn't installed the latest patch. Alone, many of these "crackers" couldn't crack a paper bag. With the utilities in their arsenal, it's trivial.
See this related article written by the l0pht:
http://www.l0pht.com/~oblivion/soapbox/index.html [l0pht.com]
I'm all for disclosure of security holes - it keeps vendors honest, and it allows for creative security community solutions. It may not be in the best interests of the world (and info security does have a global impact these days) to code actual *demos* in order to pressure vendors into implementing fixes. Just explain the hole, explain the danger, heck, even explain a step-by-step exploit. Just don't code the bitch. Your neighborhood harassed admin will thank you.
-konstant
Yes! We are all individuals! I'm not!
Re:Well, of course. (Score:2)
Instead, I'd go a few blocks over, find the house with a good, expensive lock, and break through the window.
Let's do that Other Industries comparison again.. (Score:4)
Does the car industry bewail him finding that problem with the car? Well.. Correction.. Do they bewail him
Now.. Let's take that one step further.. An
Does the automotive industry scream? Yes, for a little bit.. But they issue a retrofit pretty damn quick. Would they scream if he hadn't told everyone about it? Would they hurry with the refit? Would people trust them, in the future, by default?
In the I/T world, the best approach, with so many faulty packages, is a belt-and-suspenders approach. Layer several 'impenetrable' and 'infallible' packages in such a way that possible weaknesses should be isolated and shielded, then apply careful monitoring. And the
For all these companies, complaining about how a grey hat's article on such-and-such bug ruins the safety of their entire site, I have ZERO pity, because they have obviously made the mistake of placing all their eggs in one basket.
Re:Why provide ready scripts? (Score:2)
A clear, concise description of the flaw and what should be done to work around it or fix it should be given, but in many cases we do not need root exploits released like this.
If it's ever necessary to distribute a tool to determine if you are vulnerable to some bug (in case for whatever reason it's not immediately obvious), the only thing that should be written is a tool that says "yes" or "no". Sure, somebody will be able to look at the code and figure out the nature of the bug, but the point is that the "exploit" itself cannot be instantly used by thousands of script kiddies. If it's necessary to distribute detailed information/code about a vulnerability, at least do so without providing an out-of-the-box exploit to any kid on the 'Net.
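The "yes/no tool" idea above can be sketched simply: probe a service's banner, compare its reported version against the known-fixed release, and report vulnerable or not, without shipping any exploit logic. The service, version cutoff, and banner format here are all invented assumptions.

```python
# Hypothetical "yes/no" vulnerability checker: banner-grab a service
# and compare its reported version against a known-vulnerable range,
# without containing any exploit code. The version cutoff is invented.
import socket

VULNERABLE_BEFORE = (2, 4, 1)  # assumed first fixed release

def parse_version(banner: str):
    """Pull a dotted version like '2.3.9' out of a banner string."""
    for token in banner.split():
        parts = token.strip().split(".")
        if len(parts) == 3 and all(p.isdigit() for p in parts):
            return tuple(int(p) for p in parts)
    return None

def is_vulnerable(banner: str) -> bool:
    version = parse_version(banner)
    if version is None:
        return False  # can't tell; report "no" rather than guess
    return version < VULNERABLE_BEFORE

def check_host(host: str, port: int) -> str:
    """Connect, read the banner, and answer only 'yes' or 'no'."""
    with socket.create_connection((host, port), timeout=5) as sock:
        banner = sock.recv(256).decode("ascii", errors="replace")
    return "yes" if is_vulnerable(banner) else "no"
```

A reader could infer the nature of the bug from which versions the tool flags, but nothing here runs out of the box as an attack, which is exactly the distinction the comment is drawing.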
Re:Well, of course. (Score:3)
Wow, that was a bad analogy.
The point is more like this -
"How is my house more likely to get broken into?
I have a door with Brand X lock.
1. It's discovered that Brand X locks suck ass. This makes the front page of the paper for some reason. You now have the information to get better locks (if you choose to).
2. It's discovered that Brand X locks suck ass. No one says a word about it, but those doing B&E's soon discover this, and go around casing all the houses with Brand X locks."
(disregarding for the moment that what kind of lock you have doesn't matter with respect to if your house is going to get broken into or not...)
That's more analogous to the situation here. The 'obscurity' doesn't refer to specific information (passwords, etc - in the lock's case, the specific makeup of your key), but to the information pertaining to the general workings of the security system (i.e. in the lock, how the tumblers work - how easy it would be to pick, etc).
Blah.
Why security through obscurity fails for most (Score:3)
~luge
It's called free speech (Score:2)
different people have different opinions about disclosing security holes. Clearly, if you know about a hole, disclosing it will warn people of a problem that bad guys might already know about, but it will also tell the bad guys about something they might not know about. There is no right answer to what the best policy is. In one circumstance it might help you. In another, it might harm.
But, if you believe in free speech, and freedom to explore, and in preserving diversity of opinion, live with it. I will note that the folks who complain most bitterly about disclosure are the companies who sell the buggy software, not their customers who are at risk.
Re:he has a point - but it's misinterpreted (Score:2)
Suppose a security flaw is found, but no code is made publicly available. So the vendor has the developers create a patch. The developers create a simple test case, make sure the patch works against it, and ship it. Suppose now that the test case they used was inadequate. Maybe it didn't fully use the exploit, or maybe there was a bug in it causing it to only use the exploit in one fashion.
After a few months, all of a sudden, some sysadmin finds that his "patched" system has just been cracked when a black-hat created a program that properly implemented the exploit.
Coding demonstrations of how the exploit works can be very beneficial anyway, because presumably these have been tested and debugged to fully demonstrate the flaw. It allows the developers to be completely certain that the patch actually completely and fully patches the system. Besides, if most of the script kiddies are going to use that version anyway, it cuts down on the chances of a black-hat writing a new one that gets around the fix. (After all, why bother reinventing the wheel?) It also means that there are fewer potential methods of using an exploit, so even if the patch is inadequate, it'll work against the most common script.
It's also nice to be able to look at code and see exactly how the exploit works. Fixing a problem when you can test it against a known working exploit is much easier than needing to first write a test program to do the exploit and then fixing the machine against that test.
The real problem with having a test case is when a company takes two months to release the patch while countless sys-admins are finding their servers are getting broken into. Then again, maybe if there was no demo, the sys-admins would get some time off before some black-hat coded a version anyway.
Trust me, there are people out there who would program an exploit they discovered off a security advisory just to see if it works. They want to show off those new m4d h4x0r 5k177z they learned in their CS class, and creating and releasing a program to exploit a known (but demoless) security flaw is one way to do that. All no public demo would do is buy some time. And I don't think it would even be much time.
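The patch-testing argument above can be sketched in code: a fix verified against the actual circulating exploit payload rather than a developer-written approximation. The record format, the length-field bug, and the payload are all invented for illustration.

```python
# Illustrative sketch: a "patched" input handler verified against the
# real exploit payload. The hypothetical vulnerability is an unchecked
# declared-length field; names and payloads are invented.
MAX_FIELD = 64

def parse_record_unpatched(data: bytes) -> bytes:
    # Trusts the declared length blindly -- the bug.
    declared = data[0]
    return data[1:1 + declared]

def parse_record_patched(data: bytes) -> bytes:
    # The fix: reject declared lengths that exceed the field limit
    # or the data actually present.
    declared = data[0]
    if declared > MAX_FIELD or declared > len(data) - 1:
        raise ValueError("declared length out of bounds")
    return data[1:1 + declared]

# The exploit circulating in the wild: declares 255 bytes, carries 4.
KNOWN_EXPLOIT = bytes([255]) + b"AAAA"

def patch_survives_exploit() -> bool:
    """Regression-test the patch against the known working exploit."""
    try:
        parse_record_patched(KNOWN_EXPLOIT)
    except ValueError:
        return True   # patch rejects the real attack
    return False
```

Testing against the genuine exploit, rather than a guess at it, is exactly what rules out the "inadequate test case" failure mode described above.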
Re:Consider the Alternative to Full Disclosure (Score:2)
A good advisory should include workaround information. If the person reporting the vulnerability can't do this, then perhaps he needs to pass his information on to a qualified security firm who can.
Anyone capable of writing a script-kiddie-compatible exploit should be quite capable of providing detailed information and a workaround/fix without necessarily releasing an out-of-the-box root exploit to every kid on the Internet.
As you are probably aware, poorly written "advisories" on BugTraq are typically followed up in short order with something with significantly more information (in quality and/or quantity).
If nothing else, if the author of the advisory feels a code example is required to accurately describe the nature of the bug, at least make the reader work to get a working exploit out of it. You can publish example code without publishing a functioning exploit.
IDS dependance? (Score:2)
Report your new vulnerabilities to NFR only, so they can keep them secret and roll them into the next signature file version? Hmm, strike NFR off my IDS solution list.
Like those guys at NFR aren't all subscribed to bugtraq.
Re:Consider the Alternative to Full Disclosure (Score:2)
Of course, if an exploit already exists "in the wild", you're not hurting anybody much more by posting it to an appropriate forum like BugTraq. At this point worrying about full disclosure is rather moot.
And regardless, every possible bit of code, example and exploit should always be sent to the vendor first (even if it's just a few hours, if it's urgent that the information be publicized as quickly as possible). It's damn inconsiderate to post something publicly without giving the vendor any time at all to prepare a response, fix or workaround (and this goes back to my point about sending information to a qualified security firm before blindly posting incomplete/inaccurate information.. The vendor could easily be considered "qualified" in this respect, IMO).
Middle ground? (Score:2)
Can't there be a middle ground? Can't we disclose enough information to accurately describe the problem, workarounds and/or fixes (preferably from the vendor itself, in the case of vulnerabilities not yet "in the wild") without publishing script-kiddie-compatible exploits that run right out of the box?
Security (Score:2)
If your system is B1-compliant, who CARES if a script-kiddie has a PERL script capable of stress-testing a Linux box?
If you're worried that the locks can't tell the difference between a key and a can of coke, then sure, obscurity is effective. But if you KNOW that your systems have bullet-proof authentication and that holes simply don't exist, then WHO CARES WHO KNOWS WHAT????
The fact is, a lot of "armchair experts" -don't- bother giving their code even a cursory audit, never mind anything stringent. If it compiles, run it and worry about the holes when Swiss Cheese companies start suing.
Even if there ARE holes, though, the environment SHOULD ensure that everything is in water-tight, sealed compartments. You mean, it doesn't? Then stop whinging about skript-kiddies and start coding! Side-effects, over-runs, undocumented pathways, etc, should NEVER be capable of accessing data or code that is not explicitly associated with that application.
Last, but not least, does this "expert" think that OB1 is any less secure for being available? Or is it MORE secure, for being checkable and usable by people other than SGI?
Tools should not be distributed (Score:2)
Security holes should be published though because it is the only way to prompt vendors and software authors to fix the holes. It also alerts users to potential security risks so that they can choose another product, defend themselves some other way, or look for the patch.
So the tools to exploit holes should probably only be distributed to a select few who are capable of fixing the problem and the problem should be published to prompt them to do something about it and to inform the public. Unfortunately, many people producing these tools are often doing it for their own egos.
Re:He's missing the point. (Score:3)
Hmmm
Chris
Wishful thinking (Score:3)
All you can do is go back to the Bad Old Days of closed source cathedral systems and hoping to ghod the vendors get around to fixing their systems some day, because the social structures that surround crackers and kiddiez give you higher status if you are among the first to propagate a new crack. When one of them knows, they all know. It's the same with any other group now--crackerz, Tori Amos fans, just whoever. If you have the info, you share it ASAFP and bask in the glory of being the first to break the story.
--
More scripts! Really! (Score:3)
Now that I've said something that everyone will agree with, let me explain why everyone else's comments are also wrong (or at least all of the ones not moderated down under 3).
I'm saying this as a data security consultant, and yes, it's my real job. I need, as soon as possible, to see the exact technical details of every new exploit. If someone has written an attack script, I need that too. Why? Any IDS that's worth the HD space it takes up allows you to write custom rules. If I know exactly what a given attack is going to look like, I can write very efficient rules to report/stop it. If I don't, I may have to guess what this attack looks like, or leave myself unprotected. Full disclosure reporting is the ONLY thing that provides this type of response for me, the guy who's really doing the work.
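The custom-rule point can be illustrated with a minimal sketch: with full disclosure you know the exact bytes an attack sends, so a detection rule can match them precisely. The signature name and byte pattern here are invented, not from any real advisory.

```python
# Minimal sketch of the payload-signature matching behind a custom
# IDS rule. Knowing the exact exploit bytes (thanks to a full
# disclosure posting) lets the rule be precise instead of a guess.
# The signature below is invented for illustration.
SIGNATURES = {
    "hypothetical-overflow-2000": b"\x90\x90\x90\x90/bin/sh",
}

def match_payload(payload: bytes):
    """Return the name of the first matching signature, or None."""
    for name, pattern in SIGNATURES.items():
        if pattern in payload:
            return name
    return None
```

A real IDS compiles such patterns into efficient matchers, but the principle is the same: the more exact the published exploit details, the tighter the rule and the fewer false positives.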
MS discloses nothing so they must be unhackable! (Score:2)
Lack of full disclosure means the security holes live much longer and propagate until they're very wide spread. Then some 5kr1p7 k1dd13 stumbles on an exploit and suddenly the bulk of the world's computer users is vulnerable.
Full disclosure may reveal vulnerabilities earlier. But that's the time to plug them and it gets done much more quickly.
Re:The myth of many eyes (Score:2)
Re:He's missing the point. (Score:3)
The equivalent of a "script kiddy" as applied to the auto industry of days past would be a driver who deliberately caused fatal auto accidents to expl0it the safety problems. Script kiddies don't actually find security problems; they just use crax0rz provided by grey hat sources (or by more knowledgeable black hats) to exploit the weaknesses. No thinking required.
New sets of problems... (Score:2)
potato... Potatoe (Score:2)
You're right. Nobody actually reads the source code. Ever. Somebody writes it, then it's never read again, by anybody else.
Certainly aspiring young software engineers would never read the code to find out just exactly how an OS works. Certainly I never did that. And I'm also quite positive that I never read the source to a driver because I needed to find out why our custom version of the same network card wasn't working. Nor did I read the source to various system utilities when they didn't behave as expected.
In conclusion, open source is doomed to a buggy abyss, only a commercial closed source release system, with a QA department can provide us with good, quality software, like Windows ME.
----------------------------
Security Through Threat (Score:3)
Okay, but what about the idea that they could be kept quiet for just a little while, while the good guys get it fixed? I think the STT people have decided that things don't work that way. Remember how effective it was when all the programmers quietly went to management and told them that there might be some problems coming up if they didn't start converting to four-digit dates? It took publicity and widespread fear before most businesses started putting serious resources into Y2K conversion, and it's not unreasonable that the same is true of security holes. Tell them that there's a potential problem, and get the runaround while the money goes to more immediately profitable things. But if the populace is whipped up over the prospect of another Melissa, there will be action.
I don't think that these grey-hat types are unaware that they're responsible for a lot of kiddie attacks. But perhaps if the kiddies are a force of nature, unstoppable by law or society, software companies will have no choice but to write good products, with competent security audits and up-to-date patches. That's a goal I can see someone willingly enduring a bunch of 1337 bullshit for.
- Michael Cohn
Some of the things that need to be done... (Score:5)
Egress filtering. Yep, it's argued earlier in the iTrace story...but it is a good idea. Perhaps a mandatory requirement that no ISP passes traffic that isn't within their IP allocation. (There is *no* good reason for routing somebody else's IPs, right?) Yeah, there might be an issue with speed of filtering, but it really is the only way to prevent havoc. (Oh, and iTrace is a step in the right direction too...at least a temporary one.)
Malicious activity should be viewed as just that. DoS'ing, cracking, exploiting, rooting, sniffing should all be classified as illegal, and penalties must be established. Although the cost of tracking down perpetrators is high, the increasing number of these l337 scr1p7 k1dd13s is only going to cause more and more financial loss, especially as the Internet becomes more ingrained in society. Cracking a system (even if there is no financial loss) should still be viewed as the intrusive crime that it is, and should be prosecuted. (Of course, that's very difficult across borders, but something *must* be done...)
Relying on obscurity to provide any level of security is a bad idea. There are talented people who can find flaws in any closed system, given enough time and effort. But this is no excuse to start handing out information that doesn't need to become public. A source code example isn't required to demonstrate a flaw to the public, so it doesn't need to be distributed.
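The egress-filtering idea above boils down to a single check: only forward traffic whose source address falls inside the provider's own allocations, so spoofed-source floods die at the edge. A minimal sketch, with invented example prefixes:

```python
# Sketch of the egress-filtering check: drop any outbound packet
# whose claimed source address is outside the ISP's own allocations.
# The prefixes below are documentation-range examples, not real ones.
import ipaddress

ALLOCATED = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def should_forward(src: str) -> bool:
    """True only if the source address belongs to an allocated prefix."""
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in ALLOCATED)
```

On real hardware this is an access list on the border router rather than code, but the decision it encodes is exactly this membership test.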
Re:Bah.. (Score:3)
Sure, Windows will *never* be bug free, and it's silly to expect that it will. Especially when you consider all the things that people think of as parts of windows, from the essential like ScanDisk and the Networking to the mundane like Solitaire...
But, if the OS is well designed, a program like Solitaire could never take out the whole OS when it crashed, so it could be dealt with separately. Only the core system would need to be rock stable; the rest could be restarted easily. (BeOS's networking dies on me every now and then (I'm breaking the rules using two identical network cards), which is annoying, but I can restart it with the click of a button, unlike in MS where I have to reboot.)
Once the system is stable and can't be crashed by a badly written solitaire game, you go on to bug-fix the important parts, the external programs, those that deal with the outside world.
Your HTML renderer, your network stack, your scripting, those need to be locked down.
A smart designer can tell what parts of the system need to be secure and which don't. If the attacker could only get to one bug by already having exploited a larger bug (crash solitaire by using a buffer overflow in networking to execute arbitrary local commands) then the one bug is fairly minor.
Microsoft could secure Windows, at least as much so as BeOS or any other non-multiuser OS, with a little work but they refuse, because it's easier to only fix what has to be fixed.
I agree with the person who said that bugs in products from companies like Microsoft who don't fix the bugs until they make the news, should be made public without warning them... That way they take the biggest credibility hit.
Re:Why Script 'Kiddies'? (Score:2)
Until recently, most Script Kiddies were in college - simply because that was when people received net access. Recently, the 'Kiddies' tend to be younger - but we also have 40 year old Script Kiddies. They're still 'Kiddies' however - because they're acting like immature 8 year olds.
At least when I use the term, please don't take the term "Script Kiddie" as a reference to all young computer techs... instead take it as a reference to those techs who shouldn't be allowed to go to the grocery store alone, much less use a computer.
Another perspective on the speech (Score:2)
Re:Exploit != Fix (Score:2)
why are you looking at every possible source of external data? you know exactly where your code is breaking; you have the exploit right here.
if your code is horribly broken and relying on other horribly broken code, then sure, it takes you a long time to fix it, but would you rather have even LESS information available to you when you start fixing this bug?
He's not missing the one you think. (Score:2)
Why did Microsoft ever have a scripting facility with no security checks? Why do products still have buffer-overflow issues? Sloppy design and coding. Until the bar is raised for the production of software (ala OpenBSD), this will continue.
The REAL problem is that people have no understanding of the origin of these problems. Once it is common knowledge that sloppiness in design is responsible for the Love Bug virus and web-site hacks, people will demand better software and be willing to trade some convenience for security. Current design holes are the equivalent of buttons all over a car which will unlock the doors, un-latch the steering column and start the engine. Nobody would tolerate a car that is so open to theft, and nobody will tolerate software that is equally bad (as so much software is today) once the public is sophisticated enough to know the difference.
Re:He's missing the point. (Score:2)
Script kiddies, simply because of their numbers, can't keep secrets. Once a program to root some webserver becomes known, they'll use it to put up a brag sheet, or to run an IRC bot, or something stupid. This means the problem is going to get the attention of the admin, or someone calling the admin to report a DDoS...
I'd much rather that Amazon.com be rooted by a script kiddy posting a brag sheet, or perhaps DoSing EBay for refusing to list his Ultima Online character, than for it to be subtly cracked by someone stealing credit information.
Similarly, we say that in the old days, nobody had to lock their houses because there weren't thieves everywhere. But that means that if someone wanted to get into your house at night for a more devious purpose, they could.
Like it or not, bugs won't be fixed till they're exploited in a public way. I'd rather that way be a bunch of stupid kids playing with scripts than millions of dollars being stolen. Similarly, if there was a problem with a certain type of door lock, I'd rather hear about them being recalled after thousands of minor B&Es instead of just a couple rapes or other serious crimes.
It'd be nice if Microsoft and other big companies would try to fix the bugs as they were reported, but that's unlikely. As long as they can just ignore them, hoping they don't cause problems, and maybe fix it in the next release, they will. We *need* to cause a fuss about each and every exploit so that 'they' have to do something.
As long as it's cheaper to hide behind EULAs (and bribe politicians to pass fucking stupid laws like the UCITA) there's no reason for them to actually fix bugs unless there's enough of a fuss that people might stop buying the product.
Re:Bah.. (Score:2)
You care about his agenda because you're trying to rip him off as much as you possibly can, and you're trying to rip off everyone else as much as you can too. Now he's telling everyone that your product sucks... From your perspective it's not useful information. Under the terms of most EULAs you aren't responsible for any defects in your product, so the information just persuades people to buy someone else's crap, or to pay less for yours.
--locust
If it Truly Is Obscure, it may work... (Score:4)
Another view is taken by Terry Ritter, of Ciphers By Ritter. [io.com]
His article Cryptography: Is Staying with the Herd Really Best? [io.com] questions that; in his view there should be a framework supporting a rich set of ciphers in use, so that systems can readily, and dynamically, shift to new ones should an older one be broken.
There are, painting with a broad brush, two major approaches to security:
It is fairly well guaranteed that the armour will prove challenging to would-be attackers, whether we're talking about a crypto system, or a B1-certified version of Unix.
Unfortunately, since such systems are big, heavy, and complex to assemble, if they do have weaknesses, they will prove extremely vulnerable to attack at that weak point.
Gazelles are not heavily armoured; they depend on moving quickly to avoid capture by those that would eat them.
More importantly, they are "physically independent." If a lion is busy chasing one gazelle, he can't catch any of the others.
The history of major Internet security breaches demonstrates that putting all the eggs in one "pot" is dangerous:
Ranum sounds cornered, but he's not right (Score:5)
Full disclosure allows people responsible for security to verify vulnerabilities, patch holes, etc. The no-disclosure alternative leads to an unknown mass of hackers, out there trading amongst themselves. It will not stop distribution, even to kiddies, who will spend endless hours on #supah_hot_shells on irc pining away for a new tool. Meanwhile, with no public disclosure, who will protect us?
You guessed it, Network Flight Recorder. It, and a cadre of other companies like it, will share their secrets with each other under the blanket of draconian NDAs.
Part of the problem is just that we've recently had a lot of distributed DoS attack "exploits". The problem being, you can prevent yourself from being part of one, but you can't prevent yourself from being a victim of one. There's nothing worse than running a tight ship, tuning your box(es) to be safe, and then eating 200 megs of smurf because some user with a shell on your machine kicked some flooding fool off #stay_away_flooders.
Still, the smurf problem (and those like it) is not insurmountable; people are now aware the problem must be dealt with in an automated way, and they're working on it. Meanwhile, law enforcement will grow more adept at tracking this sort of thing. As many people have pointed out, few connections to the net are truly anonymous. Cooperative logging will also grow more likely: logs will stream offsite immediately to a super-safe host, so even if you break into a system, your tracks are set in stone. Meanwhile, those of us who just want safe boxes can keep them safe.
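The "stream logs offsite immediately" idea is something classic BSD syslog already supports. A sketch of the relevant /etc/syslog.conf fragment, assuming a hardened collector named loghost.example.com (a placeholder):

```
# forward every facility/priority, in real time, to a remote collector;
# an intruder who roots this box can't rewrite history on loghost
*.*    @loghost.example.com
```

The collector itself should accept syslog traffic only from known hosts and write to append-only storage, or the attacker just roots it next.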
Yes, source code for exploits should be released. (Score:3)
For instance:
(1) I write code that makes many calls to the linux kernel.
(2) I am unfamiliar with the intricacies of the kernel.
(3) I find a call with some messed up parameters that inadvertently gets me root access.
(4) I do not know how to fix it in the kernel, but other people do. Maybe I just don't have time to spend fixing the kernel.
(5) So, I **MUST** release the code showing what I did so others who do know the kernel well can fix it.
And vague descriptions by me of what I did only slow the creation of a patch, because I may articulate things incorrectly or leave out vital info about stuff I'm unfamiliar with. The source code is always true.
Wrong analogy (Score:3)
Brand-X locks have a defect which means they can be opened by anyone inserting a screwdriver and turning it. The manufacturer knows this, but doesn't say anything. This is security through obscurity.
Thieves (script kiddies) have discovered through experimentation the screwdriver trick. They occasionally wander down your street, looking for Brand-X locks.
If the manufacturer of Brand-X locks were responsible, they would put out a recall notice and replace all the defective locks. But they aren't, so thousands of homes are broken into by thieves who have learned this exploit. Many homeowners learn of the defect from the police after the fact.
Once a news report is published showing the fault with Brand-X, many of the consumers clamor for replacements. Eventually the manufacturer gives in and provides new locks to those who ask for them, putting a positive spin on the whole sordid affair. Draw your own parallels with this action.
the AC
Re:The myth of many eyes (Score:4)
The 'many eyes' mantra isn't a mantra... it's sound reasoning that has been demonstrated to be effective. Yes, it does actually require many eyes, and more importantly for people to pay attention when the mouths associated with those eyes speak up about a problem -- which is the point. Get rid of the lazy sysadmins, and the article wouldn't have been written.
What I can only hope is that the spate of low-effort cracking and script kiddies will serve as a wake-up call to those people who think these problems will just fix themselves. To an extent, it seems to have, but I hope they don't react by trying to close the source to protect their own laziness.
To a limited extent it has worked -- I'm paying more attention to security now myself, and I'm definitely a lazy admin (but I don't make any money off my administration efforts, and it's just my box ^_^)
Re:Why Script 'Kiddies'? (Score:3)
People are afraid of things they don't understand. This is very evident when dealing with computers, especially after Nightline runs bits like "How to Protect Yourself from Hackers."
We call them kiddies because they show the knowledge level of a child. They don't understand how things work, and they don't care. It's like an 8-year-old with a bazooka. "Hey, Billy, check this out, I can get inside anybody's house on the street! Weeee!"
You are obviously not a 5kr1p7 k1dd13, so don't worry about it. No one on the 'net will EVER know you are 15 unless you tell them (or unless you act like it). I've been reading your posts for a while, and you sound like a reasonably intelligent human being. Also, the term script kiddie isn't known outside of the nerd half of the 'net. You will probably never find yourself hanging out after school when a bunch of bullies come up and yell "Hey, Script Kiddie! You Suck!" To sum up: if you know you don't fit the stereotype, that should be enough to convince others that you're not in the stereotype (I think I got that right). So don't worry about it. And keep "using your computer knowledge for good." Don't join the dark side.
-Superb0wl
Why provide ready scripts? (Score:4)
The only argument for giving away such scripts is to exert pressure on a company that otherwise totally ignores announcements of bugs and will only react when critical comments start to affect its product sales. I think the fairest way would be to give the company a head start to fix the hole, so they can provide the fix with the report (which should honor the finder of the bug for his efforts), and then to publish the hole on some open forum after a few days. If the company chooses to ignore the bug, it will only make them look worse later. There is no need to add a script to the exploit, as these will sprout up anyway as soon as the hole is known.
Security through Prudence (Score:4)
Security itself is mostly common sense; you need to know when certain actions are good.
Now, we published our security setup on Slashdot, mostly because it's nothing special in and of itself. However, I don't publish our *policy* and implementation.
Why? It's just never a good idea; it's common sense.
I hate the words "Security through Obscurity", mostly because sometimes it's "Security Through Prudence".
There are certain times I should never disclose the actual implementation of a security plan. It would, in essence, make it easier for those who are not even "grey hats" (not that I don't trust you Slashdotters).
Network Engineers, Software Engineers, Security Engineers: they are nothing if not human. Now, if you privately notify them about a hole and they refuse to do anything about it, or even acknowledge it, that's a *different* story.
The other thing that's important in this is peer review. A team of people should be in on the implementation; it's the same way with code. There should always be someone reviewing your work, or else you could FUBAR everything.
So let's forget security through obscurity: if being obscure is your only protection, then you are stupid. If prudence is the reason, keeping quiet simply until you can fix something known to you, then maybe that's a good idea.
-Pat