How Does Heartbleed Alter the 'Open Source Is Safer' Discussion?
jammag writes: "Heartbleed has dealt a blow to the image of free and open source software. In the self-mythology of FOSS, bugs like Heartbleed aren't supposed to happen when the source code is freely available and being worked with daily. As Eric Raymond famously said, 'given enough eyeballs, all bugs are shallow.' Many users of proprietary software, tired of FOSS's continual claims of superior security, welcome the idea that Heartbleed has punctured FOSS's pretensions. But is that what has happened?"
Leaked by codenomicon (Score:5, Interesting)
Re:Leaked by codenomicon (Score:5, Interesting)
I had to dig for direct connections between Codenomicon and Microsoft, but the chairman of the board seems a fairly strong link. The way Codenomicon has behaved in this seems reckless; I've never seen a bug so heavily marketed. The stats floating around initially seemed way off the mark: early quotes claimed 66% of web servers were affected, later revised to 17% running affected versions. Both numbers look too round to be anything other than made up.
Re:Leaked by codenomicon (Score:5, Insightful)
Anyone who would claim that proprietary software is somehow more secure is making a huge leap - there are only a few eyes, if any, looking for unreported issues - so there may be even more serious issues which have existed for much longer, which only a few bad guys know about. If MS or anyone else thinks that their proprietary SSL implementation has no security breaches, let them put a guarantee with full financial liability behind that thought.
Re: (Score:3)
Gloat? About what? This only provides proof of the benefits of open source - a significant flaw was discovered, which is exactly the claimed advantage - the more eyes, the better.
But it wasn't found by eyes in the source. It was found by an automated testing tool that would have just as easily found it in closed source.
Wat? (Score:5, Insightful)
In the self-mythology of FOSS, bugs like Heartbleed aren't supposed to happen when the source code is freely available and being worked with daily.
False. Bugs can and do happen. However, what can also happen with open source software is that entities other than the group working on the project can find bugs. In this case, Google found the bug. If the source were not open, maybe it would have never been officially recognized and fixed.
Re:Wat? (Score:5, Interesting)
The problem here is that people have been using the argument that Open Source is better because these issues can't happen "because" of the visibility. And the argument "Open Source is inherently safer" has been very heavily damaged by Heartbleed and now ranks up there with "Macs don't get viruses" and "Women are worse drivers".
If this happened in Microsoft, Adobe or Oracle Land, this would be "yet another 0-day" and largely ignored by the public. Because it is in an area with such a vocal group of people spouting "Impenetrable" for decades, it all of a sudden becomes quite newsworthy in a way that "yet-another-remote-code-execution-with-privilege-escalation-in-Acrobat-Reader" vulnerability doesn't.
And if you doubt any of this for a moment, have you ever heard the name of the developer who was at fault for introducing a bug into Flash on the local news? Now did you hear the name "Robin Seggelmann" in connection to Heartbleed?
Re:Wat? (Score:5, Insightful)
No, just no. No one with any sort of a clue ever argued these issues cannot happen with Free Software. It's good practice, it helps, but it's no silver bullet. That's just as true as it ever was and this news in no way contradicts that.
Re:Wat? (Score:5, Insightful)
No, just no. No one with any sort of a clue ever argued these issues cannot happen with Free Software.
No, they haven't made that claim in so many words. But they've sure as hell implied it for years now. That's the whole line of thought that Raymond's statement (quoted in TFS) is based on.
Huh? The quote is "given enough eyeballs, all bugs are shallow." That's a clear admission that open software, like all other software, contains bugs; that's why you want the many eyeballs. Any claim otherwise is a symptom of not understanding plain English. Eric's whole point was that the bugs in open software will be found and fixed faster than the bugs in other software, due to the population of interested people who will study it, looking for the bugs. Nothing in that quote implies (to anyone with reasonable understanding of English and basic logic) that open software doesn't have bugs. I expect Eric would just chuckle at the very idea of software without bugs.
(Actually, someone near him should ask him. Tell us whether he chuckles, or snickers, or just gets a sad look on his face. Or maybe he'll say "Well, there is a conjecture that bug-free software exists, but it has never been observed in the field by reliable observers." ;-)
A much more useful conclusion from this story (if you're serious about computer security) is that this bug has been found and fixed in OpenSSL, but with its proprietary competitors, we have no way of knowing what horrible exploits they may be hiding. And you'd be a dummy to think they don't have exploits; every chunk of security-related software has exploits. The meaningful question is whether they can be found and fixed by the people using the software. If not, you'd be a fool to use that software.
Eyeballs did not find bug ... (Score:3, Informative)
The quote is "given enough eyeballs, all bugs are shallow." That's a clear admission that open software, like all other software, contains bugs; that's why you want the many eyeballs. Any claim otherwise is a symptom of not understanding plain English. Eric's whole point was that the bugs in open software will be found and fixed faster than the bugs in other software, due to the population of interested people who will study it, looking for the bugs.
Perhaps it is not being stated clearly, but the point you are missing is that this bug in some of the most critical network software in use had been around for two years. That fact demonstrates the hyperbole of the quote. It's a well-crafted quote and illustrates a concept well, but people read way too much into it. Few FOSS users are developers, and few developers are qualified readers. Eyeballs are a plus, but not a panacea. The gap between proprietary and open exists, but it is exaggerated.
A second
Re: (Score:3, Insightful)
Sadly, straw men dominate this discussion. Thank you for seeing them for what they are.
Re: (Score:3)
"Open Source is inherently safer"
Yes.
Open source absolutely safe?
No.
Re:Wat? (Score:5, Insightful)
The problem here is that people have been using the argument that Open Source is better because these issues can't happen "because" of the visibility.
The visibility doesn't make it so bugs don't exist. It makes them more likely to be found. This one existed and was found.
And the argument "Open Source is inherently safer" has been very heavily damaged by Heartbleed and now ranks up there with "Macs don't get viruses" and "Women are worse drivers".
The argument "seatbelts make riding in a car safer" is not "heavily damaged" by someone dying in a car accident while wearing a seatbelt.
Imagine this code was closed source. What's the desired outcome? That hackers never stumble upon the bug, and it goes unnoticed forever, and therefore never needs to be fixed?
Proprietary or open seems irrelevant to discovery (Score:5, Informative)
The visibility doesn't make it so bugs don't exist. It makes them more likely to be found. This one existed and was found.
After two years in the wild. And apparently *not* by eyeballs on source code. Proprietary or open seems irrelevant to this discovery.
"“We developed a product called Safeguard, which automatically tests things like encryption and authentication,” Chartier said. “We started testing the product on our own infrastructure, which uses Open SSL. And that’s how we found the bug.”"
http://readwrite.com/2014/04/1... [readwrite.com]
Re: (Score:3)
The visibility doesn't make it so bugs don't exist. It makes them more likely to be found. This one existed and was found.
I see another lesson here. We (I mean, people in the IT industry) rely on ultra-sensitive pieces of code like openssl, and we blindly use them. We don't question much how this software is created and by whom. That's the problem. We put our trust in something we know very little about. Discovering how small the team coding openssl is was quite a surprise to me. I feel really ashamed to discover this so late. How stupid is that... The feeling that "because so many smart people use openssl must imply strong
Re: (Score:3)
Because OpenSSL is such a common tool and is arguably vital to the function of the Internet as we know it, this sort of a bug really is one of those "worst case scenarios"
True, but the main lesson to learn from it can be summarized by the old cliché "Don't put all your eggs in one basket". The warning about a "monoculture" also applies here. If one specific piece of software is universally used, even a minor bug in it can be a widespread disaster. If people had any sense, the very fact that something is so popular and widespread would be a strong argument for duplicating its functionality with independently-developed code.
Of course, in reality we humans tend to
Re: (Score:2, Insightful)
True, but it is also easier for malicious people to find vulnerabilities when they have the source code. There are other disadvantages: a broad developer base allows vulnerabilities to be deliberately introduced more easily, it's harder to enforce standards, etc.
I searched and couldn't find a good study or any reliable evidence either way. There is good and bad open source software and there is also good and bad commercial software. Posting with absolute certainty that open source is more secure will get
Re:Wat? (Score:5, Insightful)
Re: (Score:3)
Well, you wouldn't start by reading millions of lines of code but it certainly helps to have access to it. Especially for people with serious resources, governments etc.
Re: (Score:3, Insightful)
Correct -- I could imagine that there are lots of "heartbleeds" in closed source software that can and will be exploited. Whether it becomes public and puts pressure on the development staff to fix, is another story.
Re: (Score:3)
All source is open if it's worth it to someone.
That's what disassemblers are for.
I reverse-engineered the old Microsoft assembler for CP/M to give it an advanced feature it lacked and did it strictly on my own time and for my own private benefit (pre-DMCA).
You can be certain that open or closed, SOMEONE whose business is penetrating security has people dedicated to ensuring that there's source code to pore over for exploits.
Re: (Score:3)
And sometimes it took months or years for any patches to come out, sometimes never.
I remember the first internet worm that attacked via sendmail. The thing was that a core group of insiders knew about the bug and had patched their systems, but the larger community of sysadmins had no idea that the vulnerability existed. It was especially a problem for those systems where the people operating them relied only on official documentation from vendors and didn't hang out on usenet or at conferences. It wasn'
Re: (Score:3)
yes but they would not necessarily be able to patch the closed code
Mr Fixit (Score:5, Insightful)
That's fine with me.
Re:Mr Fixit (Score:5, Insightful)
That it reacts fast is good. That the bug could be audited in the source, in public, is good.
We should remember that FLOSS reacted very quickly to the "revelation," but the bug itself has been sitting there for years, which isn't really supposed to happen.
It's nice we know how long it's been there, and can have all kinds of philosophical discussions about why the OpenSSL folks decided to write their own malloc.
Also OpenSSL was effectively a monoculture and just about every SSL-encrypted internet communication over the last two years has been compromised. OpenSSL has no competition at its core competency, so the team really has no motivation to deliver an iteratively better product, apart from their need to scratch an itch. FLOSS software projects tend not to operate in a competitive environment, where multiple OSS products are useful for the same thing and vie for placement. This is probably bad.
Re:Mr Fixit (Score:4, Insightful)
" just about every SSL-encrypted internet communication over the last two years has been compromised."
No, it really hasn't.
It's accurate to say that just about every OpenSSL-encrypted session on servers that were using NEW versions of OpenSSL (not all those ones out there still stuck on 0.9.8-whatever, which never had the bug) was potentially vulnerable to attack.
That's bad, but it's a universe away from "every SSL session is compromised!!!" because that's not really true.
Pedantic Man to the rescue! (Score:4, Informative)
" just about every SSL-encrypted internet communication over the last two years has been compromised."
No, it really hasn't.
It's accurate to say that just about every OpenSSL-encrypted session on servers that were using NEW versions of OpenSSL (not all those ones out there still stuck on 0.9.8-whatever, which never had the bug) was potentially vulnerable to attack.
That's bad, but it's a universe away from "every SSL session is compromised!!!" because that's not really true.
They were vulnerable to attack, that is to say, the security was compromised. He didn't say they were hacked, stolen, eavesdropped, or surreptitiously recorded.
compromise [reference.com]: to expose or make vulnerable to danger, suspicion, scandal, etc.; jeopardize: a military oversight that compromised the nation's defenses.
I've noticed that a lot of TV sci-fi confuses "compromise" with "breach"; as in hull, shields, defenses, etc.
Re: (Score:3)
OK so if you are using "compromise" to mean "Every SSL session in the past 2 years was potentially vulnerable to danger", then I guess that's true in the sense that almost every computer is compromised since there are probably many unnoticed security holes in just about every OS and commonly used library.
Re: (Score:3)
Debian was a bit longer, so far as mainline releases go (I don't use testing branches). I have several servers and routers running 6.0, and they're all using OpenSSL 0.9.8, whereas my servers I use as KVM virtualization hosts are running Wheezy and did have vulnerable versions of OpenSSL. I had been thinking over the last few months that I should upgrade my old Debian Squeeze servers and appliances, a number of which are used for my OpenVPN WAN routers and remote client servers. I'm very glad my business/pr
Re: (Score:3, Interesting)
Which is a ridiculous statement to make in this situation. That's like patting your security company on the back for not noticing for two years that someone was secretly stealing money out of your bank vault, when they only did something after being told by a third party that there was a problem. But hey, they reacted fast two years after the fact, right?
we don't know what happened AT ALL (Score:3, Insightful)
Yes, we can trace the changelogs in the software & note who was checking the changes and missed them, but that all can be circumvented.
The fact is we don't know if Heartbleed was an honest mistake or not... we don't know who knew and when... we don't know a lot.
FOSS is nowhere in the conversation, btw... this has absolutely nothing to do with the fact that this was an Open Source project.
Private companies' products have ridiculous security issues... comparing this to that is not helpful.
Re:we don't know what happened AT ALL (Score:5, Informative)
"Yes, we can trace the changelogs in the software & note who was checking the changes and missed them, but that all can be circumvented."
Actually it can't. That's kind of the point of git.
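To unpack that: a git commit id is a hash that covers the commit's content and its parent commit's id, so silently rewriting an old commit changes every descendant id, and anyone holding a clone can spot the mismatch. Here's a toy sketch of the chaining idea -- illustrative only, with a stand-in hash where git really hashes the full commit object with SHA-1:

#include <stdio.h>
#include <stdint.h>

/* Stand-in for git's SHA-1: FNV-1a seeded with the parent's id, so each
 * commit id depends on its content AND the whole history before it. */
static uint64_t toy_hash(const char *data, uint64_t parent_id)
{
    uint64_t h = parent_id ^ 14695981039346656037ULL;
    for (const char *p = data; *p; p++)
        h = (h ^ (uint8_t)*p) * 1099511628211ULL;
    return h;
}

int main(void)
{
    const char *commits[] = { "add heartbeat option", "fix typo", "tag release" };
    uint64_t id = 0;
    for (int i = 0; i < 3; i++) {
        id = toy_hash(commits[i], id);
        printf("commit %d: %016llx\n", i + 1, (unsigned long long)id);
    }
    /* Tampering with the first commit would change all three printed ids;
     * every clone of the repository would see the divergence. */
    return 0;
}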
"The fact is we don't know if Heartbleed was an honest mistake or not...we don't know who knew and when..."
We do know who and what and when, because the person who wrote it and the person who signed off on it have commented publicly about the bug.
Maybe you're thinking of Apple's "goto fail" SSL exploit where we really don't know who or what or when and probably never will because it's not likely Apple is going to release their RCS logs.
Re: (Score:2)
Well, we know the when and we know the what.
http://www.theguardian.com/tec... [theguardian.com]
Re: (Score:2)
> Actually it can't. That's kind of the point of git.
Unfortunately, many git users keep their SSH keys unencrypted on their local hard drives or on network-accessible home directories. This means that a careless git admin may have their SSH keys stolen by quite amateur crackers, leaving the public repositories open to malicious changes. I've had precisely such discussions with personnel who insist that they trust the people they work with and that they have a firewall, so they're not at risk.
Even a bestselling novel can have a typo (Score:5, Insightful)
More eyeballs usually do make bugs more shallow, but only if the eyes know what to look for.
Re: (Score:3)
More eyeballs usually do make bugs more shallow, but only if the eyes know what to look for.
And only if a significant number of sophisticated and knowledgeable eyes have the time and interest to dig through lines and lines of code looking for vulnerabilities.
The reality is that the majority of eyeballs looking at code are the ones that have other reasons to be looking at it. They aren't necessarily looking for vulnerabilities but maybe they spot something.
The eyes that might be interested in scouring code looking for vulnerabilities could be the ones wanting to exploit them rather than fix t
Re: (Score:3)
The 'millions of eyeballs' meme is just that. How many people actually know how to read code? Just b'cos it's open doesn't mean that it's comprehensible, and therefore the fact that the code is open & out there doesn't have that much of an advantage, particularly when it's such complex code.
Re: (Score:2)
Wait until things are over before you cry wolf (Score:4, Insightful)
Nobody is going to discard OpenSSL due to this - the majority of people are patching systems and reminding people that security is important (a side benefit of this incident).
The next step will be when someone puts up the money for a proper code review of the OpenSSL codebase and fixes up any other issues that may exist.
It's reasonable to say that there are more people and organisations able to resolve this issue than if it were a closed source proprietary solution.
Security is hard. Encryption is even harder. (Score:2)
All this episode does is to remind us that security is hard. Encryption is even harder.
Original premise is false (Score:5, Insightful)
What we need are intelligent bots to constantly trawl source repositories looking for bugs. People just don't have the time any more.
Re: (Score:3, Insightful)
What we need are intelligent bots to constantly trawl source repositories looking for bugs.
If we had bots that intelligent they would be intelligent enough to write the code without bugs.
Re: (Score:2)
Re: (Score:3)
This bug could easily have been detected with static analysis. It's a common failure pattern. You just taint data from the network as untrusted and look for where it gets used without validation.
I do static analysis like this on the linux kernel for a living.
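For the curious, this is roughly the shape of the pattern such a checker flags -- a simplified sketch, not the actual OpenSSL code, with a made-up function name:

#include <stdlib.h>
#include <string.h>

/* 'msg' and 'msg_len' arrive off the wire, so an analyzer marks them tainted. */
unsigned char *echo_payload(const unsigned char *msg, size_t msg_len)
{
    /* The first two bytes of the message claim the payload length. */
    size_t claimed = ((size_t)msg[0] << 8) | msg[1];

    unsigned char *reply = malloc(claimed);
    if (reply == NULL)
        return NULL;

    /* BUG: 'claimed' is tainted and is never compared with 'msg_len'.
     * A peer claiming 64KB while sending one byte makes this memcpy read
     * far past the message buffer and leak adjacent heap memory. The
     * analyzer's rule fires right here: a tainted length reached memcpy
     * without validation. The fix is one check before the copy:
     *   if (msg_len < 2 || claimed > msg_len - 2) return NULL;          */
    memcpy(reply, msg + 2, claimed);
    return reply;
}

The checker doesn't need to understand heartbeats at all; it only has to track that network-derived data flowed into a length argument unchecked.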
Re: (Score:2)
Many eyeballs may make bugs shallower, but those many eyeballs don't really exist. Source availability does not translate to many people examining that source. People, myself included, may like to build to install packages but that's it. What we need are intelligent bots to constantly trawl source repositories looking for bugs. People just don't have the time any more.
Not just that, the only people who'd find such bugs are the people actually working on those programs. Usually, not their downstream users.
Re: (Score:3)
Re: (Score:2)
I don't think Heartbleed says anything fundamental about open source security, but it might alter the discussion of how certain low level packages are managed. By any measure OpenSSL is a very important package, but it's also a bit generic. It has a very defined role that everyone needs, but I'm not sure how many people really have a motive to work on it in specific. It might be that the community needs to find a way to devote more resources to maintaining and auditing those packages.
Re: (Score:3)
There are 7 billion people in the world. It doesn't take a large percentage of people to look at the code for there to be a large number
I'm sorry, but that's a really silly argument. You can't create a significant number of people doing a particular thing by doing the 'big number times small number = medium size number' trick. We hear that from marketing here at work, and it doesn't make any sense there either.
Overstating the case (Score:5, Insightful)
I don't think anyone claims that open-source software won't ever have security issues. The claim is that the open-source model tends to find and correct the flaws more effectively than the closed-source model, and that the soundness of the resulting product tends to be better on average.
One case does not disprove that. The key words there are "tends" and "on average".
Re:Overstating the case (Score:4, Insightful)
FOSS is still safer... (Score:3)
How do we know that serious security flaws don't exist in the SSL implementations used by Microsoft or other proprietary vendors?
It doesn't. (Score:5, Insightful)
Anyone can view the source of an open source project, which means anyone can find vulnerabilities in it: specifically, hackers wishing to exploit the software, as well as users wishing to audit and fix the software. But someone who knows what they're doing has to actually look at the source for that to matter, and this rarely happens.
Hackers must black-box closed source software to find exploits, which makes it more difficult than finding them in open source software; the flip-side is that they can only be fixed by the few people who have the source. If the hacker doesn't disclose the exploit and the people with access to the code don't look for it, it goes unpatched forever.
Open source software does provide an advantage to both sides: hackers can find exploits more easily, and users can fix them more easily. With closed source, you're at the mercy of the vendor to fix their code but, at the same time, it's more difficult for a hacker to find a vulnerability without access to the source.
Then, we consider how good fuzzing techniques have gotten and... well, as it becomes easier to find vulnerabilities in closed source software, open source starts to look better.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re:It doesn't. (Score:4, Interesting)
This myth gets trotted out again. It is arguably easier to find exploits without source. The source distracts from the discovery of an exploit. The binary simply is. The black-hat is looking for a way to subvert a system. Typically she is not interested in the documented (by source or documentation) functionality. That simply distracts from the issue which is finding out what the software actually does, especially in edge circumstances.
This is what fuzzers do. Typically not aware of the utility of the program, they simply inject tons of junk until something breaks.
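Stripped to its core, that loop looks like this -- a hypothetical sketch, with parse_record() standing in for whatever parser is under test; real fuzzers like AFL or libFuzzer add coverage feedback, but the junk-injection idea is the same:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical target: whatever record parser is being tested. */
extern void parse_record(const unsigned char *buf, size_t len);

int main(void)
{
    srand(42);                        /* fixed seed, so any crash is reproducible */
    for (int i = 0; i < 100000; i++) {
        size_t len = (size_t)(rand() % 4096);
        unsigned char *buf = malloc(len ? len : 1);
        if (buf == NULL)
            continue;
        for (size_t j = 0; j < len; j++)
            buf[j] = (unsigned char)(rand() & 0xff);    /* tons of junk */

        pid_t pid = fork();
        if (pid == 0) {               /* child: feed the junk to the target */
            parse_record(buf, len);
            _exit(0);
        }
        int status;
        waitpid(pid, &status, 0);     /* parent: did the junk kill it? */
        if (WIFSIGNALED(status))
            printf("input %d crashed the parser (signal %d)\n", i, WTERMSIG(status));
        free(buf);
    }
    return 0;
}

Worth noting: Heartbleed itself never crashed anything, which is part of why dumb crash-hunting missed it for so long; Codenomicon's tool was checking what came back, not just whether the process died.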
Source availability tends to benefit people auditing and repairing more than black-hats.
Yes, it took years for heartbleed to surface. If heartbleed (or a defect like it) was discovered due to a code audit, that speaks to the superiority of open source over closed source. If the defect is found by fuzzing or binary analysis, it is much harder to get repaired, as users are at the mercy of the holder of the source. Build a matrix of Open/Closed Source vs. bug found in source, bug found by fuzzing/binary analysis.
Bug found in source vs. Closed Source is not applicable, giving three elements: bug found in source vs. Open Source (where the bug will be repaired in the source by anyone), and bug found by fuzzing vs. either, where the bug will be repaired by anyone (Open Source) or only by the vendor (Closed Source).
The question then is (as I asked at the start): Is it easier to find bugs by source inspection? Assume big threats will HAVE the source anyway. If it were easy to find by inspection, it would be easy to fix (for example: OpenBSD continuously audits, and security has been a priority at Microsoft for the past decade). Fuzzing and binary analysis is still the preferred (quickest) method, giving the edge to Open Source. The reason is simple -- the black-hat cares about what is actually happening, and not what the source says is happening.
Not enough eyes (Score:5, Insightful)
So, the "with many eyes all bugs are shallow" notion fails. There were not enough eyes on the OpenSSL library, which is why nobody discovered the bug.
Except that someone did discover the bug, when they were looking at the code because it was open source. And they did report it. And it did get fixed. Later than anyone would want, of course. But it happened. Maybe similar errors have been, and are still being, missed in the Windows and Mac implementations.
Re: (Score:2)
That said, if I were a bad guy, the rest of the OpenSSL library's open source would seem like a pretty juicy read right now. Then again, it probably sounds like fun to the good guys too.
Do you think that the bad guys never thought to read the source before now? I'm sure the NSA has a dozen more exploits against every SSL implementation out there, open or closed. It isn't like the Germans published the specs for Enigma, or that the Iranians posted the design of their centrifuges for all to see.
Looking forward (Score:2)
The issue is not that some open source software has a bug in it. We're all grown-up enough (I hope) to realise that NO software is ever perfect.
The only interesting point about this situation is how the Open Source world reacts to it and what processes get put in place to reduce the risk of a similar situation arising in the future.
Uh, what? (Score:5, Insightful)
Q: How Does Heartbleed Alter the 'Open Source Is Safer' Discussion?
A: It doesn't. OSS is purported to be a *better* software development methodology. "Better" != "perfect". TFS is a troll.
This bug was found in OpenSSL because it was open (Score:5, Insightful)
What hasn't been found in closed source software because it is too inconvenient to look?
What if... (Score:4, Insightful)
How would proprietary software have handled this? (Score:5, Insightful)
This doesn't really change it, because think how a proprietary SSL library would've handled this. The vulnerability was found specifically because the source code was available and someone other than the owners went looking for problems. When was the last time you saw the source code for a piece of proprietary software available for anyone to look at? If it's available at all, it's under strict license terms that would've prevented anyone finding this vulnerability from saying anything to anyone about it. And the vendor, not wanting the PR problem that admitting to a problem would cause, would do exactly what they've done with so many other vulnerabilities in the past: sit on it and do nothing about it, to avoid giving anyone a hint that there's a problem. We'd still have been vulnerable, but we wouldn't know about it and wouldn't know we needed to do something to protect ourselves. Is that really more secure?
And if proprietary software is written so well that such vulnerabilities aren't as common, then why is it that the largest number of vulnerabilities are reported in proprietary software? And that despite more people being able to look for vulnerabilities in open-source software. In fact, being a professional software developer and knowing people working in the field, I'm fairly sure the average piece of proprietary software is of worse quality than the average open-source project. It's the inevitable effect of hiring the lowest-cost developers you can find combined with treating the fixing of bugs as a cost and prioritizing adding new features over fixing problems that nobody's complained about yet. And with nobody outside the company ever seeing the code, you're not going to be embarrassed or mocked for just how absolutely horrid that code is. The Daily WTF is based on reality, remember, and from personal experience I can tell you they aren't exaggerating. If anything, like Dilbert they're toning it down until it's semi-believable.
Comment removed (Score:5, Interesting)
Re: (Score:2)
So there was a bug in OpenSSL. Big bug, yes, but that's not the reason it was (and still is!) a big problem.
The genesis of the big problem is one of monoculture, not only of OpenSSL being the dominant SSL implementation, but probably more importantly, the fact that pretty much all Internet security that is accessible and matters to ordinary users is SSL/TLS in the first place.
If you think this is bad, imagine what happens if the fundamentals of SSL itself are compromised: What would we replace it with? Ho
The bug was found because it was open source.. (Score:5, Informative)
The problem was found because the code was Open Source. If it had been closed source, then the bug would still be secret. To the extent to which the bug was recognized (or commissioned) and exploited by the likes of the NSA, it would have probably remained secret for a lot longer.
According to Microsoft's EULA, for example, finding -- much less fixing -- such a bug is illegal. If the NSA had paid them to put such a bug into the Windows version of SSL, then it would probably remain unpatched for years after someone had pointed it out to them as an exploitable bug... and anybody openly reporting such a bug, even after 6 months of trying to get MS to fix it, would be roundly criticized for disclosing the bug 'prematurely'.
Even then, it would probably not be fixed by Microsoft until at least the next monthly bug release cycle (or even the one after that).
With the code being Open Source, the problem got fixed faster than yesterday. Period. If the OpenSSL people refused to fix it, then it would have been forked. ... and more to the point: Such a security-centric fork would have been legal.
SChannel (Score:3)
Most of the non-OpenSSL instances of TLS implementations out there are probably SChannel.
I would be shocked if Microsoft hadn't had equally severe bugs, and further surprised if they could fix them as fast.
Comment removed (Score:4, Informative)
it IS safer (Score:2)
What if this was not 'OpenSSL' but instead it was some form of 'ClosedSSL' library that had this problem in it?
NSA would still have access to THAT code, you can bet your ass they would; they wouldn't leave a project like that alone. However, nobody else would know (unless stumbling upon it by chance or being able to access the source, OR if some insider SOLD that information to somebody on the outside), and now you'd have a vulnerability that is exploited by the gov't and by the shadiest of organisations/peopl
bugs are not the issue (Score:2)
Better documentation of source code is needed (Score:3)
I do believe open source is safer, as it absolutely allows for independent third-party review, which is how this bug was found. Because outside parties had access to OpenSSL, they were able to find the problem, whereas with closed source software it might never have been found, or found but hushed up by the company. Proprietary software has just as many bugs as open source, if not more; the difference is there is less accountability.
That being said, the full potential of open source for independent review is undermined by the fact that a lot of open source software is poorly documented as to the internal construction of the code. This wastes programmers' time, forcing them to spend longer than they should learning the internals, and it wastes the time of those running the project, who end up repeating explanations of the code that a bit of documentation would have answered without anyone bothering the project leads. When software has a lot of code and no documentation on how that code fits together, the learning curve gets much steeper. On one hand, we say that open source allows people to review the code, but opening the source alone does not make that as easy as possible; the code needs internals documentation, or it will often take outsiders simply too much time to penetrate it. Many open source projects end up with a clique who understand the internals of the software because they wrote it, while those on the outside struggle to break in. Even for an expert programmer, having documentation speeds up becoming familiar with the code immensely.
Not documenting code is a poor practice; open source developers should document what they are doing, both for others and to save themselves the time of explaining things over and over again to newcomers.
Open Source (Score:2)
It's BECAUSE of open source we even learned about Heartbleed. If it was closed source the hole would still exist hidden in the shadows.
Could bad guys be staring at git feeds? (Score:2)
If I'm a malicious hacker, or the NSA, but I repeat myself....
I'd now be (if I wasn't before) checking the feeds for gnutls, nss, and openssl, hoping to catch the next bug before anyone else, so I can exploit it.
That said, I'd also be checking out the best decompilers to see if that helps me find bugs in closed source code. I'm sure people have looked online for Windows source code to see if there are any ways to exploit it. In this case, a small group of hackers would have the code, and would necessarily want to
"Many eyes" discovered quickly, didn't disclose (Score:2)
Data point: the NSA reportedly discovered this bug within days of its placement, and didn't disclose it.
When the bad guys have a lot more eyes than the good guys, it skews the math.
Heartbleed disclosure timeline .. (Score:2)
From Codenomicon's "Company" page: "Howard A. Schmidt, Chairman of the Board
It's not a discussion... (Score:2)
It's a statement. It's a statement by a dogmatist on one side, and there will be statements by dogmatists on the other side. Two dogmatists don't have discussions--they just try to shout one another down.
Yeah, if you get enough eyeballs on a problem, sure it might be easier to solve. But users != eyeballs. I suppose being open source, it is easier to get eyeballs on something, but it is also easier for the black hats to get eyeballs on something as well and exploit it.
In the end, neither side in the dog
Given the number of Windows exploits... (Score:2)
I would take my chances with FOSS. How crazy is the statement that XP cannot be safely used without Microsoft support, given that they had 13 years to fix bugs in a feature-frozen release? In an open source release used for so long and on the same scale, the chances of finding a new catastrophic bug would be slim. For example, Heartbleed was found in about two years. Likewise, the goto fail bug in Apple's open source was discovered in a relatively short time.
Not to mention that if new bugs were found in desupported but still
safe languages (Score:2)
Heartbleed is a perfect example of why software should be written in "safe" languages, which can protect against buffer overruns, rather than unsafe languages like C and C++.
Of course, the problem is that if you try to distribute open source software written in a safe language, everyone bitches and whines about how they don't have a compiler for that language, and how run-time checking slows the software down by 10%. Personally I'd rather have more reliable software that ran 10% slower, than less reliable software that ran 10% faster.
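And for the record, the check a safe language performs automatically is exactly the one that was forgotten. A sketch in C, using a made-up handler of the same shape as the heartbeat code (an illustration, not the actual patch):

#include <stdlib.h>
#include <string.h>

/* The two early returns are what a safe language would enforce for free. */
unsigned char *echo_payload_checked(const unsigned char *msg, size_t msg_len)
{
    if (msg_len < 2)
        return NULL;                  /* too short to even carry a length field */

    size_t claimed = ((size_t)msg[0] << 8) | msg[1];
    if (claimed > msg_len - 2)
        return NULL;                  /* claims more payload than was actually sent */

    unsigned char *reply = malloc(claimed);
    if (reply == NULL)
        return NULL;
    memcpy(reply, msg + 2, claimed);  /* now provably within bounds */
    return reply;
}

In a bounds-checked language, the equivalent oversized read would simply raise an error at run time, at the cost of exactly the checking people whine about.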
It doesn't. (Score:5, Insightful)
1. Proprietary software could have a million bugs like this. You just wouldn't know it. They do not become less dangerous because they are proprietary, nor do security flaws become more dangerous because they are in open-source code.
2. Open-source software at least has the possibility of being looked at over and over. Proprietary code may be reviewed or not depending on the resources, interest, and monetization capability of that code. A possible review by all relevant coders in the world is always more review than by a limited team of programmers and analysts at one company.
3. The real problem with Heartbleed is the time that passed between code being written and a bug being discovered. That delay exacerbates the security problem. However, there will be some sort of statistical (probably Poissonian or normal) distribution of the time required to catch a bug since introduction into code. As with anything, there are outliers. Heartbleed with its serious and longstanding flaw must be considered an outlier unless shown otherwise. I have not seen evidence that this happens on a regular basis with any software, FOSS or otherwise.
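To put a number on "outlier" (my illustrative arithmetic, not a measured figure): if independent discovery of a given bug is a Poisson process, the time to first discovery is exponentially distributed,

    P(still undiscovered after time t) = exp(-t / T),

where T is the mean time-to-discovery. Even with a short mean of, say, T = 6 months, exp(-24/6) = exp(-4), or roughly 2% of bugs, would survive two years; across the thousands of latent bugs in major codebases, a few multi-year survivors like Heartbleed are exactly what the distribution predicts.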
I would appreciate it if future Slashdot discussions were let out through the upper orifice with some maturation period in the brain, rather than through the lower orifice after festering in the colon.
It confirms that open source is safer. (Score:3)
In a closed source world we would have everybody vulnerable without anyone knowing about it. That only helps if you're one of the people abusing it, because nobody is taking precautions against it. Now we are actually able to respond to a real threat that we can explore deeply. Sorry, closed source is not going to give me confidence.
Ok, ok, I bite (Score:3)
After a lot of soul searching about whether or not I should actually honor this obvious attempt at trolling with a comment, I think I should, lest someone actually take it seriously and believe it.
Allow me to take you on an excursion into the world of security. Before you get your hopes up, it's not as glamorous or kinda-sorta-shady-sinister-blackhat as you might think. But I'll try to make it as interesting as it can be.
Part of security is audits: attempts, in a nutshell, to find out whether there are weaknesses in the surface you're auditing. For example, you prod at a server, check its ports, make sure that everything that answers does so in a way that cannot be exploited, and so on.
Those that at least dabbled in security will know about the various "boxes" used to describe the "rules of engagement" in such an audit. Most commonly known, I'd guess, are "black" and "white" box tests. In a "black box" test, you get no or very little information about your target and your task is to find out whatever you can find about it. A "white box" test is the exact opposite, where you get full disclosure of your target's makeup, e.g. what services are running, at what patch level, often even what purpose they serve and what department they belong to, and so on.
One might now think that the more "normal", more "useful" test is a black box test. Because, hey, if I tell you everything, what the hell would you test? But, know what? A black box test is something that you'd do to test the tester's ability, not that of your target. With a black box test you can rather find out just how much the guy you hired to do your audit actually knows about the whole shit.
If you actually want to test the target, you disclose just about any information there is. That might sound odd at first, but when you think about it, it starts to make a lot of sense. This information could be available to a potential attacker. A disgruntled ex-employee could have it. Or someone who spends a lot of time social engineering and prodding could gain it somehow. Assuming that you can increase your security by withholding information from a potential attacker at best gives you a false sense of security, because you can NEVER say with even a semblance of certainty that a potential attacker CANNOT have that information. Like I said before, all it takes is a pissed-off ex-admin and this attacker has ALL the information.
And it's rather trivial to sell information these days...
Now, what does this have to do with the question open vs. closed source?
It means that just because YOU do not have the information does not mean that your attacker does not have it. Closed source is akin to the black box in the aforementioned example, open source the white box. When you audit closed source, you will learn more about the abilities of your auditor rather than about the security level of the software you audit.
Heartbleed is Good for Opensource (Score:4, Insightful)
It's not about open or closed source IMHO (Score:4, Interesting)
Open Source Heartbleed (Score:5, Insightful)
Fixed within 24 hours on 187 servers running open source openssl libraries, versions 1.0.1f and earlier.
I still do not have fixes for about 5 proprietary customer products, and there has been no word from 3 of them if they intend to fix them.
I have informed my customers that they should consider moving from the proprietary products IF they have the cash to do so.
I really do not see your point in asking the question.
You cannot design and build secure software to begin with.
You need to have the source code for the foreseeable future now, because of the world we live in.
Very very bad people are coming out of the pit and they want your infrastructure, your data and your intellectual property.
But above all, they want control of you.
Open Source can prevent a world like that from taking hold, but it cannot save a fool from his foolishness.
Re:Open source was never safer (Score:5, Insightful)
I don't know; Microsoft got caught with a password check you could waltz through using all spaces, which is slightly worse than forgetting to place a character limit back onto something. Admittedly the stakes are not the same, but you can check it, and enough people do that it works.
It's safer in terms of checking for back doors; sloppy coding, anyone can do.
Re:Open source was never safer (Score:4, Interesting)
Closed source is not inherently safer. Raymond's proposition is theoretically sound; however, in actual practice, the NSA has "many eyes"...
Re:Open source was never safer (Score:4, Insightful)
The NSA is why my hair has fallen out and my gut has gotten big. They're also behind the big mudslide in Washington. In fact, they are the boogeyman for EVERYTHING!
God you people get annoying.
Why is Raymond's claim theoretically sound? (Score:5, Interesting)
Raymond's proposition is theoretically sound
No, it isn't. It's nonsense and it always has been.
There is plenty of evidence for the effectiveness of good code reviews, but most of it shows rapidly diminishing returns with the number of reviewers. You get much of the benefit from having even one or two additional people read over something. By the time you've had more than four or five people take a look, the difference in effectiveness from adding more barely even registers, unless one of the additional reviewers has some sort of unique perspective or expertise that makes them not like the others.
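The back-of-the-envelope arithmetic behind that (my gloss, with made-up probabilities): if each of n reviewers independently spots a given bug with probability p, then

    P(someone finds it) = 1 - (1 - p)^n.

For an obvious bug (p = 0.5), the fifth reviewer adds barely three percentage points over the fourth. For a Heartbleed-subtle bug (say p = 0.001), even a thousand reviewers only get you to about 63% -- a number that cuts both ways in this debate.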
Given that almost every major FOSS system software project has had its share of security bugs, there is really very little evidence to support Raymond's claim at all. It's not like it has ever been taken seriously outside the FOSS fan club, but there are a lot of FOSS fans on Slashdot, and so plenty of comments (and positive moderations) reinforce the groupthink as though it's some inherent truth.
Re:Why is Raymond's claim theoretically sound? (Score:5, Insightful)
There is plenty of evidence for the effectiveness of good code reviews, but most of it shows rapidly diminishing returns with the number of reviewers.
To me this is an argument *for* open source software. It *takes* LOTS of eyes to catch bugs, *because* there are diminishing returns from adding more code reviewers. It is only by having hundreds or thousands of them that you can hope to catch the ones that would otherwise go unnoticed.
By the time you've had more than four or five people take a look, the difference in effectiveness from adding more barely even registers, unless one of the additional reviewers has some sort of unique perspective or expertise that makes them not like the others.
And one easy way to have a diverse group of code reviewers is to have a lot of them.
Given that almost every major FOSS system software project has had its share of security bugs, there is really very little evidence to support Raymond's claim at all.
Every piece of software of any reasonable size has security bugs. The fact that we know about them is because someone found them, which is exactly what is supposed to happen.
Re: (Score:3)
But how many FOSS projects really have diligent review of all their code by anything like that many people? For many projects, getting a change accepted requires only the approval of one or two others. Activities like the current detailed review of TrueCrypt are the exception, not the rule.
A lot of the bugs are caught well after the code is accepted. People sometimes just randomly spot things. The probability is low, but over enough time and with enough eyeballs, you catch bugs this way.
I was trying to hunt down a bug in my own code and ended up catching a bug in motif once. This was only possible because the source was open. I don't think this is such a rare occurrence. Even if 1 in 10 programmers spots 1 bug in open source software in their life, that's like hundreds of thousands if no
Re: (Score:3)
You should consider avoiding pretty much any OS kernel then for the same reasons!
Re: (Score:3)
Safer != Perfect
Open Source is not perfect. It also does not help when you have large commercial institutions RELYING on the source code in a security critical role under constant attack by well-funded adversaries, AND the developers of said open source code are so pitifully underfunded [marc.info], AND the commercial proprietors that cause said open source library to become a high-value target are only willing to invest in features [marc.info], and not improvements that would lead to better quality and lesser likelihood o
Re: (Score:2)
Re:Open source was never safer (Score:5, Insightful)
Closed source was always safer.
One word for you: Microsoft. Maybe two: Adobe.
Re:Open source was never safer (Score:5, Interesting)
Closed source is hazardous in many ways. Along with being more frequently targeted, the NSA revelations showed that Microsoft worked with the NSA when deciding how quickly to close some holes. Another hazard is the threat of being attacked and/or sued by companies whose products were found to have problems.
No question the heartbleed thing is a huge and embarrassing problem. But you know? It's actually kind of hard to count the number of high-profile vulnerabilities in F/OSS software, as not a whole lot come to mind. On the other hand, the list is enormous for closed source from large companies... also hard to count, but for another reason.
It does highlight one important thing about F/OSS, though. Code auditing and other security practices matter even when a project has enjoyed a long, stable and wide deployment; just because a project is very mature doesn't mean something hasn't been sitting there, simply unnoticed, for a long, long time. People need wakeup calls from time to time, and F/OSS developers can be among the worst when it comes to their attitudes about their territories and kingdoms. (I can't ever pass up the opportunity to complain about GIMP and GNOME... jackasses, the lot of them.)
Re: (Score:3)
Closed source was always safer.
One word for you: Microsoft. Maybe two: Adobe.
THIS! It's funny how Microsoft has all the issues that they do, and yet when a problem shows up in anything else, the fanbois instantly ejaculate LOOK!! SEE???
Sorry kids, Windows has a many-year legacy of needing constant security updates, way too many for you to be braying about this as proof of the bankruptcy of FOSS. We get it. But Redmond products have a lead that will never be equaled.
Re:Open source was never safer (Score:4, Funny)
2003 called, they want your Microsoft back.
If only we could.
Re:That's kind of curious (Score:4, Interesting)
I think the grandparent was right. MS now is hugely better than the MS of 10-15 years ago. I'm not going to try and objectively prove that as I don't care enough about MS and probably couldn't anyway.
But the NT4 to XP/2003 era was appalling security wise - but they changed that. IIS went from swiss cheese to one of the tougher web servers to break. You just don't hear any more about the kinds of problems they used to have. If you endured those days or just laughed from the sidelines, you don't need any hard data to see that they have improved a lot.
I found this paper [openbsd.org] from Theo de Raadt illuminating though. He steps through 10+ years of OS hardening techniques OpenBSD has put in place to prevent badly written applications misbehaving. Towards the end he summarises how other platforms do this stuff - the only other platform that did it all by default was Windows (yikes!).
Re:Open source was never safer (Score:5, Insightful)
Only if one buys that "security through obscurity" is a legitimate form of network safety. A decade's worth of Internet Explorer and ActiveX vulnerabilities would suggest you're wrong.
Re:Open source was never safer (Score:5, Informative)
Encryption is meant to make the original text obscure; the means of encryption, however, should not remain obscure. What "security through obscurity" refers to is the common and naive practice of assuming that no one will guess your security methods, when in fact people do find this stuff out -- e.g., assuming that no one will guess your backdoor debugger password. It is fine to start with a strong set of security practices and then, only after those are in place, make things more obscure on top. But usually when something is made obscure, it is because the security was really weak in the first place.
As for ActiveX, the problem was not that the end user would go and hunt down a trusted plug-in and install it, but that it relied upon the web to tell you whether something was trusted and then automatically installed it (for the average user, this happened even without their knowledge). This was done at the same time that Java was promoted as an alternative: a system designed for security by sandboxing the code (though of course it had flaws) as well as being cross-platform, whereas ActiveX was all about taking plain x86 code and executing it as long as it was signed.
The real problem with ActiveX was the idiotic idea from Microsoft that it should be installed automatically without bothering the users with annoying questions such as asking for permission first; they made the same boneheaded move by allowing executables in emails to be executed without a confirmation. It wasn't until they started adding UAC that it seemed they understood what the problem was.
Re: (Score:2)
The huge problem with OSS is that if no one takes the responsibility to do a good code audit for a project, the NSA will do that independently, file the found exploits, and tell nobody.
Of course, the flip side is that if you *want* to do a good code audit for software you're using, you can do it on your own with open source software (and you can review code changes in patches before applying them). However, with closed source software, you can (usually) only take the word of the closed source company and have to trust that they haven't purposely inserted back doors into the code.
And once one company does the audit, they can share it with others (or a group of companies could share the c