Serious Network Function Vulnerability Found In Glibc
An anonymous reader writes: A very serious security problem has been found and patched in the GNU C Library (Glibc). A heap-based buffer overflow was found in the __nss_hostname_digits_dots() function, which is used by the gethostbyname() and gethostbyname2() function calls. A remote attacker able to make an application call either of these functions could use this flaw to execute arbitrary code with the permissions of the user running the program. The vulnerability is easy to trigger, as gethostbyname() can be reached remotely in any application that does DNS resolution. Qualys, who discovered the vulnerability (nicknamed "Ghost") during a code audit, wrote a mailing list entry with more details, including an in-depth analysis and exploit vectors.
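The Qualys advisory also includes a small self-test program; the sketch below captures the same idea (an all-digits name sized so the missing sizeof(char *) clobbers an adjacent canary), though the buffer arithmetic here is an approximation of theirs rather than a verbatim copy:

#define _GNU_SOURCE
#include <netdb.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>

#define CANARY "in_the_coal_mine"

/* The canary sits right after the buffer handed to gethostbyname_r();
   it only gets clobbered if glibc writes past the end of that buffer. */
static struct {
    char buffer[1024];
    char canary[sizeof(CANARY)];
} temp = { "buffer", CANARY };

int main(void)
{
    struct hostent resbuf, *result;
    int herrno;
    char name[sizeof(temp.buffer)];

    /* All digits, so the "digits and dots" path is taken, and sized so the
       miscalculated size exceeds the buffer by about sizeof(char *). */
    size_t len = sizeof(temp.buffer) - 16 * sizeof(unsigned char)
                                     - 2 * sizeof(char *) - 1;
    memset(name, '0', len);
    name[len] = '\0';

    int rc = gethostbyname_r(name, &resbuf, temp.buffer,
                             sizeof(temp.buffer), &result, &herrno);

    if (strcmp(temp.canary, CANARY) != 0)
        puts("vulnerable");
    else if (rc == ERANGE)
        puts("not vulnerable");
    else
        puts("inconclusive");
    return 0;
}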
Switch back to original Linux libc? (Score:5, Funny)
The libc -> glibc switch was so much fun, that I think we should do it again in reverse!
From TFA (Score:5, Informative)
So it's actually already been fixed. All that's needed here is for some distributions to push the fix out.
Re: (Score:3)
So it's actually already been fixed. All that's needed here is for some distributions to push the fix out.
But .. but now it has a CVE number and everything - so it must be scary
Re:From TFA (Score:5, Informative)
But .. but now it has a CVE number and everything - so it must be scary
Written by somebody who clearly neither manages a large number of hosts exposed to the Internet nor manages multiple environments where some new hosts are luckily patched alongside older hosts that have to run *slightly* older releases of distros for one reason or another.
This IS a big deal.
Re: (Score:2)
Ubuntu has a newer LTS release, 14.04, which uses glibc 2.19.
Unfortunately, the rest of them all use older GLIBC versions in their current stable, even the ones whose current stable was released in the middle of last year.
Re:From TFA (Score:4, Informative)
I just installed glibc updates for Debian, so I presume the fix has been pushed.
Re: (Score:3)
At most sizeof(char *) bytes can be overwritten (ie, 4 bytes on 32-bit machines, and 8 bytes on 64-bit machines). Bytes can be overwritten only with digits ('0'...'9'), dots ('.'), and a terminating null character ('\0').
With only 4 bytes that can be overwritten at most, you would think not much could be done, and indeed, mostly they were only able to make things crash. Most servers don't do a DNS lookup of a remotely supplied address, but mail servers can, to verify the sender is correct.
Astonishingly, even without being able to write assembly shell-code, they were able to get the Exim mail server to execute arbitrary remote commands. That is
Re: (Score:2)
Don't they now? Web, ssh, FTP, IRC, plenty of servers call gethostby functions as part of standard operation.
You read incorrectly; I'll quote it again: "Most servers don't do a DNS lookup of a remotely supplied address."
One of the examples they used was ping. Of course ping does a DNS lookup of the address supplied by the user, but unless you have inetd in a really weird configuration it won't be started remotely. If ping crashes, or even executes arbitrary commands because of a specially crafted command-line, it's not a security vulnerability.
Re: (Score:2)
If ping crashes, or even executes arbitrary commands because of a specially crafted command-line, it's not a security vulnerability.
That's a pretty sweeping statement to make. Most interesting security vulnerabilities (IMO) are the results of multiple smaller issues and/or design decisions that can be chained together.
For example, a lot (most?) of the Linux distributions I see have ping's SUID bit set, and it is owned by root. So, yes, ping executing arbitrary commands absolutely *can* be a security vulnera
Re: (Score:3)
Re:From TFA (Score:4, Informative)
1) A (more or less) specific sequence of function calls. Merely calling gethostby* itself won't do it.
2) The eventual call to gethostby on a string supplied by a hostile user.
3) Another buffer that hostile users get to fill (but not overflow).
4) A weakness in your program structure that allows the four overwritten bytes to reference the other buffer.
5) Include a service that runs things on the command line.
Exim allowed all of those. You might be able to get away without number 5 present, but the program would need some other weakness to make it exploitable.
Not all code is vulnerable - getaddrinfo() is fine (Score:5, Informative)
The affected calls are gethostbyname() and friends, which have been deprecated in favor of the more protocol-transparent getaddrinfo()/getnameinfo() APIs. If you use IPv6, getaddrinfo() is the only way (gethostbyname() and friends are AF_INET/IPv4-only functions), and it's a protocol-transparent way to do DNS lookups: it can return AF_INET, AF_INET6 and any other valid address family supported by the system and DNS.
Deep down, if you look closely, they mention that code using getaddrinfo() is not vulnerable to the bug.
Shortly after learning about getaddrinfo() I stuck to using it - far easier to use than gethostbyname() and less messy in the end. The only complication is having to call freeaddrinfo() when you're done.
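For anyone who hasn't switched yet, a minimal protocol-agnostic lookup with getaddrinfo()/freeaddrinfo() looks something like this (just a sketch with token error handling; the helper name is mine):

#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <stdio.h>

/* Resolve a host with getaddrinfo() and print every address it returns;
   works for AF_INET and AF_INET6 alike, no gethostbyname() anywhere. */
static int print_addresses(const char *host)
{
    struct addrinfo hints = {0}, *res, *p;
    char addr[64];

    hints.ai_family = AF_UNSPEC;      /* v4, v6, whatever resolves */
    hints.ai_socktype = SOCK_STREAM;

    int rc = getaddrinfo(host, NULL, &hints, &res);
    if (rc != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
        return -1;
    }
    for (p = res; p != NULL; p = p->ai_next)
        if (getnameinfo(p->ai_addr, p->ai_addrlen, addr, sizeof(addr),
                        NULL, 0, NI_NUMERICHOST) == 0)
            printf("%s\n", addr);

    freeaddrinfo(res);                /* the one extra step mentioned above */
    return 0;
}

int main(int argc, char **argv)
{
    return (argc > 1 && print_addresses(argv[1]) == 0) ? 0 : 1;
}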
Re: (Score:3)
However, it's not like gethostbyname() is a rare call. I suspect that well over 99% of net-aware applications are still using it. This affects just about everything that's talking over the internet.
Re: (Score:3)
True, but gethostbyname() is ancient, and a program that wants to support IPv6 can't use it. So I think the number of programs actually vulnerable is far lower. Remember, gethostbyname() only works with AF_INET - while getaddrinfo() works with AF_INET, AF_INET6 and any other protocol that uses so
Re: (Score:3)
As pointed out in the article, the program must use gethostbyname() on a name supplied by the attacker.
A much bigger mitigating factor is that the bug is only exercised if the name looks like a numerical address, and according to their search most software first checks this using inet_aton() and only calls gethostbyname() if this fails, thus avoiding the bug.
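That idiom looks roughly like this; it's a sketch of the common pattern, not code from any particular project (resolve_ipv4 and its contract are my own):

#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

/* Try the string as a numeric IPv4 address first; only fall back to
   gethostbyname() when it isn't numeric, so digits-and-dots input never
   reaches the buggy fast path. */
static int resolve_ipv4(const char *name, struct in_addr *out)
{
    if (inet_aton(name, out) != 0)
        return 0;                          /* already numeric, no DNS call */

    struct hostent *he = gethostbyname(name);
    if (he == NULL || he->h_addrtype != AF_INET)
        return -1;
    memcpy(out, he->h_addr_list[0], sizeof(*out));
    return 0;
}

int main(int argc, char **argv)
{
    struct in_addr a;
    if (argc > 1 && resolve_ipv4(argv[1], &a) == 0) {
        printf("%s\n", inet_ntoa(a));
        return 0;
    }
    return 1;
}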
Think you're immune from attacks? (Score:5, Funny)
Don't be so glib, see?
I'll be here all night folks. Tip your servers. Make sure they're bolted in, though.
Re:Think you're immune from attacks? (Score:5, Funny)
Don't be so glib, see?
I'll be here all night folks. Tip your servers. Make sure they're bolted in, though.
Don't blow your stack if nobody applauds. It's just that we're overflowing with bad puns, and the funny bits get flipped around, and in the end all we see is some stupid zero on the stage who's only in it for the cache anyway.
Re: (Score:2)
and the funny bits get flipped around
chmod 101 /dev/joke
Accidental bugs? (Score:1, Interesting)
I don't think so. There are just too many of these buffer overflow security holes in commercial and open source software for it to be chance. In over two decades of programming in every language and platform that came along, from Z8 and x86 assembly, then FORTRAN, C, Pascal, ... Java, JavaScript, Python, awk, Objective-C, ... from embedded coding on routers, switches and misc. controllers, to numerous versions of MS-DOS, Windows, Linux and iPhones, I have yet to have one such buffer overflow bug in my code. It's th
Re: (Score:3)
There must be agencies seeding these projects, commercial and open source, with toxic contributors injected there to deliberately contaminate the code with such bugs. The further fact that one never sees responsible persons identified, removed and blacklisted suggests that contamination is top down.
Or, you are yourself a toxic seed planted by The Man in order to foment FUD and make good people not want to be part of these projects. Or something like that. Give it a rest with the absurd conspiracy crap.
Re: Accidental bugs? (Score:3)
Never attribute to malice that which is adequately explained by stupidity.
There are too many stupid people in the world. Stupidity isn't necessarily a permanent thing: it could be caused by time constraints, personal problems or even lack of/too much caffeine. Or just because one doesn't think of the problem from start to finish.
But then this could be what "they" want you to believe :)
Re: (Score:1)
" I have yet to have one such buffer overflow bug in my code."
Can't decide: Satirical troll post or somebody that's actually that egomaniacal (and in need of a pink slip if his employer is rational).
Re: (Score:2)
Re: (Score:2)
I have yet to have one such buffer overflow bug in my code.
That you know of. Besides, I'm sure you've had many that you've caught during the standard code -> compile -> run -> segfault -> debug cycle, but the more subtle ones are harder to trigger.
It's the most basic rule to check for buffer boundaries that even beginner programmer learns it quickly.
Depending on what the code is doing and what kind of legacy cruft you're dealing with it's not always trivial.
There must be agencies seeding these projects, commercial and open source, with toxic contributors injected there to deliberately contaminate the code with such bugs. The further fact that one never sees responsible persons identified, removed and blacklisted suggests that contamination is top down.
More likely the other devs feel like it's bad form to drag the names of past contributors through the mud in public. Particularly when the reviewers missed the bug as well.
Re: (Score:2)
I have yet to have one such buffer overflow bug in my code
Yea, right. You know the authors of this function probably thought that too. They had no counter examples until now, just like you and your code.
Re: (Score:2)
Of course my code, like any other, has bugs. For example, most of the time if I add several pages of new code, it won't even compile, let alone run correctly right away. I just never have buffer overflows. It's somehow ingrained to check for boundaries (either on the run, or when efficiency matters by algorithmic certainty) as the algorithm advances through the data. Among my bugs, mostly it is a wrong variable being used, or used before it was initialized as expected or after it has 'expired', or variable/
Many mitigating factors, not THAT dangerous (Score:2)
The article lists a long string of mitigating factors that make it not as dangerous as it might first appear. Someone else already mentioned that it doesn't affect applications that are IPv6-ready; both IPv4 and IPv6 addresses are resolved with the same (safe) function in most software that's IPv6-capable.
Also, at most 4 bytes can be overwritten on 32-bit, 8 bytes on 64-bit, and they can only be overwritten with ASCII digits 0-9, dots, and a terminating null. (So really three bytes on 32-bit, 7 bytes on 6
somewhat true. (Score:2)
That's somewhat true, their exploit will show how to exploit EXIM if that's installed and open via the firewall on vulnerable systems. Most other software using glibc isn't vulnerable for the reason I listed above.
Also, with the 3-7 bytes they can overwrite, less than the size of a pointer, their exploit may be very, very limited. Specifically, it can't specify a memory address to jump to where the main exploit is found, because there's not enough room for a pointer to a memory location. We'll see what they com
good point, but ONLY pointer (Score:2)
That's true, you could point to certain addresses. You're just left with no room to jump to that address or do anything else with it - the pointer takes up all of the space.
Also, of course, the other bytes of the pointer are restricted to eleven values each. I have no doubt there'll be some exploit; it's just likely to be rather limited.
Awwwww crap (Score:2)
This has me more concerned than some of the other recent bugs, primarily because it's so easy to exploit by script kiddies.
Plus, there are huge, vast, barely conceivable numbers of network-attached embedded devices that use the gethostbyname() call. What percentage of these are remotely update-able? What percentage of these will have their firmware re-flashed?
This one seems like it gives black-hats the ideal way to get a swarm army of (relatively) weak and/or dumb devices. Yet even these weak, dumb devices
Re: (Score:2)
Why not strncpy or strlcpy (Score:2)
resbuf->h_name = strcpy (hostname, name);
In my (admittedly limited) recent c coding I've used strlcpy rather than strcpy. Is there a good reason not to use this (or strncpy) in place of strcpy?
strlcpy (hostname, name, buflen);   /* buflen = size of the hostname buffer */
resbuf->h_name = hostname;
(OK, you may get a truncated copy, but isn't that easier to spot and deal with than a potential buffer overflow?)
Re: (Score:2)
Actually, the most logical choice would have been to use memcpy, because they already called strlen(name) earlier.
Basically, they are doing the following:
- compute the size required to store 2 pointers (THAT SHOULD BE 3) followed by strlen(name)+1 characters
- check that the buffer is large enough and reallocate it if needed
- store 3 pointers followed by name in the buffer.
In that case, the obvious value for the additional size argument passed to memcpy, strlcpy or strncpy is strlen(
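To make that arithmetic concrete, here is a stripped-down, self-contained sketch; the variable names only loosely mirror the glibc source and the declarations here are stand-ins rather than the real ones:

#include <netinet/in.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *name = "1.2.3.4";

    struct in_addr host_addr;     /* the parsed address               */
    char *h_addr_ptrs[2];         /* address pointer list + NULL      */
    char *h_alias_ptr[1];         /* alias pointer list               */

    /* What the buggy code reserved: two of the three blocks + name.  */
    size_t size_needed = sizeof(host_addr)
                       + sizeof(h_addr_ptrs)
                       + strlen(name) + 1;

    /* What it actually wrote: all three blocks + name.               */
    size_t size_written = sizeof(host_addr)
                        + sizeof(h_addr_ptrs)
                        + sizeof(h_alias_ptr)   /* the forgotten one  */
                        + strlen(name) + 1;

    printf("reserved %zu bytes, wrote %zu: overflow of %zu (== sizeof(char *))\n",
           size_needed, size_written, size_written - size_needed);
    return 0;
}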
Re: (Score:2)
The stability difference was like night and day.
Re: (Score:2)
strncpy will not overflow the buffer provided you pass the size of the buffer (if you don't pass the size of the buffer, *none* of the safer functions are going to help). Its problem is that it will not write a nul at the end of the buffer, so a read will run right off the end. It also wastes a huge amount of time filling the unused part of the buffer with nul.
strlcpy is far, far better and does pretty much what is wanted.
However in this case they really did try to figure out if the buffer would overfl
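To illustrate the difference: glibc didn't ship strlcpy at the time, so my_strlcpy below is my own stand-in following the usual strlcpy contract (never overflow, always terminate, return strlen(src) so truncation is detectable):

#include <stdio.h>
#include <string.h>

static size_t my_strlcpy(char *dst, const char *src, size_t dstsize)
{
    size_t srclen = strlen(src);
    if (dstsize != 0) {
        size_t n = srclen < dstsize - 1 ? srclen : dstsize - 1;
        memcpy(dst, src, n);
        dst[n] = '\0';                      /* always terminated */
    }
    return srclen;                          /* >= dstsize means truncated */
}

int main(void)
{
    const char *src = "far-too-long-hostname.example.com";
    char a[16], b[16];

    /* strncpy: never overflows, but leaves dst UNTERMINATED when src is
       too long, so a manual fix-up is mandatory. */
    strncpy(a, src, sizeof(a));
    a[sizeof(a) - 1] = '\0';

    /* strlcpy-style: terminated, and the return value flags truncation. */
    if (my_strlcpy(b, src, sizeof(b)) >= sizeof(b))
        fprintf(stderr, "truncated: %s\n", b);

    printf("strncpy + fix-up: %s\n", a);
    return 0;
}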
Raspbian vulnerable (Score:5, Informative)
According to directions side-thread, glibc versions prior to 2.19 are vulnerable. Checking my machines, Slackware-current and Lubuntu-14.10 are fine. Only my poor tiny Raspberry Pis are vulnerable (2.13). But they run slowly enough I can watch the gethostbyname() lookups myself :)
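If anyone wants to check their own boxes, glibc will report its version string through the GNU-specific gnu_get_libc_version() call; keep in mind that distros backport fixes, so the version string alone isn't conclusive:

#include <gnu/libc-version.h>
#include <stdio.h>

int main(void)
{
    /* Per the advisory, the bug was fixed upstream between 2.17 and 2.18;
       older versions may still carry a distro-backported fix. */
    printf("glibc %s\n", gnu_get_libc_version());
    return 0;
}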
Re: (Score:2)
this IS a bummer. upgrading libc is not usually an easy and simple thing; it's too central and everything touches it.
so, this means that the pi is not really safe to be on a public net now, is it?
wonder when Raspbian will upgrade this pkg. hope they do it soon.
Read further (Score:2)
In particular, we discovered that it was fixed on May 21, 2013 (between the releases of glibc-2.17 and glibc-2.18).
Unfortunately, it was not recognized as a security threat; as a result, most stable and long-term-support distributions were left exposed (and still are): Debian 7 (wheezy), Red Hat Enterprise Linux 6 & 7, CentOS 6 & 7, Ubuntu 12.04, for example.
Re: (Score:2)
Is it Serious ovrflw problem or potential problem? (Score:2)
A very serious security problem has been found and patched in the GNU C Library (Glibc). A heap-based buffer overflow was found in __nss_hostname_digits_dots() function, which is used by the gethostbyname() and gethostbyname2() function calls.
In all the legal ways of using the function, and recognizing PATH_MAX == 256, is there actually a problem? So it is only a potential problem.
So, some library code was found that does not check for potential overrun. By broadcasting the routine name, hackers or ganifs will attempt to break i
Re:Open source code is open for everyone (Score:5, Insightful)
I don't get it. Proprietary software has all sorts of serious vulnerabilities. Why is it that when a vulnerability is found in FOSS, you people all come out and mock it while ignoring all the incompetence of proprietary software?
FOSS *is* more secure, and that's true even with the occasional vulnerability. You're extremely illogical to point to some vulnerabilities and conclude that it isn't more secure. How many vulnerabilities are not known about because no one can look at the source code?
Re:Open source code is open for everyone (Score:4, Funny)
I don't get it......Why is it that when a vulnerability is found in FOSS, you people all come out and mock it while ignoring all the incompetence of proprietary software?
I see that this is your first visit to Slashdot. Welcome!
Heartbleed (Score:4, Insightful)
How many years was Heartbleed around before anyone noticed? Apparently "many eyes" were not reading that bit of code.
Re:Heartbleed (Score:4, Insightful)
How many years have various bugs been in proprietary software that no one has noticed (and most don't have a chance to notice)? This is just illogical thinking.
Yes, we get it. Software is made by humans. Mistakes will be made, whether it's free/open source or not. The point is, FOSS provides more security by allowing more eyes to see the code and the ability to get anyone to publicly audit the code. Sometimes big vulnerabilities won't be discovered for a long time, but that applies even more to proprietary software; don't forget that.
Re: (Score:2, Insightful)
Even fewer people do it for free.
No one has any evidence on that, one way or the other.
What we do know is, it's illegal to fix proprietary software.
Re: (Score:1)
No, it's illegal to disseminate the source code of proprietary software. Issuing a binary patch to be applied to an existing binary is 100% legal. Difficult, but legal.
If you can find a bug in proprietary software using just a debugger, and can determine why that bug occurs and provide a binary patch, you're 1) a damned good programmer and 2) not breaking the law.
Agreed on point 1. On point 2, you could easily be stumbling into a legal minefield. I'd hope for a finding of fair use, but that is decided on a case-by-case basis. I doubt the courts would distinguish between publishing a derived work and publishing detailed instructions for transforming a work ("a binary patch"), since the end result is the same.
Re: Heartbleed (Score:2)
You are wrong. It is illegal to fix most proprietary software yourself. The EULA for most closed software, including all of Microsoft's proprietary software, prohibits all reverse engineering by end users. You could issue a binary patch if you wanted, perhaps, but creating that patch would violate license agreements and be illegal under copyright law.
These long standing flaws in free software are not there because of the development model, they were eventually found because of it--people eventually looked hard e
Re: Heartbleed (Score:2)
Re: (Score:2)
First, there have been different court rulings and laws affecting EULAs, so while your "usually unenforceable" may be true, you can't rely on it.
Second, it's real easy to get into trouble in an unclear legal situation, and winning the suit against you may be almost as bad as losing. (The only winning move is not to play.) Many people aren't going to risk that.
Re: (Score:2)
And in large parts of the world that is an unfair contract term and unenforceable. It is also in large parts of the world a protected right in law.
Re: (Score:2)
That's great in theory, but no one likes reading lines and lines of old code looking for a potential error. Know what? Even fewer people do it for free. At least in proprietary software, people are paid to do it.
In proprietary software and free alike many people are paid to do it.
They do it so their masters can then exploit the bugs they find and pry into your private life, hack your bank accounts and generally fuck with you.
Re: (Score:3)
At least in proprietary software, people are paid to do it.
If you really believe that is true, I suggest you provide a list of all the companies advertising on Dice [dice.com] looking for "source code vulnerability auditors." Can't find any? That's because companies pushing out commercial software don't give a crap. It's hard enough just getting the get-the-features-out-focused managers to get why you're spending time writing tests, much less doing code reviews to look for vulnerabilities. I've even heard them say things like "It's not necessary because I found this free
Shallow bug doesn't mean non-existent. Fix obvious (Score:5, Insightful)
In case you're unaware, "bugs are shallow" doesn't mean they don't exist.
ESR's complete sentence is:
"given enough eyeballs, all bugs are shallow; or more formally: Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix will be obvious to someone."
In other words, someone will quickly quote Adam Savage and say "THERE'S your problem!". :)
The difference between a deep bug and a shallow bug is what happens after you notice a problem. A shallow bug is right there, at the surface. Function foo() is supposed to return x, but instead it returns x - 1, and there is the line of code that's the problem.
A deep bug is one where you look at function foo(), which creates an instance of class Bar, which is subclassed from IEParser, which calls friend class HTML4Lexer, which has function TagAttribute() - but TagAttribute() returns the correct value, so how the heck is it wrong in Bar? Then when you find out WHY it's wrong, you can't come up with any way of fixing it without rewriting the HTML specification.
Heartbleed is actually a great example. Many people looked at it right away, and within an hour or so there was a patch available. Those many people discussed the three or four proposed long-term solutions, and in about 24 hours we agreed that Florian's solution was best. Florian was one of the many eyes, and the bug was shallow to him - "the fix will be obvious to someone", and that someone was Florian.
look up straw man. YOURS is the straw man (Score:2)
A straw man is a position invented during an argument in order to strike it down. If I pretended Obama had proposed executing anyone who gets a promotion, then shown why that proposal is bad, that would be a strawman.
Pointing out what the original claim was isn't a straw man, that's the opposite of a straw man. Let me give you two more examples of straw man:
Bob: The water is shallow.
Sally: You're wrong, I can prove the water is wet!
Bob: The bug is shallow.
Sally: You're wrong, I can prove the bug exists!
Re:Heartbleed (Score:5, Insightful)
Apparently "many eyes" were not reading that bit of code.
Will you please actually read the quote rather than quoting an incorrect interpretation. The quote is:
"given enough eyeballs, all bugs are shallow"
It means that once a bug is found, it is shallow, i.e. quick and easy to solve for someone. It doesn't and never did mean that all bugs will be found.
Re: (Score:3, Insightful)
Will you please actually read the quote rather than quoting an incorrect interpretation. The quote is:
"given enough eyeballs, all bugs are shallow"
It means that once a bug is found, it is shallow, i.e. quick and easy to solve for someone. It doesn't and never did mean that all bugs will be found.
Actually, it's unfortunate, but I think he did mean that:
Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix will be obvious to someone.
That's his longer version of the same slogan - literally the next sentence in the essay.
It's possible to read that as meaning that every problem —once it's been found— will be fixed quickly and relatively easily, but Occam's razor says that we should understand discovery of the problem to be implicit in this statement.
But... you are right to say that FOSS is far better at fixing known bugs than proprietary software. By the late '90s, I wa
Re: (Score:2)
Re: (Score:2)
Keep modding that troll up!
Re: (Score:2)
Apparently "many eyes" were not reading that bit of code.
Will you please actually read the quote rather than quoting an incorrect interpretation. The quote is:
"given enough eyeballs, all bugs are shallow"
It means that once a bug is found, it is shallow, i.e. quick and easy to solve for someone. It doesn't and never did mean that all bugs will be found.
'many eyes' weren't reading that bit of code.
But you can fucking bet that 5 eyes were reading that bit of code!
Re: (Score:2)
How many years was Heartbleed around before anyone noticed? Apparently "many eyes" were not reading that bit of code.
Even you admit heartbleed *WAS* around (not *IS* around) and thus was found and fixed.
Clearly at least two eyes reviewed the code, found the bug, and it is now fixed as a result.
That is two more eyes than are searching through closed source code.
Two is still greater than zero so it is still a net positive.
Re: (Score:2)
Yeah, like anyone can actually read and understand OpenSSL. Perhaps it would get studied more if someone entered it into next year's IOCCC.
Re: (Score:2)
If eyes didn't see it, it actually tells us nothing about the theory of what happens when many eyes do look at it.
The theory was never, "if you open source it everybody will read it." The theory was, things that are closed source can't be read by the public, and things that are open source are sometimes read. When they are read by more people, there are more chances to see the bugs.
If you didn't know even the basics of what you were talking about... why talk?
Re: (Score:2)
This one has been around since 2000. 15 years, just getting around to auditing the main C library used by EVERYTHING now, I suppose...
It's a horseshit argument, and always has been. Just because someone CAN audit something, doesn't mean they DO. Or that they are competently doing it if they do.
Re: Open source code is open for everyone (Score:2, Insightful)
"You people"?
Gee, a little insecure, are we? News flash: software is software and has bugs. It doesn't matter which license it is under. It is still software, and no, being from Microsoft doesn't make it insecure by default any more than being GNU makes it more secure.
Yes Apple, Google, and Microsoft are mentioned here when a serious flaw is discovered. Why should Linux or anything GNU get a free pass?
Re: (Score:2, Insightful)
It's because we've put up with 30 years of FOSS community trumpeting the fact that Linux isn't hacked, only Microsoft needs anti-virus, Open Source means that "something like this will never happen".
If a community spends decades puffing its chest and talking shit it's going to get 10x the scorn when it's revealed to be as vulnerable as the next guy. The fact is that all software can be hacked. And at any given time there is a zero day exploit that can probably penetrate any system. Commercial tools used
And here's the patch (Score:2)
void *strcpy() { printf("Don't use strcpy, idiot! We told you that years ago!\n"); exit(-37); }
void *strcmp() { printf("Don't use strcmp either, idiot! We told you that years ago too!\n"); exit(-37); }
Also, according to the articles I've read about this, the somewhat more official patch came out in 2013, but wasn't marked as a "security" patch so it only made it into the newer OS versions, but wasn't retrofitted into the older ones. So it'd be fixed in Ubuntu 14.04, but not in the 12.04 LTS version.
Re: (Score:2)
Yeah, I probably should have blamed a different one of the non-length-limited strXXXX() functions, but strcmp() will still do Bad Things if you hand it one or two pointers to non-null-terminated strings.
And yes, stderr would have been the better choice, but the important thing is to replace the implementations of dangerous functions with something that fails safely, and if you can't do it at compile or link time, it's still safer to do it at run time than to run the unsafe version.
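A compile-time variant of the same idea is GCC's #pragma poison, which makes any later use of the named identifiers a hard error; just a sketch of the mechanism, and obviously not something glibc itself does:

#include <string.h>   /* include first so the declarations themselves are fine */

#pragma GCC poison strcpy strcat sprintf

int main(void)
{
    char buf[8] = "ok";
    /* strcpy(buf, "too long");   <- uncommenting this is now a compile error */
    return buf[0] == 'o' ? 0 : 1;
}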
Re: (Score:2)
I don't get it. Proprietary software has all sorts of serious vulnerabilities. Why is it that when a vulnerability is found in FOSS, you people all come out and mock it while ignoring all the incompetence of proprietary software?
OP's comments are worthless because they cherry-pick a specific example to make a point about a general category.
Your comments are equally silly.
Dude, man that Big Mac was awwwwefulll... Mc Donald's blows...
Why is it that "you people" all come out and mock it while ignoring the equally awful food served at Burger King?
As if the commenter had some kind of duty to enumerate their disposition toward everything else just to be "fair".
FOSS *is* more secure, and that's true even with the occasional vulnerability.
This is a worthless generalization that may be true or false depending on quality of s
Re: (Score:2)
Not necessarily: to get to this point, you need two things: development quality and auditing quality. The first governs how many bugs are created, the second how many are discovered. The second is what you get plenty of in the open source world, presumably. But you assume that an open source developer is just as good as a closed source developer. That might not necessarily be true.
Re: (Score:2)
That is because all these people mocking FOSS have absolutely no clue, but try to elevate themselves by pretending to be smart. If they knew how abysmally bad many/most commercial closed software products are (I have seen a few samples), they would think differently.
Re:Open source code is open for everyone (Score:4, Informative)
FOSS *is* more secure, and that's true even with the occasional vulnerability.
Loooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooool.
It's true *ESPECIALLY* with the occasional vulnerability, because that's a vulnerability that's been found, publicised and fixed, unlike in the proprietary shit where the vulnerability will be found by a limited group of people and kept secret so they can use it.
Re: (Score:2)
FOSS *is* more secure, and that's true even with the occasional vulnerability.
Loooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooool.
It's true *ESPECIALLY* with the occasional vulnerability, because that's a vulnerability that's been found, publicised and fixed, unlike in the proprietary shit where the vulnerability will be found by a limited group of people and kept secret so they can use it.
Oh, you mean those nice folks over in Eastern Europe?
Re:Open source code is open for everyone (Score:4, Informative)
FOSS *is* more secure, and that's true even with the occasional vulnerability.
Loooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooool.
It's true *ESPECIALLY* with the occasional vulnerability, because that's a vulnerability that's been found, publicised and fixed, unlike in the proprietary shit where the vulnerability will be found by a limited group of people and kept secret so they can use it.
Oh, you mean those nice folks over in Eastern Europe?
and the intelligence network of the 5 main english speaking nations...
Re: (Score:2)
When a FOSS hole is found... it is a hole... but not yet being exploited.
Where do you get the idea that when a FOSS hole is found nobody is exploiting it? How do you even know when one is found? You wait for somebody to tell you about it and assume that nothing is ever found by anybody until then?
Re: (Score:2, Informative)
You tried. Lots of people died, but the Union won the war. Trying again won't turn out better for you; you'll all die and your communities will be burnt to the ground.
The Civil War was a real thing. Us Americans are still willing to fight for our nation if you want to go that route again.
Re:Open source code is open for everyone (Score:5, Insightful)
So long as we're writing in C, this kind of thing (buffer overflows in particular) will probably continue.
(Lest I start a flame-war: C is awesome in its way, but more than almost any other language, it really does make it easy to miss things like this.)
Re:Open source code is open for everyone (Score:5, Insightful)
Re: Open source code is open for everyone (Score:2)
Because C is cool and Java and flash suck. Get with the program
Re: (Score:2)
Re: (Score:2)
You mean JVMs? They're generally written in C++. No, this doesn't entitle you to grin smugly.
Although it's essentially impossible for a well-intentioned Java program to have a buffer-overflow vulnerability, the JVM itself is full of security bugs, so bytecode from an untrusted source generally shouldn't be executed.
To put that another way: the Java language has proven to be successful in preventing things like buffer-overflow attacks, but the JVM has proven unsuccessful in providing a secure sandbox in whic
Re: (Score:3)
Similarly, people who think Java (or C#, or JavaScript) will fix the problem of memory leaks are probably leaking memory all over the place. Recently I've been fixing a bunch of memory leaks in JavaScript: if you attach something to the DOM, make sure you have a plan for getting it off, otherwise you're probably leaking.
Re:Open source code is open for everyone (Score:5, Insightful)
Managed languages (like Java and C#) give you a "secure-by-default" memory and execution model that's a lot harder to accidentally mess up.
If you think managed languages will prevent you from leaving security vulnerabilities, you are either not writing significant server software, or your software has vulnerabilities.
.Net is especially bad in this regard, not because C# is inherently more insecure, but because the community applauds and encourages ignorance, and even makes people feel bad for knowing things. See this presentation for an example [youtube.com]. Notice (for example) his micro-aggressions against people who understand garbage collection. The implication is that you don't need to think about it, C# will take care of memory... which, if you take that seriously, means you'll be leaking crap all over the place and someone like me will have to come clean it up for you.
The hardest security problems to solve aren't the overflows, it's the features given to users. Think of VB macro viruses, that spread wildly in a managed language. Wordpress is another example of software written in a managed language with tons of exploits.
There are so many examples of exploits in managed systems that it's a display of ignorance to claim otherwise.
Re: (Score:2)
Managed languages eliminate C/C++'s largest (and most critical) attack surface.
Do they? Do you have data to back this up, or are you just guessing? Because from where I'm sitting, it looks a lot like the hardest security problems are the features you expose to users.
Something we can probably both agree on is that there's no substitute for knowing how things work.
True, we do agree on this point. Although you contradict yourself at the end of the paragraph and try to come up with a reasonable substitute. When someone says "however" immediately after they say they agree with you........that's a strong sign they don't agree with you.
Re: (Score:2)
Most are language-independent.... no surprise to see CWE-89 (SQL injection) and CWE-78 (command line injection) in there, as well as the slew of crypto/authN/authZ-related stuff. But where are the language-dependent bugs coming from? If you drill down on the code examples for CWE-120, -131, -134, and -676, you'll see C and C++ are a recurring theme.
Good then we're agreed, buffer overflows are not the most common security vuln.
All we need now is for you to realize that, if someone thinks the language means they don't need to worry about security, then their code will be much more vulnerable, even if they write in Java. Once you realize that, then we will be completely agreed.
Re: (Score:2)
In the cases you cite, you can get the same safety in C++, and you really should.
First, you use some discipline with pointers. Owning pointers are either unique_ptr or shared_ptr, which eliminates most memory leaks (not those in a circular structure, though), and makes sure that memory is neither double-freed nor used after it is freed.
Second, you never use C-style arrays. Use string and vector instead, and you've elimina
Re: Open source code is open for everyone (Score:2)
C is like a powerful table saw. If you don't practice safety and know what you are doing, you lose a limb. Powerful, but not everyone should play with one.
Re: (Score:2)
C is like a powerful table saw. If you don't practice safety and know what you are doing, you lose a limb. Powerful, but not everyone should play with one.
Table saws have safety features that are not perfect but at least make it less likely to lose a limb. One could easily define a subset of C that also would make it far less accident-prone. Converting existing code to this subset would be painful but healthy.
Re: (Score:2)
Yes! Probably like Java.
Re: (Score:2)
Re: (Score:2)
Because none of those systems let them play Murdering Hoes In Defense of Ethics in Journalism at the highest frame rate.
I'm actually trying to offload as many tasks as I can to network-connected microcontrollers using Harvard architecture. Given the importance of networking, it might make sense to move the whole application API to a dedicated Harvard chip. There is really no reason for things like gethostbyname to be frequently updated. And then the network processor can have a direct connection to the ethe
Re: (Score:2)
You can call that same function from within most other languages even without realizing you're doing it.
It may be true that the vulnerable functions are called from other languages as well, but that does not necessarily mean those languages are also vulnerable. They may do sufficient memory management and/or parameter sanitization to avoid the vulnerability.
Re: (Score:2)
Does that affect your day-to-day work?
If you really want Java-written-in-Java, take a look at Jikes RVM.
Re: (Score:2)
I'm not terribly familiar with this particular exploit, but from the summary it looks as though it might have been avoided if they'd used automated reference-counting like C++ smart pointers.
Re: (Score:2)
Smart pointers are helpful though. Might've prevented this particular exploit.
Beware. Here Be Dragons. (Score:2)
Smart pointers are helpful though. Might've prevented this particular exploit.
Depends on what type of smart pointer we are talking about (as many people have shot themselves in the foot with them). There is a reason why std::auto_ptr is deprecated.
One really, but really, has to understand a smart pointer's lifecycle before using it to solve problems that can typically be solved with careful programming practices, tests and liberal usage of tools like valgrind.
Re: (Score:2)
auto_ptr actually does what it's supposed to do, in terms of memory management. It doesn't play well with other parts of the language. It's deprecated because unique_ptr is at least as good in all ways, and better in some.
You don't really need to understand smart pointer lifecycles in detail; you have to know how to use them and stick to it. Raw pointers never own, so you never create an object using a raw pointer and you never delete a raw pointer. Raw pointers may be passed from function to functio
Re: (Score:2)
You don't really need to understand smart pointer lifecycles in detail; you have to know how to use them and stick to it.
Knowing how to use smart pointers implies understanding their lifecycles. Same with raw pointers (which pretty much do not have a lifecycle.)
Re:Open source code is open for everyone (Score:5, Informative)
Current glibc release is 2.20. That's three releases without the bug already.
Nothing to see here, move along.
Re: (Score:1)
Re: (Score:3)
Your assumption is only one person did an analysis.
Do you have any idea how many people have combed over glibc and either reported or exploited issues found?
Hell, read the article - THE PROBLEM WAS PATCHED before he found it. What we're talking about is some old distros are still distributing that code unpatched, and that's the real problem.
We can all jump to conclusions but, personally, I doubt the NSA have anywhere near the capabilities they (and you) suggest. These people are in the art of deception.