New Security Group Hedges Bets And Builds Hedges 121
7card writes: "ok i was just doing my morning surfing and i found
this article, which may be of some interest. It looks like the world has another club of security experts with the goal of security through obscurity. some of the members include Microsoft, Oracle, and Cisco." Reader Junin points to this CNET story as well.
Sovereignty and Corporate Intelligence Agencies (Score:5)
The real problem with the concept of 'private networks of information' is that they tend to grow, especially with the impetus this one has. It's in their best interest to keep as much of the knowledge they gather classified for as long as they can. If there is the perception that this kind of limited sharing is effective and one has to pay to become a member, there will be leaps and bounds of growth to this organization. Unlike the federal government, there are no laws in place to protect average citizens from this type of secrecy.
What's even more disturbing is the kind of actions this organization will eventually take to protect its secrets. At first it will be legal actions. They will sue to prevent people from releasing important security information. Then the proliferation of 'inter-agency' controls will increase, say giving back-doors to certain law enforcement agencies into certain applications. I'm certain this already goes on to some extent, but this gives tech companies a reason for this to become common practice.
How long is it before this kind of alliance has the ability to conduct its own 'Security' raids and anti-hacker activities through its contacts in law enforcement? Not too damn long, if I'm not mistaken.
What laws are in place to keep a corporation from harassing and causing problems for an individual? Abso-fricken-lutely none. American business law is written to favor a business or corporation over an individual every single time.
Congratulations! You just lost! (Score:5)
Reminds me of the US raid on Tehran. The special-warfare troopers were out in the middle of the desert, in a spot so remote nobody would be there looking for them... and they got discovered by a busload of people who stumbled across the area by virtue of getting lost.
Moral of the story: security through obscurity doesn't work. It's a numbers game, a calculated risk, and the risk involved is far higher than other more proactive forms of security.
Would you be willing to do all your online banking if your bank told you, "We don't bother to encrypt your financial records or firewall our system from malicious hackers--but don't worry! All the data is kept on a URL so obscure nobody will ever come across it!"
At Americanwicca.com, we make sure that our site is utterly secure by refusing to release details.
Speaking as a cryptographic engineer, I find this amazingly hilarious.
To this utter naivete, your typical malicious attacker would respond with:
... In other words, you're being just plain silly.
What's more than this is you're also lying.
At Americanwicca.com, we make sure that our site is utterly secure by refusing to release details.
No, you make sure your site is secure by locking down ports 21 and 23 for starters (ftp and telnet). I know this because I just tried to telnet into them to see if they were open. So if security-through-obscurity is so darned good, why do you need to take the additional step of locking down your ports?
The answer is: because security through obscurity is a failed policy. Always has been, always will be. Locking down ports, on the other hand, is a smart and proactive policy.
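The "just try the ports" check the poster describes is easy to sketch. A minimal Python version follows, demoed against a throwaway local listener we control rather than somebody else's box (scanning other people's machines uninvited is rude at best):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connect to host:port succeeds, i.e. something is listening."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Demo: open a listener on an OS-chosen free port, then probe it.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
result = port_open("127.0.0.1", port)
print(result)                         # True: something is listening there
listener.close()
```

This is exactly the information a manual telnet attempt gives you, minus the typing.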
Could this approach work? (Score:1)
Re:So What? Security through Obscurity works. (Score:1)
First of all, logically extrapolating, there's some substance to what you say: if someone *doesn't* know about the security flaws of a system, they can't deliberately exploit them. However, your "hidden treasure" analogy isn't valid. For hidden treasure to be a useful mechanism, it has to stay hidden. The best way to keep it hidden is to put it somewhere that people are unlikely to look, like in a hole in an anonymous sand dune on a faraway island.
A modern OS, application platform or Web data centre isn't a desert island with much sand and few shovels. It's a series of interconnected systems based on a limited (actual) range of semantics. And it's faced by people who understand black and white box testing, boundary cases of data and so on -- the equivalent of seismic sensors, metal detectors, ground-penetrating radar and the like. And they all have a pretty good idea of the contours of the landscape and where different systems meet, and might not quite tessellate. They also understand where sysadmins are lazy, or tend to be less able.
All this means a better analogy is a bank. Lots of people walk in and out, and everyone knows where the treasure is. It's likely easy to find out how it's protected, too. But it's still hard to get at, because of well designed systems, monitoring and response procedures.
By all means take your pick: dig a hole in your back garden (or hell, make it tricky, use someone else's garden) or put your money in the bank. But don't sell those two options as having the same security level in the real world.
Sometimes security through obscurity is good. (Score:2)
Just look at the game Quake when it came out and after the source was released. Yes, there was cheating when the source was closed, but there is a lot more cheating now that the source is open. There are ways to solve the problems when the source is open, but they would have been inefficient back in the days when Quake came out. Security is a tradeoff against performance: the tighter the security, the lower the performance of the product. In most products this is not a problem, but in a game that's designed to work over a modem, performance is critical.
Let's take an example from Quake. Quake sends extra data to a client for prediction purposes, such as another player's location even though the user is not supposed to see that player yet. That way, if the client doesn't get a packet update with the other player's location in time, the client side can predict where that player will be and whether he is about to pop out from a corner and become visible. Sure, this data can be withheld from the user to prevent cheating, but then the performance of the game drops critically. The users have to be kept unaware that this information is available to them.
So what are some possible solutions? Well, not sending the data to the user is the best solution, but then this will decrease performance. The user's framerate and updates on what other players were doing would effectively be limited by their ping time or modem speed. What about encrypting the data? Well, somewhere on the client's machine the user has to decrypt the data and perform calculations on it, so the data is still available. How about hiding the data inside of a binary that the user does not have the source for, and therefore does not know where to look for the data? This will prevent the user from finding the data while still allowing the client program access to it, although only until the user gets smart and finds where the data is hidden. This is a tradeoff between security and performance.
So what is the ultimate security protection? Well just have keystrokes sent to a server, then have the server render an image, a "screenshot" of the game, and then send it to the users machine to be drawn. This way no information is sent to the user except for an image which only the user can interpret. Is this very secure? Yes, but the level of performance would be horribly low. Somewhere a tradeoff has to be made between what information a user is allowed to see to increase performance and what information stays hidden.
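The prediction scheme described above -- ship every player's last known position and velocity, and extrapolate between packets -- can be sketched in a few lines. Field names here are hypothetical illustration, not Quake's actual netcode; the point is that the extrapolation only works because the opponent's state is sitting on the client, which is exactly what a wallhack reads:

```python
from dataclasses import dataclass

@dataclass
class PlayerState:
    x: float
    y: float
    vx: float  # velocity, units per second
    vy: float

def predict(state: PlayerState, dt: float) -> PlayerState:
    """Dead-reckon a player's position dt seconds past the last server update."""
    return PlayerState(state.x + state.vx * dt,
                       state.y + state.vy * dt,
                       state.vx, state.vy)

# Last update said the opponent was at (100, 50) moving +10/s along x.
last = PlayerState(x=100.0, y=50.0, vx=10.0, vy=0.0)
# No packet for a quarter second: guess where he is now.
guessed = predict(last, 0.25)
print(guessed.x, guessed.y)  # 102.5 50.0
```

A cheat doesn't need to break anything; it just renders `last` even when the opponent is behind a wall.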
That's just my splurge of the day, wonder if it makes sense...
Re:Ah, yes. (Score:1)
Maybe I should get fitted for an eye patch? :)
James - Arrrrrrrrrrrr!
Kick ass! The rest of us are screwed! (Score:2)
This quote is really the best part of the article. Does anyone -not- see the hypocrisy?
"We have to put down our differences and our competitiveness and share more if we're going to prosper together," Mr. Copeland said. "If you're going to wall yourself off and not share, then you're going to be hurting. This will be a venue and a forum where we can start to build a level of trust."
Um, aren't these companies going to wall themselves off and not share with the rest of us?
Re:So What? Security through Obscurity works. (Score:1)
And therein lies the rub. The basic presumption is if they don't know, they can't find out. Except, of course, that they can find out. And if they do, you won't know about it until after something happens. Screw external attacks, look internally. Are you 100% certain that everyone who knows those deep, dark secrets is completely and totally trustworthy??
At Americanwicca.com, we make sure that our site is utterly secure by refusing to release details.
Shouldn't they be running a more recent version of Apache?
It is a helpful element in any security arrangement, ever since Blackbeard buried his treasure in the Caribbean.
Blackbeard had other deterrents to those who knew his secrets, and might be tempted to steal 'is loot. Your basic shooting, stabbing, beating, keelhauling, or walking the gangplank all have their place. But for the most part, we can't use any of those tools...sigh
James
non-profit R&D (Score:1)
I'd like to set up a non-profit company to develop my for-profit products, and then write off all my R&D as contributions to a non-profit organization.
any connection? (Score:1)
Presumably, a security hole, classified as a bug, becomes property of the consortium and a value added commodity.
This gives rise to a potentially revolutionary revenue stream for microsoft and friends...
Re:This is plainly antitrust! (Score:1)
In our nation's (US's) history, there have been four presidents assasinated. Two were saints. Two were mediocre. But four out of soon to be forty-three is a miniscule percentage. It's barely over 9%. No wonder our government and our leaders have gotten out of touch with the American people. They're not beholden to us anymore, because they no longer have the fear of the masses beaten into them. That must change.
There are four more days before Clinton leaves office. He must be shot today. And then we have to shoot Bush when we're done with Clinton. There must be blood on the floor tonight, if our government will ever learn to tread lightly on the liberties of man.
This information-cartel outrage is only the latest volley in an ongoing war we have been fighting since we decapitated Louis XVI and drank from his blood at the guillotine (ingesting his divine sovereign mandate and becoming a true sovereign democracy). Our leaders are out of touch and out of their minds. The mail isn't being delivered anymore. The snake problem is at a fevered pitch. A grown man can't even walk to the corner store without getting passing stares from gawking subversives. Is that the kind of world we want to leave for our children?
The information cartel must be killed tonight. Tonight. Tomorrow, we'll go after the guys who planned it. They must be brought to a pointy reckoning. Shudder them.
Let the rhetoric begin (Score:5)
First, the only way the Open Source security philosophy really works is if people ACTUALLY (as opposed to theoretically) sit down and read the code for security flaws in its entirety. I would argue that in a great many cases, no one even approaches this level. Because the Open Source community has very little centralization of effort, there is going to be a great deal of redundancy. In other words, even if you believe that 1000 security "experts" will spend some time reviewing the code, they may well be looking at the same piece of code (which, in and of itself, can be a good thing), while leaving other pieces of code largely unscrutinized. Furthermore, I suspect that very few people truly give the code the time of day.
Second, while Open Source makes it easier for white hats to find flaws, it also makes it easier for blackhats to find and exploit flaws. This is particularly relevant if, as I point out, the code is not getting the right kind of attention from white hats.
Third, Closed Source can make it HARDER and DULLER to find flaws. Many people seem to assume that just because obscure products have been cracked, there is absolutely no redeeming value to a product being closed. In other words, if at any given moment we could somehow have two parallel universes that would allow you to have the same piece of code (say, the latest stable linux kernel with all patches applied) as Open Source and Closed Source at the same time, without knowledge leaking either way, most reasonable people would prefer the Closed Source option.
Fourth, security flaws are found all the time in Open Source code projects. A lot of them are presumably stable pieces of code that have already been put into production. These systems get hacked REGULARLY. Now this isn't to say the same doesn't apply to closed source, but you can't ignore the problem either way.
Fifth, many people constantly bring up the point "well if you just patch regularly...". While I agree that everyone SHOULD do this if possible, it's not always possible, and it's frequently not economical. If there is a piece of closed source code that hasn't had any published (or suspected) security flaws in 4 years of existence, while the competing Open Source alternatives have had many (constantly forcing their admins to patch), then that's a real issue for any competent admin.
Sixth, it's entirely possible for a Closed Source company to do a full internal security audit of their code. It may not be perfect, but it's better than nothing. Although I fully realize that hardly anyone does this, it'd be a mistake to ignore this as an option. If a company can get _most_ of the (presumed) benefits of an Open Source security audit without the corresponding exposure of their source code to blackhats (or at least less "risk" of that), then that might be very good indeed.
In summation, this is not nearly as black and white as people portray it. It comes down to numbers and many other unquantifiable elements. A simple philosophy is not a one-time cure-all. For instance, as I have alluded to, if there are very few white hats reviewing the code (say 50) and those white hats are mostly replicating each other's work (say 15% efficiency), while any black hat with proper monetary motivation can put the effort into cracking easy-to-read source code, then you might well be worse off. The same goes the other way around: if a software company, as all too many do, rushes its product out with little to no review and depends entirely on obscurity, it might well use some routines that are well-known security problems and can be easily searched for....
The bottom line is that it is just as stupid to assume your carelessness will be automatically covered by "peer review" (or "Open Source") as it is to assume it will be covered by "obscurity".
Re:Umm... (Score:1)
Obscurity != Secrecy. (Score:2)
If I pick 17 as one of my RSA primes, that doesn't change the algorithm. Okay, so I'm picking a stupid prime, but the algorithm is unchanged. If I pick a 300-decimal-digit prime, that doesn't change the algorithm, either.
"Security through obscurity" means "as long as I don't tell you how it works, then the system is secure".
Real security is "I'll tell you how it works, I'll tell you about all its known weaknesses, and I'll help you understand it inside and out--and it'll still work within its specified operational parameters."
In the case of RSA, part of its specified operational parameters is that the private part of the keypair is kept secret.
Where's the obscurity?
(Sidebar: cracking RSA does not rely on the private prime being obscure. For a very long time it was conjectured that breaking RSA was dependent upon factoring an extremely large composite number into two primes, but the recent attacks against PKCS1, etc., show that it's possible to stage cryptanalytic attacks against RSA that don't involve factorization.
RSA is based on three conjectures. One, that P!=NP. Two, that factorization is NP-complete. Three, that factorization is the only way to break RSA. Neither of the first two conjectures has been proven, and the third conjecture has been proven false.
That said, RSA is still a well-trusted algorithm. The non-factorization attacks are well-known and fairly easy to avoid.)
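To make the obscurity-versus-secrecy distinction concrete, here is textbook RSA with deliberately tiny primes (the standard classroom pair 61 and 53). Everything about the algorithm is public; the only secrets are the primes and the private exponent. This is purely illustrative and useless for real security:

```python
# Textbook RSA keygen with toy primes -- algorithm public, key secret.
p, q = 61, 53              # the secret primes (real ones are hundreds of digits)
n = p * q                  # 3233: the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent: modular inverse (Python 3.8+)

m = 65                     # a message, encoded as a number < n
c = pow(m, e, n)           # encrypt with the PUBLIC key
assert pow(c, d, n) == m   # decrypt with the PRIVATE key; round-trips
print(c)                   # 2790
```

Publishing this code costs the scheme nothing; publishing `p`, `q`, or `d` destroys it. That is secrecy of a key, not obscurity of a design.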
Re: So What? Security through Obscurity works. (Score:1)
A lot is known simply from a cursory inspection--site hosted by Hypermart [hypermart.net] on elderly FreeBSD, running software that anybody can buy from e-classifieds [e-classifieds.net] (and privately audit for security holes), etc.
Your site (assuming you in fact have anything to do with it) of course isn't utterly secure.
But people who rely on security through obscurity to protect their networks are simply waiting for trouble to happen.
Because you're childish and "elite"?
Re:The only way to true security (Score:1)
Join the frat... (Score:1)
Actually, what went through my mind first when I read, "Other technology firms will be able to join the alliance for $5,000 a year" was, "Gee, it's just like a fraternity. How nice it must be to pay to have friends."
The second thing that went through my head was, "Guess this is a big boys' club only. $5000 a year to join isn't much if your total assets (or asses) are zillion$, but it effectively puts most small businesses out of the running...and they're probably the ones who need something like this organization the most...if something like this is truly needed."
I don't in principle think it's necessarily a Bad Thing[TM], but even a kitchen implement in the wrong hands...
?!
RSA keys are not purely entropic. (Score:2)
If bit 0 is not set, then the number is evenly divisible by two, and it's not prime. If bit 511 is not set, then it's not a 512-bit prime (it's a 511-bit, or what-have-you).
Right there I've predicted two bits, out of 512. With more advanced mathematical techniques you can discover more properties about the binary representation of prime numbers, which helps you winnow out even more possibilities.
It's been widely conjectured that a 1024-bit RSA key is roughly commensurate to about 128 bits of entropy. Of course, distilling entropic properties of asymmetric keys is more black art than formal science, so I generally err on the side of rampant paranoia and guesstimate a 1024-bit RSA key as roughly equal to an 80-bit key. Still plenty good for most purposes, but if you're worried about major governments, 2048-bit keys are appropriate.
Moral of the story: asymmetric algorithm keys must possess a large degree of entropy to be useful, but the key itself is not one hundred percent random.
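The two predicted bits are easy to verify empirically. A quick sketch follows, using a toy Miller-Rabin generator (nowhere near crypto-grade: Python's `random` is not a CSPRNG) to show that any 512-bit prime necessarily has bit 0 set (else it's even) and bit 511 set (else it isn't 512 bits long):

```python
import random

def is_probable_prime(n, rounds=10):
    """Miller-Rabin probabilistic primality test (toy version)."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):
        if n % small == 0:
            return n == small
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    while True:
        # Force the top and bottom bits, just as real key generators do.
        cand = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(cand):
            return cand

p = random_prime(512)
print(p & 1, p >> 511)   # both 1: bit 0 and bit 511 are always set
```

So of the 512 bit positions, two are fully predictable before you start, and number-theoretic structure shaves off more, which is why the key's nominal length overstates its entropy.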
Re:The only way to true security (Score:1)
It isn't that hard to edit a binary to include a trojan as well.
For example, you could find the part of
If you doubt this I encourage you to take a look at for example fravia's site. (Use google.)
But Obscurity Does Have *some* Value (Score:2)
> them to see if they were open. So if security-through-obscurity is so darned good, why do you need to take the additional step of locking down
> your ports?
Since you mention those two ports, out of curiosity, did the prompt identify the software running on them? (e.g., did the banner on port 21 name the ftp daemon and its version?)
Another simple step to take is to make sure that your web server always returns a 404 error if someone looks for non-existent pages. (You'd be surprised how many web servers don't do this, & cheerfully identify the software running instead.)
The reason I mention this is that I've seen it mentioned in several different places to disable self-identification of server software -- it's trivial to do for most of these applications, & it makes a cracker's job a bit more difficult.
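Banner-grabbing, the thing this advice defends against, is trivial. Here's a sketch against a throwaway local listener whose greeting string is invented for the demo (styled after an old sendmail banner, not taken from any real host):

```python
import socket
import threading

# Throwaway local "mail daemon" whose greeting names its software and
# version unprompted -- the banner text is fabricated for illustration.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve_one():
    conn, _ = srv.accept()
    conn.sendall(b"220 mail.example.com ESMTP Sendmail 8.9.3; ready\r\n")
    conn.close()

t = threading.Thread(target=serve_one)
t.start()

def grab_banner(host, port, timeout=2.0):
    """Connect and read whatever the service volunteers about itself."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        return s.recv(256).decode(errors="replace").strip()

banner = grab_banner("127.0.0.1", port)
print(banner)   # name and version handed over before we typed a thing
t.join()
srv.close()
```

One connect, zero skill, and the attacker knows exactly which CERT advisories to go look up, which is why suppressing the version string is cheap insurance.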
No, taking these measures doesn't mean you can unsubscribe from your favorite security mailing list & still sleep soundly at night. These steps will only slow down the determined cracker -- maybe enough so that you can catch the miscreant in action & foil him.
Geoff
Ad Campaign (Score:1)
Just say "itty-sack"
Re:Announcing an arcane vulnerability is better? (Score:2)
A much better example is that I bought my house lock from Acme, and Acme keeps private how the innards work, not revealing the fact they just figured out that all hairpins in the world work just fine if bent in a certain way.
Re:Let the rhetoric begin (Score:2)
Yes, you have the opportunity to review the code, in addition to potentially reviewing the benefits of other people's security reviews. However, I feel that you, and many others, also overlook the countervailing side effects of it being open--its vastly increased exposure to blackhats through easier spotting of vulnerabilities and easier exploitation of them. In my opinion, if you're not looking at these two elements, the opportunity for peer review and the potential for blackhat review, as counteracting forces, then you're not fully grasping the situation.
In other words, different situations can totally change the balance. In one situation, closed source may be much more appropriate, while in another open source will be. This is a view that is simply not taken by a large part of the community, thus I shall continue making my point.
It's hardly fair to draw conclusions about closed source software based on MS's products and then compare it to a very limited open source kernel; even the apple has more in common with the orange. Though you may disagree, Linux is JUST a kernel and a rather simple one at that (not necessarily a bad thing, but it's relevant to the question), what's more it's highly modularized. These things fundamentally alter any comparison.
Sure, with closed source code you're completely dependent on the developers. However, you're overlooking, or not pointing out, other important factors. For instance, you're highly dependent on the developers anyway (be they open or closed). With, say, an RDBMS, your concerns extend not just to security but also data integrity. So you're trusting them already; you're not entirely self-sufficient, no matter how smart, educated, skilled, or available you are. Furthermore, most closed source products are powered by a market-driven mechanism. Any developer that alienates his consumers for too long with security concerns faces their wrath, and hence, loss of profits. In addition, not all fixes are trivial. Few people have the time, skill, and ability to fix the problem for themselves. If the community cannot or does not fix the problems and you cannot, then you're up shit creek without a paddle. (Which is why the community's efforts are terribly relevant.)
One issue that many people ignore is the importance of the developer that actually writes the code. The original developer has a HUGE impact on the security of the code. If that developer is either malicious or careless, security problems WILL happen--patches are not necessarily adequate (both from a timing and an implementation POV). I would argue that given the "Openness" of Open Source code, you are potentially exposed to the laziness (or maliciousness) of hundreds of developers. Though it is true that many closed source firms pay little attention to security during the release process, closed source firms can also hold their developers more responsible. It's not necessarily always better one way or the other, but it can (and does) come into play.
Yes, but as I have said numerous times, this doesn't necessarily happen. If it doesn't, it can have tremendous negative effects on the effective security of the product. I would argue that it often doesn't at all and when it does the quality is all over the map.
What's more, much like software development in general, 99% of the work is performed by a relatively small group of "experts". (If you don't believe me, try reading bugtraq and other similar forums--you'll generally see the same people.) In other words, I question the "wideness" of the review; if it's not wide, it hardly has an edge on properly reviewed closed source.
Re:This is plainly antitrust! (Score:1)
Re:Congratulations! You just lost! (Score:1)
Port 25 is mail, 21 is ftp, 23 is the default telnet port
Probably meaningless to security (Score:1)
I think this is more of a publicity thing. Over the past year or so there have been some pretty high-profile security problems that have made the software industry look pretty bad ('oh my, the mighty Microsoft got "hacked"'). I think they're just doing this so the general public thinks that they're taking security seriously. I doubt it will really change much about the companies' approach to security fixes.
Of course, that's the technical end of things. I don't approve of the social message that sends out. I don't like the trend that the industry is taking with regard to openness and accountability. It goes back to the thing about how the quality of something goes up when it's being developed under public scrutiny (why I love Debian so much!).
noah
Somewhat flawed logic in posting... (Score:1)
However, having a members-only club with sharing of information doesn't directly relate to security through obscurity. Saying that any closed source or hidden method of security is 'security through obscurity' just because it's closed is a perversion of the term. Many closed systems actually have adequate security that wouldn't be compromised if the system were open.
token effort (Score:1)
Full Disclosure vs. STO (Score:2)
The point of the full disclosure folks is that once a hole is found, it will be exploited by those who know. Therefore it is necessary for everyone to be aware of these holes in order to create counter-measures aimed at closing them. Exposing all security hazards also has the side effect of forcing software houses to release a security patch more quickly. Since no security hole is safe from hackers, it makes no sense to try to hide them from the public, since the public (or at least the malicious) is probably already aware of them.
The other side of the coin says that security holes should not be announced, for the express reason of preventing massive exploitation of them. This line of reasoning has some solid evidence behind it. *Real* hackers with the ability to find these holes are few in number, but the script kiddies with virtually no skills whatsoever are legion. It is arguable that the damage caused by a few 'in the know' is far outdone by the damage of the kiddies with their point-and-click hacking devices. Likewise, by the time the exploits are known to places like Bugtraq and the various software houses, the hole has pretty much been well exploited by the discoverers. Hiding the exploits from the general public then seems like quite the pragmatic thing to do.
So which is it? Disclose every exploit openly or hide them until they are fixed? I don't know.
Dancin Santa
Count me in. (Score:2)
Or maybe I'm just nuts... either way, it'd be damn funny (see ironic) to see /. & co. pull this off.
Re:So What? Security through Obscurity works. (Score:2)
Not unless script kiddies obtained some sort of "port scanning" software, of course.
If I know I'm being trolled, and I respond anyway, does it make me more or less of an idiot?
Re:Full Disclosure vs. STO (Score:2)
If a security hole is discovered in a piece of software, I guarantee you it will get fixed faster if the whole world knows it's there. Companies have no incentive to fix holes unless they receive pressure from outside. Your argument doesn't address that. This agreement doesn't address that.
This agreement makes it easier for these corporations to sidestep public criticism of their insecure software. Can we expect these companies to act outside of their best interests and expend invaluable resources to fix software that no one else knows is broken? Of course NOT!!
Fixing security holes takes programmers.
Programmers take money.
Corporations like to make money, not spend it.
Therefore, security holes will not get fixed.
It's that simple.
Agreed. (Score:1)
Then again, I suppose it isn't good business to admit that your primary email proggy (Outlook) is a bored script kiddie's wet dream.
What I want to know is how long do they think they can keep this info from the press? Leaks, Bad!
Re:Congratulations! You just lost! (Score:1)
4. That sounds like a challenge! You're on.
--
Pseudo-openess won't work... (Score:2)
What will likely happen is that every bug that comes up will be seriously considered for political and economic fallout, and they'll only allow the information that's relatively safe to them to get out to this group. So, only the truly innocuous bugs will get dealt with and the big nasty ones will still be out there.
And you know where the nasty bugs will get discussed? Bugtraq!
I'd save my money if I were Cisco or Oracle or what have you. The only possible value in this is getting some dirty laundry on your competitors, and that's only if they're dumb enough to tell you in the first place (and if they are that dumb they'll be dead in a couple years anyhow).
---
Re:Count me in. (Score:1)
Re:The only way to true security (Score:2)
The funny thing is that they are trying to emulate the spirit of open source while still remaining closed. They want to "share" information that could be of great help to them, but they don't want to share that information with the public at large. Something about that just strikes me wrong. Their idea is that they are protecting us (the public) and them from more debilitating attacks, but isn't this entire idea flawed? As the poster I am responding to said, security through obscurity just doesn't seem to work.
Granted, open source isn't perfect. But it seems to do the job pretty well. And apparently the businesses involved in the creation of this new "security" group are aware that an open policy can do some good. But their idea that only they (as in the special multi-national interests/corps) should have this "open" information seems kind of a deterrent to the idea of "open" information.
Opening up your information to a bunch of like-minded individuals in similar situations probably isn't going to solve underlying problems any more quickly. It's the fact that such hugely diverse people can look at the same problem from so many angles that lets open source projects solve security problems quickly (when they need to). Letting someone take a look at something with a fresh and possibly completely new perspective is always good for any project.
But, another way of looking at this is that they are going out of their way to adopt as many open source ideas as they can without truly admitting that open source ideas work. Maybe eventually someone there will get a clue that if opening things up amongst the companies was good, perhaps opening up further would be better. I don't really see this as a conspiracy. But I think it's kind of funny. Like one of the AC's in this thread said, they've set up their own little closed source version of the OSS community. And the AC is right, it is kind of cute, in an odd way.
Re:Bugs ? (Score:1)
Re:Full Disclosure vs. STO (Score:1)
My thought: have a consortium analogous to this, but include every sysadmin for every major and minor company. (Perhaps even mailing lists for specific issues... e.g. HTTP vulnerabilities get mailed to the Apache group, MS IIS team, etc., but not to, for instance, Intel.)
Of course, with this many people involved, the news will leak out anyway. But it will at least give the people who are going to fix the problem an insider's edge -- so that, as the story hits the front page of the newspaper, the OSS guys are already posting a patch.
It's worth a try, anyway.
Re:I'll toss in $5 (Score:1)
Re:The first attack they can share... (Score:1)
I started using it recently at work, and it's muucchh better than previous releases. Still not quite ready for prime time, but damn close.
Oh please (Score:2)
Link (without clutter) (Score:2)
My, the original link really made my eyes hurt, even with Junkbuster [junkbuster.com].
Announcing an arcane vulnerability is better? (Score:1)
Re:The only way to true security (Score:1)
Actually Microsoft, Sun and many other closed source vendors do have their code verified by 3rd Parties in order to gain ITSEC [itsec.gov.uk] (now CC) evaluations.
Re:Congratulations! You just lost! (Score:1)
But really, your point is well taken about the poster being one with a grudge, directing you to the target of the grudge. On the other hand, this illustrates where security through obscurity does work: misdirection. Sure, the first time someone looks at your hand instead of what you are pointing at, you are a sitting duck, but all the time it does work is time that no one scrutinizes the real security that's in the hat.
+1 Mixed Metaphors
Boss of nothin. Big deal.
Son, go get daddy's hard plastic eyes.
Re:Let the rhetoric begin (Score:2)
What this probably protects from the most is security holes introduced intentionally by the authors, whether that is sanctioned by the vendor or not. Take the case of so many systems with backdoor passwords. Open Source exposes this, if someone were to be stupid enough to do it.
Everything is easier. But whether the proportion favors white hats more than black hats depends on how many of them are looking. Consider that most users of exploits do so with exploit tools they download, as opposed to discover and code for themselves. I do suspect the whitehats way outnumber the blackhats.
I would not make that choice. Of course exploits are harder to find with closed source. But this just results in a greater time delay before they are discovered. The number of blackhats is reduced somewhat while the number of whitehats is reduced radically, with closed source.
Is it being ignored? I don't think so.
This seems to me to be a good argument for closed source. There is a time dampening effect by closed source that makes it possible for admins to avoid doing the patching. But I've found that with a lot of other good practices, this isn't that much of a difference.
It's also entirely possible for a Closed Source company to not do an audit at all, or do a bad one, or hire an untrustworthy auditor to do it. Open Source gives the end user the option to choose from available audits or hire their own. Granted, the choices are few, but in theory, open does open this possibility.
I generally have distrust for commercial software. The primary reason is the time pressure to "get it out the door," which ends up sacrificing things that need to be done but get put off (often forever) in order to meet the deadline that marketing has already established.
I would agree. Being open in no way makes something more secure. It provides the opportunity. The opportunity still has to be taken advantage of, and that isn't always done. And there are some totally lousy programs out there not even worth spending the time to audit.
Re:This is plainly antitrust! (Score:1)
Woopdi -Do!!! (Score:1)
If you're really looking for a corporate group to fear, try the World Economic Forum. A thousand of the most powerful CEOs, bankers, politicians, media moguls, etc. meet in Davos, Switzerland every year to decide global economic policy, i.e. how to increase the flow of money from the lower and middle class to the upper class.
And as for the poster who suggested that Bill Clinton should be shot over this: why don't you try moving to the Congo? The president was just shot and killed there today. Let us know in a year or two whether you enjoy living in a country where policy is made with a gun rather than through free and open debate.
Re:Sovreignty and Corporate Intelligence Agencies (Score:1)
Add to this that there are already more private security personnel than police in the USA.
Microsoft and Oracle together? (Score:1)
Re:Let the rhetoric begin (Score:2)
Maybe, maybe not. That's an important assumption that the open source community makes and tends to pay little attention to. I'm not saying it IS, but it's quite possible that there is more incentive to be a blackhat (ethics aside) than a whitehat on certain projects. Is there any real empirical evidence one way or the other? Can you say with certainty that the black hat community has discovered a significant number of flaws long before the white hat community with closed source software? It seems to me that they're quite rare in both cases, at least in terms of what is publicised. And if it's not publicised, we have a hard time knowing.
All of this comes down to numbers, which none of us really has. If it's a matter of mere delay and open source is so superior, then we'd expect there to be a clear difference empirically. I'd argue that this difference is not nearly as wide as many suggest, and often times is less than favorable for Open Source.
Anyways, I don't have time to finish this argument off; I've got a bunch of work to do. Maybe later...
Americanwicca.com has been advised. (Score:2)
No, I'll call you ragingly paranoid--which is good, that's a compliment, I like that.
At any rate, I sent off mail to the people over at Americanwicca.com, telling them that they might be the target of malicious attacks as the result of that Slashdot post. So we've given them some warning, which is about all we can do in this situation.
Hedge Shortcuts (Score:1)
Translate to computerese to fit your desires.
Re:Secret Only If They Find Flaw First (Score:2)
Good point. Even if these organizations do attempt to close ranks, it only takes one employee with access to the reports and willingness to leak them to ensure that outside parties "discover" the same holes that the club members do.
This is a non-event (Score:1)
(Mr. Jones) uh... uh... what's port 23?
(Mr. Smith) (inaudible) Oh that's the Frabazz port.
(Mr. Gates) When I was writing DOS...(inaudible) any ports at all.
(Mr. Wilson) um...fscken kiddies
And so on.
What will come of this is blathersgate. These fellows will have a marvelous time pulling one another's puds and releasing statements. Nothing productive will emerge.
I'll toss in $5 (Score:4)
So, I pledge $5/year for this endeavour.
The first attack they can share... (Score:1)
Some evil haxor has hacked MSNBC so that it won't work with netscape.
Old Boy's Club (Score:1)
The only way to true security (Score:1)
It has been proven time and time again in life and with computers (especially NT) that trying to hide security holes just doesn't work.
So, keep your code open and let others find its flaws.
Re:So What? Security through Obscurity works. (Score:1)
I sure wouldn't, knowing that anyone who decides to get information about my website would be able to crack it.
Let's compare this to a game of hide and seek.
If you have played hide and seek, you know that no matter where you hide, you will be found. Unless you are in a place that the seeker cannot get access to, you will eventually be found.
Re:So What? Security through Obscurity works. (Score:2)
Yes... in your example, in the physical world, if you can't find it you can't have it.
That's also its weakness. The main point is that there is no way that you would even know. As with most security holes in closed programs, no one knew... or really had the capability to know, until that one person found it.
It may have taken a while but things like "Netscape engineers are weenies" do get found.
Information cartel... BS (Score:2)
Let's take a look at this page:
http://www.w3.org/Consortium/Prospectus/Joining [w3.org]
Hmm, looks like joining W3C costs 50 grand a year for a company, nearly ten times the amount proposed by this security group. Non-profit/educational access costs $5k annually, the same price as this security group. How come nobody accuses W3C of being an "information cartel"? Simple... it's not, and neither is this group. $5k per year is nothing for a company that is interested in security issues, even a small company.
I'm not sure I understand... (Score:4)
Even the mighty Linux community sometimes keeps vulnerabilities secret until a fix is released.
What makes this security through obscurity rather than good security practices?
Won't some strong virile slashbot please explain it to timid pert little me?
--Shoeboy
Re:So What? Security through Obscurity works. (Score:2)
I am being fair and unbiased. Security through obscurity never works.
Read a few books on cryptography, and then come back with a clue.
Somebody as naive as you should NOT be using the ship name of an AI several billion times smarter.
If Banks were dead, he would be turning over in his grave.
PS. Nice Troll.
They should just use L0pht (Score:1)
Hidden truth (Score:1)
Imagine what it would be like if the whole Outlook-executing-any-old-VBScript mess had been kept quiet (not that I have to worry, with the OS I run). The public should know about these things. This just gives corporations another way to cover shit up... next they'll be forming their own government.
Ah, yes. (Score:1)
Blackbeard had other deterrents to those who knew his secrets and might be tempted to steal 'is loot. Your basic shooting, stabbing, beating, keelhauling, or walking the gang plank all have their place.
--
Re:I'm not sure I understand... (Score:3)
Trust me. I worked for a company that has been featured on BugTraq once or twice. If not for BugTraq, our "fixes" of the vulnerabilities would have been limited to the simple work-arounds that our clients wanted. The holes would NOT have been closed fully; it's just too much work.
Re:This is plainly antitrust! (Score:1)
Most corps have caught on. Work with and through the government. Don't work against them. Maybe MS has finally adopted this idea too. With Gates out of the real lead, perhaps we will see them making some more smart moves.
But seriously, this probably isn't that big of a deal. Just an attempt to 'open up' on what they consider industry secrets with others within the industry. Unless they start colluding on prices (price fixing) and features (you offer this, I offer that and they have to purchase both), I don't think the government could intervene even if they wanted to. But I could be wrong on that. My legal interpretations occasionally are questionable.
Re:Congratulations! You just lost! (Score:1)
*cough*ftp*cough*telnet*
I need some *cough*smtp=mail=25*cough* medicine.
But your points are well taken...I have to wonder, tho, if the original poster has a grudge against americanwiccan.com? Call me cynical, but I suspect something like that...
James - I summon the unholy demons of apathy, sarcasm and cynicism!
Re:Let the rhetoric begin (Score:2)
One reason I suspect that closed source software is worse off when it comes to security is that a lot of the security risks associated with closed source software like Windows are ill-thought-out hooks for future enhancements and the occasional deliberate back door; that's the kind of stuff developers catch on an open source project when they look at the daily check-ins. Another is simply that companies like Microsoft are not forced, through exposure to their customers, to have any consistent coding standards or conventions; for example, their programmers can go on merrily using "gets", even though in an open source project a silly bug like that would be caught very quickly.
Re:Sometime security through obscurity is good. (Score:2)
In the example given, the solution is more likely to make life faster so you can dump the whole frame down to people, or guarantee that they won't drop a packet in time. Kill the problem at source, don't work around it.
Ever wondered how ssh and gpg manage to be secure even though the sources are available and the private key passphrase and data get stored in memory?
~Tim
--
Re:I'll toss in $5 (Score:1)
Full disclosure every time (Score:2)
Where's the incentive for a corporation to fix a security hole if they know that they can effectively keep knowledge of the existence of that hole a secret? Fixing problems costs money, covering something up is (usually) easier (i.e. cheaper) if you can catch the problem before knowledge of it grows out of hand.
Your point about script kiddies is well taken, however you have to admit that nothing motivates a corporation to fix a problem more than public attention on that problem.
My opinion (for whatever it's worth), is that attempting to keep knowledge of flaws in your product a secret is self-serving and unethical. At the very least, even if you don't have a fix for the problem, your customers deserve to know that the problem exists and if there is any way they can work around it. The corporations are *supposed* to be in business to serve their customers, not themselves.
--
Re:Hedge Shortcuts (Score:1)
--
Re:The first attack they can share... (Score:1)
url: http://www.msnbc.com/news/default.asp
What the hell is wrong with "security-through-obsc (Score:1)
I have always used the analogy that cryptography is like a safe: it limits physical access to the data/material. But you don't put your safe in the middle of a room; you hide it in a closet or in the floor, etc. This provides another layer of protection, of varying effectiveness.
Some famous bank robber was asked why he robbed banks; he replied, "because that is where the money is." Even though this was a humorous remark, it makes my point: because everyone knows that banks have money, that is where they go to get it. But what if you couldn't tell a bank from any other building? Then you would have to find out which damn building to rob before you could actually rob it. This ends my rant.
Re:So What? Security through Obscurity works. (Score:1)
The situation is much worse with respect to the internet, in which there is a small (?) army of script kiddies, all armed with metal detectors and pickaxes, randomly digging holes all over the place for the sheer destructive hell of it, and in which you've conveniently placed a sign (your URL) on top of your treasure. The question isn't whether one of them's going to find the treasure, it's how far will they have to dig and will they be able to break the lock on the treasure chest when they get down there.
Not "collusion" unless... (Score:3)
Cheers,
IT
Re:Congratulations! You just lost! (Score:1)
Moral of the story: security through obscurity doesn't work. It's a numbers game, a calculated risk, and the risk involved is far higher than other more proactive forms of security.
I don't follow your logic. You're saying that since you know one story where the govt failed to hide their operation, security through obscurity doesn't work. If you can't think of more than a handful of those stories for every war we've had, then the govt obviously has had far more success than failure with their technique.
Also, if a single example proves something to be a worthless concept, then security without obscurity has also had plenty of its share of defeats.
Would you be willing to do all your online banking if your bank told you, "We don't bother to encrypt your financial records or firewall our system from malicious hackers--but don't worry! All the data is kept on a URL so obscure nobody will ever come across it!"
Security through obscurity doesn't necessarily mean that their security IS obscurity. They would have regular security measures in place; it's just that they wouldn't release exactly what they are.
2. "We don't use any security measures to speak of"
Same thing as above: they're not saying that their obscurity is their only security, only that they believe obscurity enhances it.
Re:Secret Only If They Find Flaw First (Score:1)
--
Accidental Events? (Score:2)
An accidental event?! I can see it now: "Whoa, what was that? Did I just overflow a buffer or something? What the fsck is that root shell doing there????"
--
Re:The only way to true security (Score:1)
yay! (not) (Score:1)
These companies are composed of people, people who could leak these newly found vulnerabilities to the script kiddies anyways, or use the vulnerabilities themselves.
easy to make this the dominant security force (Score:1)
Ok. Done...
IT-ISAC Reporting Society: Join our reporting society and earn $400 cash for any new security exploit that you find and report! Terms & Conditions: ..... blah blah .....
5(g) Reporter agrees to refrain from disclosing to any third party and refrain from publishing, communicating, transmitting, or posting the Exploit, in any manner, other than as provided above in 2(a) Reporting Procedure. ..... blah blah .....
If you've found something, unless you have a strong personal interest in free security information, why wouldn't you want to make a few bucks?
divide et impera (Score:1)
They will understand that, even if it won't improve the actual situation, a "security through obscurity" environment will at least give them some sense of power.
They will put aside all rivalries and join together to create at least an alliance against a common enemy: Insecurity. But how will they understand that the great majority of the dangers involving security are due to human incompetence?
Will they virtually stand up against idiocy? Will they fire their own employees because they can be the weakest point of their network?
Or will they just cover this up and create a virtual enemy, pumping the figure of the 'oh-my-god-it's-scaring' so-called hacker?
Now they are rivals, but soon they'll be together (as who knows how many other companies) against freedom of knowledge, against the fact that the human being must learn through its mistakes, and so on.
My only hope? That they'll soon break this alliance, because there's no such thing as a common enemy, and if they won't understand, they'll just fight against windmills until they're tired. They'll create smaller internal alliances, they'll fight each other's assumptions, and in the long run nothing will change.
Perhaps I'm hoping too much..
As the Latins said, "divide and rule": "The 19 founders represent some of the industry's largest firms, but they come with historic rivalries. Cisco and Nortel Networks compete bitterly in sales of computer-networking hardware. Microsoft was found to have violated antitrust laws to influence contracts with AT&T and IBM; Oracle has admitted to hiring private investigators to dig through the trash of groups supportive of Microsoft. Can these companies, in an industry known for unusually aggressive executives, ever trust each other?"
This is plainly antitrust! (Score:5)
Having a cartel like this is not only unnecessary; it's plain wrong. It simultaneously flies in the face of libertarian notions of self-help and of liberal notions of the omnipotent government that can protect citizens (er, corporations) on its own. Like so many areas of our economy, things were just fine until the corporations decided to start merging into one giant monopolistic hairball. I urge you all to write your congressmen and senators. This must be put to a stop.
True innovation (Score:5)
I wish I had $750,000 to sink into a non-profit center so that I could email, telephone, fax and/or page my friends when something important happened.
*Sigh*
Too bad only big business has these capabilities. I guess I'll go feed my carrier pigeons now.
--Shoeboy
watch out oss developers (Score:2)
With the current boom in open-software products and their increased visibility to ordinary computer users, some of the Industries (monopolistic computer firms) decided to team up to be able to tackle these problems. "Linux is getting too big and these hackers are causing us way too many losses," said one OS rep (who then took a Jobs-style approach and started cursing!).....
ohhhh wait wait... stop the presses, thats the wrong story... this ones about crackers......
What no domain? GRIN (Score:2)
I can see the news story now "The Information Technology Information Sharing and Analysis Center website, used to share vital security information among members including Micro$oft, Oracle, Inhell, and more, has been shutdown after it was discovered that hackers had broken into it months ago and had replaced the real security and hacker info with false information making it even easier to gain access to systems from these companies"
Dumb idea (Score:4)
Another way this is bad: we have CERTs for a reason - to deal with this kind of thing. By forming this "coalition", they're further fragmenting the system of disaster recovery. CERT.org [cert.org] was created some time ago just for things like this, and it doesn't cost $5k a year to get warnings. It's free.
Propaganda is the best term for this, and marketing is a close runner up. If they really want to team up and help stop attacks on computer systems, they can work with everyone else instead of creating a members-only club.
My karma's bigger than yours!
Re:Dumb idea (Score:2)
Yes, that's true, but I have doubts about how long this group will LAST. Let me explain by first quoting from the article:
The 19 founders represent some of the industry's largest firms, but they come with historic rivalries. Cisco and Nortel Networks compete bitterly in sales of computer-networking hardware. Microsoft was found to have violated antitrust laws to influence contracts with AT&T and IBM; Oracle has admitted to hiring private investigators to dig through the trash of groups supportive of Microsoft. Can these companies, in an industry known for unusually aggressive executives, ever trust each other?
Distrust and fear will likely keep this group from taking off.
If a company with billions of dollars in revenue had some inside dirt on one of its multi-billion dollar competitors... "Hey! With this info, we could bring X to their knees. Nah, we couldn't do THAT!" Ya, right.
As soon as it even APPEARS that something like this has happened, the whole group would likely begin to collapse from distrust and fear of having the same done unto them. A little less is revealed, and then a little less. Heck the whole idea for the group is to keep information from others they don't trust... just how long can they trust their greatest competitors?
Treaty of Rome, Article 85 (Score:2)
<SLASHBOT>
The EU will soon be *easily* the largest economy on the planet (except China. OK, maybe India. You know what I mean). 500 million eager consumers with shedloads of cash. Enough cash to support some *very* fat lawyers. In the EU, we send our fattest, most offensive lawyers to Strasbourg, where they can do most harm.
Then we have this little thing called the Treaty of Rome, which has much the same purpose as the US Constitution, except you can't fit it on a sheet of A4, no matter how 'leet your PostScript skillz are.
Article 85 of the Treaty of Rome says some interesting things [antitrust.org].
One of the things it explicitly forbids is arrangements to establish contractual conditions that bear no direct connection to the subject of the contract, like tie-in clauses.
Now, if global giants like Sun, Cisco, Microsoft etc. use a forum like the one they have just set up to restrain trade, you wouldn't need a lawyer to win an antitrust case against them. My blind old dog (if I had one) could win it.
</SLASHBOT>
So, there you go. If they do *anything* that pisses off the EU commission, they'll get nailed to the proverbial tree.
For those too stupid to work out how to get rich here, all you need to do is to start up a tech company that relies on one of their products in a way that directly competes with them or one of their "valued partners", wait for a security flaw to be announced, prove that they did not disclose it to *all* their customers at the same time and *BLAMMO!* a lot of fat lawyers get even fatter over a period of several years.
If I had ~50 million Euros to burn, I'd do it.
Share and enjoy.
Re:This is plainly antitrust! (Score:2)
The stupidity and insanity displayed by the current government of the US of Corporate America is enough to piss anyone off. But let's be realistic. We don't need the politicians to fear us. We need them to respect us. And having someone kill politicians is just going to give them an excuse to further erode our freedoms. If you give them a reason, they will leap at the opportunity.
I agree with your sentiment, but the time for killing the president just because you're pissed off at our lousy government I believe to be at an end. Here's an idea: how about trying to convince the rest of America to get off their ass and vote their conscience? Don't listen to the assholes that say we are "ruining" our society if we vote for anyone other than the big duopoly candidates; vote for someone that will make a change.
We the American people are just as responsible as the assholes that are the presidents and congressmen of this nation. Sad as it may be, you shouldn't kill them just because they didn't do what we wanted them to do when we voted them in. We will not earn their respect, or their fear, by killing off a few of them either. All we will earn is more breakdown in the basic fabric of freedom. The reasons are simple. They already think we are criminals just because we breathe air. If we give them that one little excuse, they will slam down the iron fist they have so long been hiding behind the velvet glove.
However, having said that, I think that a full-scale revolution might do the trick. It will take more than just removing the president or any other "key" members. Until you eliminate the entire process, it's going to remain the same. But let's face it. Most Americans are obsessed with laziness, and revolutions are hard work. The American people, the people that should be concerned with the constant erosion of their freedoms (in the interest of protecting them from their own stupidity), are far, far more interested in sitting on their couches, throwing back a few Doritos and beers, and watching the latest garbage the information cartel is shoving down their throats through the "magic" box.
Re-educate the masses. Eliminating the stupidity at the top will not eliminate the stupidity throughout the system. That stupidity is rooted in the American people themselves. Hopefully someone can figure out a way to wake people up. If not, I'm afraid our children are going to be left with a shitty world.
Closed Minds (Score:2)
If you ask me, there should be less reaction to this sort of thing and more action. I don't hold a lot of faith in the big companies any more. I believe in the little fellows who work on stuff like the BSDs (now -they- understand security issues).
Hell, that Interbase backdoor wasn't dealt with by Borland/Inprise, but by OSS hackers. I say bring security concerns into the light, and let some more open minds worry about things like this. As a user and developer I would like not to be left in the dark by these close source, and closed minded people.
Missing the point (Score:2)
Talk about letting the rhetoric begin. You build up this big straw man and expect people to knock it down. Well OK, "poof," your straw man is blown down.
The Open Source argument is about access. It's about giving everyone (yes, even the bad guys) access to the source code. In a closed source world, the bad guys may already have access to the source code, but you certainly do not. The opportunity to find and fix things, such as security vulnerabilities (and backdoors), exists.
If you can't grasp this, then you've missed the entire point behind the free (as in speech) software movement.
The "security thru obscurity does not work" argument refers to security that depends on obscurity to succeed. If your entire security model rests on the proposition that no one must even find out how it works, then your security model fails the moment that obscurity evaporates. Which is a bad security model. Plain and simple.
Python
Secret Only If They Find Flaw First (Score:4)
This policy will only matter in the event that someone within one of these companies is the first person to discover the flaw.
Given that many flaws will be found by people outside of this group, and that it only takes one source to leak a flaw, I doubt this supposed secrecy will be very secret.
Re:Missing the point (Score:2)
Second, the reason I make my point is because there are a lot of different mantras that are OFTEN repeated by open source advocates, so-called security-buffs, slashdot zealots, etc. Just because you have not heard them does not mean they do not exist. It's awfully presumptuous of you to ASSUME that you have heard it all. If you doubt what I say, then simply go to numerous forums and read them more carefully, even slashdot, and I assure you that you will hear words to that effect.
Third, I submit to you that Open Source's contribution to security is negligible for 99.99% of the population, if the "entirety" of the argument is that "YOU" (as an individual, without the hyped "sharing" of security amongst those individuals) can fix everything yourself. In other words, unless you know C, security, kernel architecture, etc. well, the odds are that your argument simply does not apply to yourself. What's more, even if you have the skills to detect a bug or two, you probably don't have the inhuman ability to fix them all--it just takes ONE to send you up shit creek.
Fourth, many people CARE about my point, because I'm arguing effective security--not your moralistic "liberty", "freedom", or what have you. The upwards moderation would suggest that at least a few people think my comment worthy.
Fifth, I have other related points, that apply directly to the comment I was replying to one level above this (his viewpoint ties into one of the mantras). For instance, he essentially stated that it's impossible to put a backdoor in Open Source code, while it's possible in Closed Source. Well, there is no real defence for his position and I attacked it.
No, that depends entirely on who you ask. Many people take "security through obscurity" to mean that the only way that closed source software is secure is because it's obscure. Thus any closed source software is perceived as insecure and, conversely, open source software is necessarily (more) secure.
Sure, I'd generally agree with that statement. However, it is a little too broad. It depends on the environment, the users, etc. Much like the argument for certain cryptography: if cracking it costs more time, effort, and resources than what you could gain, it IS effectively secure. So plain and simple? No.
Likewise, the same applies to Open Source code (as so many people ignore). You can't merely code something, make it open, and just trust it to be secure--no matter how long you wait. Which raises another point (if you accept the previous position), you MUST trust the developers if you CAN'T entirely trust the "peer review" process (or yourself).
well, ok then... (Score:2)
Personally, and maybe I'm off-base here, I think a more public forum - though significantly more discreet than modern media - would better suit addressing security issues than a privately vested group. I mean, great, now all the "big" tech companies are helping to cover each other's asses. But who's looking out for the mid-sized companies, the small companies? Sure, we could say that the big fish are going to be targets for problems more often, but that's really narrow-minded and a bit selfish.
Anyway, I'm glad to see this happen, but I would feel better knowing that they were looking out for more than just themselves. Perhaps I'm becoming more idealistic lately? I don't know. Perhaps I misread what the article was saying? Anyway, there you have it, my (our) take on things.
Looks like we missed out on some juicy patent discussions whilst we were out... damn.