New Security Group Hedges Bets And Builds Hedges

7card writes: "OK, I was just doing my morning surfing and I found this article, which may be of some interest. It looks like the world has another club of security experts with the goal of security through obscurity. Some of the members include Microsoft, Oracle, and Cisco." Reader Junin points to this CNET story as well.
  • by Bonker ( 243350 ) on Tuesday January 16, 2001 @01:13PM (#503617)
    Well, this is a dangerous step toward the evolution and existence of sovereign corporations.

    The real problem with the concept of 'private networks of information' is that they tend to grow, especially with the impetus this one has. It's in their best interest to keep as much of the knowledge they gather classified for as long as they can. If there is a perception that this kind of limited sharing is effective and one has to pay to become a member, this organization will grow by leaps and bounds. Unlike the federal government, there are no laws in place to protect average citizens from this type of secrecy.

    What's even more disturbing is the kind of actions this organization will eventually take to protect its secrets. At first it will be legal actions. They will sue to prevent people from releasing important security information. Then the proliferation of 'inter-agency' controls will increase, say giving back-doors to certain law enforcement agencies into certain applications. I'm certain this already goes on to some extent, but this gives tech companies a reason for this to become common practice.

    How long is it before this kind of alliance has the ability to conduct its own 'Security' raids and anti-hacker activities through its contacts in law enforcement? Not too damn long, if I'm not mistaken.

    What laws are in place to keep a corporation from harassing and causing problems for an individual? Abso-fricken-lutely none. American business law is written to favor a business or corporation over an individual every single time.
  • by rjh ( 40933 ) <rjh@sixdemonbag.org> on Tuesday January 16, 2001 @01:27PM (#503618)
    > Imagine how secure your data would be if nobody knew where it was except you

    Reminds me of the US raid on Tehran. The special-warfare troopers were out in the middle of the desert, in a spot so remote nobody would be there looking for them... and they got discovered by a busload of people who stumbled across the area by virtue of getting lost.

    Moral of the story: security through obscurity doesn't work. It's a numbers game, a calculated risk, and the risk involved is far higher than with other, more proactive forms of security.

    Would you be willing to do all your online banking if your bank told you, "We don't bother to encrypt your financial records or firewall our system from malicious hackers--but don't worry! All the data is kept on a URL so obscure nobody will ever come across it!"

    > At Americanwicca.com, we make sure that our site is utterly secure by refusing to release details.

    Speaking as a cryptographic engineer, I find this amazingly hilarious. :) Guess what--you just did release a detail. Namely, you said that your site's entire security is based around your refusal to release details. That means that you've just announced to the world:
    1. "We think our site is utterly secure"
    2. "We don't use any security measures to speak of"
    3. "There's something we consider important at that site, otherwise we wouldn't feel the need for utter security"

    To this utter naivete, your typical malicious attacker would respond with:
    1. (convulsions of apoplectic laughter--no site is "utterly secure")
    2. "Sounds like an easy mark"
    3. "Geez, they've got something they think is important AND they don't use any security except the general lack of knowledge of their site? What was the name of that place again?

    ... In other words, you're being just plain silly.

    What's more, you're also lying.

    > At Americanwicca.com, we make sure that our site is utterly secure by refusing to release details.

    No, you make sure your site is secure by locking down ports 21 and 23 for starters (telnet and mail). I know this because I just tried to telnet into them to see if they were open. So if security-through-obscurity is so darned good, why do you need to take the additional step of locking down your ports?

    The answer is: because security through obscurity is a failed policy. Always has been, always will be. Locking down ports, on the other hand, is a smart and proactive policy.
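
What the parent actually did is easy to reproduce. A minimal sketch in Python; the hostname is a placeholder, and run this only against hosts you administer. A refused connection only means the port is closed or filtered, not that the host is secure:

    import socket

    HOST = "www.example.com"  # placeholder -- substitute a host you administer

    for port in (21, 23, 25):  # FTP, telnet, SMTP
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(3)
        try:
            s.connect((HOST, port))
            print(f"port {port}: open")
        except OSError:  # covers refused, timed out, and unreachable
            print(f"port {port}: closed or filtered")
        finally:
            s.close()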
  • It may seem to work for the banking industry... but how could you even tell? No way will these kinds of companies be able to communicate sensitive internal problems to rivals in a timely manner. Management will stifle it every step of the way.
  • Obviously, I'm assuming this is a troll. But just on the off chance it isn't, let's talk about the problems with your point.

    First of all, logically extrapolating, there's some substance to what you say: if someone *doesn't* know about the security flaws of a system, they can't deliberately exploit them. However, your "hidden treasure" analogy isn't valid. For hidden treasure to be a useful mechanism, it has to stay hidden. The best way to keep it hidden is to put it somewhere that people are unlikely to look, like in a hole in an anonymous sand dune on a faraway island.

    A modern OS, application platform or Web data centre isn't a desert island with much sand and few shovels. It's a series of interconnected systems based on a limited (actual) range of semantics. And it's faced by people who understand black- and white-box testing, boundary cases of data and so on -- the equivalent of seismic sensors, metal detectors, ground-penetrating radar and the like. And they all have a pretty good idea of the contours of the landscape and where different systems meet, and might not quite tessellate. They also understand where sysadmins are lazy, or tend to be less able.

    All this means a better analogy is a bank. Lots of people walk in and out, and everyone knows where the treasure is. It's likely easy to find out how it's protected, too. But it's still hard to get at, because of well designed systems, monitoring and response procedures.

    By all means take your pick: dig a hole in your back garden (or hell, make it tricky, use someone else's garden) or put your money in the bank. But don't sell those two options as having the same security level in the real world.
  • Just look at the game Quake when it came out and after the source was released. Yes, there was cheating when the source was closed, but there is a lot more cheating now that the source is open. There are ways to solve the problems if the source is open, but they would have been inefficient back in the days when Quake came out. Security is a tradeoff against performance: the higher the security, the lower the performance of the product. In most products this is not a problem, but with a game that's designed to work over a modem, performance is critical.

    Let's take an example from Quake. Quake sends extra data to a user for prediction purposes, like another player's location, even though the user is not supposed to see that player. This is so that if the user doesn't get a packet update with the other player's location in time, the client side can predict where the other player will be and whether he is about to pop out from a corner and become visible. Sure, this data can be taken away from the user to prevent cheating, but then the performance of the game drops critically. The users have to be kept unaware that this information is available to them.

    So what are some possible solutions? Well, not sending the data to the user is the best solution, but then this will decrease performance. The user's framerate and updates on what other people were doing would effectively be limited by their ping time or modem speed. What about encrypting the data? Well, somewhere on the client's machine the user has to decrypt the data and perform calculations on it, so the data is still available. How about hiding the data inside a binary that the user does not have the source for and therefore does not know where to look? This will prevent the user from finding the data while still allowing the client program access to it, though only until the user gets smart and finds where the data is hidden. This is a tradeoff between security and performance.

    So what is the ultimate security protection? Well, just have keystrokes sent to a server, then have the server render an image, a "screenshot" of the game, and send it to the user's machine to be drawn. This way no information is sent to the user except an image, which only the user can interpret. Is this very secure? Yes, but the level of performance would be horribly low. Somewhere a tradeoff has to be made between what information a user is allowed to see to increase performance and what information stays hidden. (A toy sketch of this tradeoff follows the comment.)

    That's just my splurge of the day, wonder if it makes sense...
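
A toy sketch of that visibility tradeoff in Python. This is not Quake's actual netcode; the radius constant and function names are made up for illustration. The server withholds another player's position unless that player could plausibly become visible soon, trading prediction quality for cheat resistance:

    import math

    # Hypothetical server-side filter: only include another player's position
    # in an update if that player is near enough to matter for prediction.
    # A tighter radius leaks less data to wallhack-style cheats, but makes
    # client-side prediction (and thus perceived performance) worse.
    PREDICT_RADIUS = 50.0  # made-up tuning constant

    def build_update(recipient_pos, all_players):
        """Return the subset of player positions this client may see."""
        return {
            name: pos
            for name, pos in all_players.items()
            if math.dist(recipient_pos, pos) <= PREDICT_RADIUS
        }

    players = {"alice": (0.0, 0.0), "bob": (30.0, 40.0), "carol": (200.0, 0.0)}
    print(build_update(players["alice"], players))  # alice and bob kept, carol withheld

Here bob sits exactly at distance 50.0, so his position still ships to alice's client for prediction; carol's position never leaves the server, so a modified client has nothing to read.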
  • A sterling example of an historical BOFH.

    Maybe I should get fitted for an eye patch? :)

    James - Arrrrrrrrrrrr!

  • Isn't this like what gangs charge in "their" neighborhoods? Protection money? If we give them $5k a year, they'll give us their information. Otherwise, we're screwed, and they'll decide whether or not they wish to come forward. And, as we've seen in the past, they will only come forward if there is some benefit to them.

    This quote is really the best part of the article. Does anyone -not- see the hypocrisy?

    "We have to put down our differences and our competitiveness and share more if we're going to prosper together," Mr. Copeland said. "If you're going to wall yourself off and not share, then you're going to be hurting. This will be a venue and a forum where we can start to build a level of trust."

    Um, aren't these companies going to wall themselves off and not share with the rest of us?
  • As long as nobody finds out, that is.

    And therein lies the rub. The basic presumption is if they don't know, they can't find out. Except, of course, that they can find out. And if they do, you won't know about it until after something happens. Screw external attacks, look internally. Are you 100% certain that everyone who knows those deep, dark secrets is completely and totally trustworthy?

    > At Americanwicca.com, we make sure that our site is utterly secure by refusing to release details.

    Shouldn't they be running a more recent version of Apache?

    > It is a helpful element in any security arrangement, ever since Blackbeard buried his treasure in the Caribbean.

    Blackbeard had other deterrents to those who knew his secrets, and might be tempted to steal 'is loot. Your basic shooting, stabbing, beating, keelhauling, or walking the gang plank all have their place. But for the most part, we can't use any of those tools...sigh

    James

  • The thing I like about this arrangement is that the organization they create is listed as a "non-profit" center.

    I'd like to set up a non-profit company to develop my for-profit products, and then write off all my R&D as contributions to a non-profit organization.

  • Is there any connection, I wonder, between this cabal and Microsoft's recent decision to copyright its bug reports?

    Presumably, a security hole, classified as a bug, becomes property of the consortium and a value-added commodity.

    This gives rise to a potentially revolutionary revenue stream for Microsoft and friends...

  • Yes, I read the article before I posted (hence citing some of the companies involved as examples), and Clinton's approval makes it all the more disturbing. Let me share something with you.

    In our nation's (US's) history, there have been four presidents assasinated. Two were saints. Two were mediocre. But four out of soon to be forty-three is a miniscule percentage. It's barely over 9%. No wonder our government and our leaders have gotten out of touch with the American people. They're not beholden to us anymore, because they no longer have the fear of the masses beaten into them. That must change.

    There are four more days before Clinton leaves office. He must be shot today. And then we have to shoot Bush when we're done with Clinton. There must be blood on the floor tonight, if our government will ever learn to tread lightly on the liberties of man.

    This information-cartel outrage is only the latest volley in an ongoing war we have been fighting since we decapitated Louis XVI and drank from his blood at the guillotine (ingesting his divine sovereign mandate and becoming a true sovereign democracy). Our leaders are out of touch and out of their minds. The mail isn't being delivered anymore. The snake problem is at a fevered pitch. A grown man can't even walk to the corner store without getting passing stares from gawking subversives. Is that the kind of world we want to leave for our children?

    The information cartel must be killed tonight. Tonight. Tomorrow, we'll go after the guys who planned it. They must be brought to a pointy reckoning. Shudder them.
  • by FallLine ( 12211 ) on Tuesday January 16, 2001 @02:03PM (#503628)
    Before I begin, let me state that there is some merit to the Open Source security "process" (if you could call it that) AND there are legitimate concerns with companies merely shrugging off ALL concern for security while depending 100% on so-called obscurity. That being said, I have a real issue with going from "'security through obscurity' is not a cure-all" to the Open Source mantra that "security through obscurity" has absolutely no merit. A couple of key points that all too many just gloss over:

    First, the only way the Open Source security philosophy really works is if people ACTUALLY (as opposed to theoretically) sit down and read the code for security flaws in its entirety. I would argue that in a great many cases, no one even approaches this level. Because the Open Source community has very little centralization of effort, there is going to be a great deal of redundancy. In other words, even if you believe that 1000 security "experts" will spend some time reviewing the code, they may well be looking at the same piece of code (which, in and of itself, can be a good thing), while leaving other pieces of code largely unscrutinized. Furthermore, I suspect that very few people truly give the code the time of day.

    Second, while Open Source makes it easier for white hats to find flaws, it also makes it easier for blackhats to find and exploit flaws. This is particularly relevant if, as I point out, the code is not getting the right kind of attention from white hats.

    Third, Closed Source can make it HARDER and DULLER to find flaws. Many people seem to assume that just because obscure products have been cracked, there is absolutely no redeeming value to a product being closed. In other words, at any given moment in time, if we could somehow have two parallel universes that would allow you to have the same piece of code (let's say the latest stable Linux kernel with all patches applied) in Open Source and Closed Source at the same time, without knowledge leaking either way, most reasonable people would prefer the Closed Source option.

    Fourth, security flaws are found all the time in Open Source code projects. A lot of them are presumably stable pieces of code that have already been put into production. These systems get hacked REGULARLY. Now this isn't to say the same doesn't apply to closed source, but you can't ignore the problem either way.

    Fifth, many people constantly bring up the point "well if you just patch regularly...". While I agree that everyone SHOULD do this if possible, it's not always possible, and it's frequently not economical. If there is a piece of closed source code that hasn't had any published (or suspected) security flaws in 4 years of existence, while the competing Open Source alternatives have had many (constantly forcing their admins to patch), then that's a real issue for any competent admin.

    Sixth, it's entirely possible for a Closed Source company to do a full internal security audit of their code. It may not be perfect, but it's better than nothing. Although I fully realize that hardly anyone does this, it'd be a mistake to ignore this as an option. If a company can get _most_ of the (presumed) benefits of an Open Source security audit without the corresponding exposure of their source code to blackhats (or at least less "risk" of that), then that might be very good indeed.

    In summation, this is not nearly as black and white as people portray it. It comes down to numbers and many other unquantifiable elements. A simple philosophy is not a one-time cure-all. For instance, as I have alluded to, if there are very few white hats reviewing the code (say 50), and those white hats are mostly replicating each other's work (say 15% efficiency), while any black hat with proper monetary motivation can put the effort into cracking easy-to-read source code, then you might well be worse off. The same goes the other way around: if a software company, as all too many do, rushes its product out with little to no review and depends entirely on obscurity, it might well use some routines that are well-known security problems and can be easily searched for....

    The bottom line is that it is just as stupid to assume your carelessness will be automatically covered by "peer review" (or "Open Source") as it is to assume it will be covered by "obscurity".
  • Not only did I post it with my name, I meant every word of it. You are a coward and a hypocrite.
  • The RSA algorithm is not an obscure algorithm; every single detail of the algorithm is in the public domain, and a staggering amount of academic scholarship (the vast majority of which is also in the public domain) is available.

    If I pick 17 as one of my RSA primes, that doesn't change the algorithm. Okay, so I'm picking a stupid prime, but the algorithm is unchanged. If I pick a 300-decimal-digit prime, that doesn't change the algorithm, either.

    "Security through obscurity" means "as long as I don't tell you how it works, then the system is secure".

    Real security is "I'll tell you how it works, I'll tell you about all its known weaknesses, and I'll help you understand it inside and out--and it'll still work within its specified operational parameters."

    In the case of RSA, part of its specified operational parameters is that the private part of the keypair is kept secret.

    Where's the obscurity?

    (Sidebar: cracking RSA does not rely on the private prime being obscure. For a very long time it was conjectured that breaking RSA was dependent upon factoring an extremely large composite number into two primes, but the recent attacks against PKCS1, etc., show that it's possible to stage cryptanalytic attacks against RSA that don't involve factorization.

    RSA is based on three conjectures. One, that P!=NP. Two, that factorization is NP-complete. Three, that factorization is the only way to break RSA. Neither of the first two conjectures has been proven, and the third conjecture has been proven false.

    That said, RSA is still a well-trusted algorithm. The non-factorization attacks are well-known and fairly easy to avoid.)
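
The parent's distinction (public algorithm, secret key) fits in a few lines of Python. A toy sketch with absurdly small textbook primes; real RSA needs large random primes and padding such as OAEP:

    # Every step here is public knowledge; the only secret is d.
    p, q = 61, 53                # toy primes -- never this small in practice
    n = p * q                    # public modulus
    phi = (p - 1) * (q - 1)
    e = 17                       # public exponent
    d = pow(e, -1, phi)          # private exponent: modular inverse (Python 3.8+)

    m = 42                       # message, 0 <= m < n
    c = pow(m, e, n)             # encrypt: c = m^e mod n
    assert pow(c, d, n) == m     # decrypt: m = c^d mod n

In real deployments, n and e are published while p, q, and d stay secret; that key secrecy is one of the algorithm's specified operational parameters, exactly as the parent says, not obscurity.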
  • > At Americanwicca.com, we make sure that our site is utterly secure by refusing to release details.

    A lot is known simply from a cursory inspection--site hosted by Hypermart [hypermart.net] on elderly FreeBSD, running software that anybody can buy from e-classifieds [e-classifieds.net] (and privately audit for security holes), etc.

    Your site (assuming you in fact have anything to do with it) of course isn't utterly secure.

    People use security through obscurity all the time.
    But people who rely on security through obscurity to protect their networks are simply waiting for trouble to happen.
    Why are you using a fake nick, after all?
    Because you're childish and "elite"?
  • The difference is that, with Open Source, the good guys can find AND FIX the holes. With closed source, you're dependent on the good will of the companies.
  • I'll send $5 too, but only in CDN$... ;)

    Actually, what went through my mind first when I read, "Other technology firms will be able to join the alliance for $5,000 a year" was, "Gee, it's just like a fraternity. How nice it must be to pay to have friends."

    The second thing that went through my head was, "Guess this is a big boys' club only. $5000 a year to join isn't much if your total assets (or asses) are zillion$, but it effectively puts most small businesses out of the running...and they're probably the ones who need something like this organization the most...if something like this is truly needed."

    I don't in principle think it's necessarily a Bad Thing[TM], but even a kitchen implement in the wrong hands...

    ?!
  • RSA keys are not purely entropic--they possess a great deal of predictability, which is why the keys are so long. For instance, if you're using a 512-bit prime, you can be assured that bits 0 and 511 are set.

    If bit 0 is not set, then the number is evenly divisible by two, and it's not prime. If bit 511 is not set, then it's not a 512-bit prime (it's a 511-bit, or what-have-you).

    Right there I've predicted two bits, out of 512. With more advanced mathematical techniques you can discover more properties about the binary representation of prime numbers, which helps you winnow out even more possibilities.

    It's been widely conjectured that a 1024-bit RSA key is roughly commensurate to about 128 bits of entropy. Of course, distilling entropic properties of asymmetric keys is more black art than formal science, so I generally err on the side of rampant paranoia and guesstimate a 1024-bit RSA key as roughly equal to an 80-bit key. Still plenty good for most purposes, but if you're worried about major governments, 2048-bit keys are appropriate.

    Moral of the story: asymmetric algorithm keys must possess a large degree of entropy to be useful, but the key itself is not one hundred percent random.
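
The two predicted bits are easy to check. A quick sketch, assuming sympy is available for prime generation:

    from sympy import randprime  # assumption: sympy is installed

    p = int(randprime(2**511, 2**512))  # a random 512-bit prime
    assert p.bit_length() == 512
    assert (p >> 511) & 1 == 1          # top bit set, by definition of "512-bit"
    assert p & 1 == 1                   # bottom bit set -- an even number this size isn't prime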
  • No difference.
    It isn't that hard to edit a binary to include a trojan as well.
    For example, you could find the part of /bin/login that calls crypt() and add a call to your own function before calling crypt(). (Patching crypt() is a long-honored tradition, used for example in the old telnetd LD_PRELOAD bug.)

    If you doubt this, I encourage you to take a look at, for example, fravia's site. (Use Google.)
  • > No, you make sure your site is secure by locking down ports 21 and 23 for starters (telnet and mail). I know this because I just tried to telnet into
    > them to see if they were open. So if security-through-obscurity is so darned good, why do you need to take the additional step of locking down
    > your ports?

    Since you mention those two ports, out of curiosity, did the prompt identify the software running on those ports? (e.g., sendmail, postfix or exchange on port 23?)

    Another simple step to take is to make sure that your web server always returns a 404 error if someone looks for non-existent pages. (You'd be surprised how many web servers don't do this, & cheerfully identify the software running instead.)

    The reason I mention this is that I've seen it mentioned in several different places to disable self-identification of server software -- it's trivial to do for most of these applications, & it makes a cracker's job a bit more difficult.

    No, if you take these measures you can't unsubscribe from your favorite security mailing list & still sleep soundly at night. These steps will only slow down the determined cracker -- maybe enough so that you can catch the miscreant in action & foil him.

    Geoff
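
Geoff's self-identification check is a one-minute test against your own server. A minimal sketch; the URL is a placeholder, and a 404 for a nonsense path is expected -- the interesting part is whether the Server header names your software:

    import urllib.request
    import urllib.error

    URL = "http://www.example.com/no-such-page"  # placeholder

    try:
        resp = urllib.request.urlopen(URL)
        code, headers = resp.status, resp.headers
    except urllib.error.HTTPError as err:
        code, headers = err.code, err.headers  # a 404 raises, but headers survive

    print(code, headers.get("Server", "(no Server header -- good)"))

An empty Server header doesn't make the box secure, but it's one less freebie for the casual scanner.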
  • "{blah}...launch the nonprofit center, known as IT-ISAC."

    Just say "itty-sack"
  • If I left a piece of paper with the password for my NT box on the front lawn, I would not announce it either, dummy.

    A much better example is that I bought my house lock from Acme, and Acme keeps private how the innards work, not revealing the fact that they just figured out that all hairpins in the world work just fine if bent a certain way.

  • > This is one of the most common and most idiotic arguments against the increased security of open source. It just plain isn't true. It assumes that my ability to have the source code suddenly precludes any other method of finding security problems. Obviously such an argument is flawed.
    That is not what I meant at all. Perhaps my statement was a little confusing, but this is slashdot after all; we can't always review every last word. What I meant to say (and I still think is quite obvious) is that the only necessary, positive, and significant contribution to security that Open Source can make over and above what is offered by Closed Source is the opportunity for wide "peer review". Most everyone would agree with that. What I do disagree with is the assumption that code DOES get reviewed properly.

    > If I have the source code, I have every method of finding security holes available to closed source projects, IN ADDITION TO the source code. If I find a problem, I can then study the pertinent code, report it to others that are interested, and even try to fix it myself. The fact that it's open gives me more options, and one extra avenue toward finding and closing security holes. It doesn't take away any of the traditional closed source methods.
    Yes, you have the opportunity to review the code, in addition to potentially reviewing the benefits of other people's security reviews. However, I feel that you, and many others, also overlook the countervailing side effects of it being open--its vastly increased exposure to blackhats through easier spotting of vulnerabilities and easier exploitation of them. In my opinion, if you're not looking at these two elements, the opportunity for peer review and the potential for blackhat review, as counteracting forces, then you're not fully grasping the situation.

    In other words, different situations can totally change the balance. In one situation, closed source may be much more appropriate, while in another open source will be. This is a view that is simply not taken by a large part of the community, thus I shall continue making my point.

    > Let's just compare the time to fix most security problems between Microsoft products and Linux. Under the closed source Microsoft products, most security problems are fixed in 'the next service pack', and often the next service pack doesn't fix the problem. When a security problem is encountered in a Linux-based system, it is often fixed within a couple of days. One example was a security hole found to be present in most Linux systems (I don't feel like looking it up right now); the person writing about it had intended to inform the Linux distributions, then wait a while to give them the opportunity to fix it before he posted to bugtraq. He was complaining because long before he could post to bugtraq, half of the Linux distributors had already made a fix and posted it on their website.
    It's hardly fair to draw conclusions about closed source software based on MS's products and then compare it to a very limited open source kernel; even the apple has more in common with the orange. Though you may disagree, Linux is JUST a kernel, and a rather simple one at that (not necessarily a bad thing, but it's relevant to the question); what's more, it's highly modularized. These things fundamentally alter any comparison.

    > With Open Source software, I am not limited to when the person who owns the software decides to fix it; I can fix it myself if I have the skill. With closed source software, I am completely dependent on someone else.
    Sure, with closed source code you're completely dependent on the developers. However, you're overlooking, or not pointing out, other important factors. For instance, you're highly dependent on the developers anyway (be they open or closed). With, say, an RDBMS, your concerns extend not just to security but also to data integrity. So you're trusting them already; you're not entirely self-sufficient, no matter how smart, educated, skilled, or available you are. Furthermore, most closed source products are powered by a market-driven mechanism. Any developer that alienates his consumers for too long with security concerns faces their wrath, and hence, loss of profits. In addition, not all fixes are trivial. Few people have the time, skill, and ability to fix the problem for themselves. If the community cannot or does not fix the problems and you cannot, then you're up shit creek without a paddle. (Which is why the community's efforts are terribly relevant.)

    One issue that many people ignore is the importance of the developer who actually writes the code. The original developer has a HUGE impact on the security of the code. If that developer is either malicious or careless, security problems WILL happen--patches are not necessarily adequate (both from a timing and an implementation POV). I would argue that given the "Openness" of Open Source code, you are potentially exposed to the laziness (or maliciousness) of hundreds of developers. Though it is true that many closed source firms pay little attention to security during the release process, closed source firms can also hold their developers more responsible. It's not necessarily always better one way or the other, but it can (and does) come into play.

    > I wouldn't say they even get one of the major benefits of Open Source. Open source allows many people to look at, and fix (always remember the "and fix" part; it is very important) the code. Yes, an internal code audit would turn up several of the security problems that get found by people poring over an Open Source project, but still, the security problem remains until it is fixed, during which time the black hats can either find or exploit it.

    > Under Open Source there exists the opportunity to have numerous code audits going on all the time by numerous groups.
    Yes, but as I have said numerous times, this doesn't necessarily happen. If it doesn't, it can have tremendous negative effects on the effective security of the product. I would argue that it often doesn't happen at all, and when it does, the quality is all over the map.

    What's more, much like software development in general, 99% of the work is performed by a relatively small group of "experts". (If you don't believe me, try reading bugtraq and other similar forums--you'll generally see the same people.) In other words, I question the "wideness" of the review; if it's not wide, it hardly has an edge on properly reviewed closed source.

  • dear lord, /. sure brings in the odd ones, eh?
  • Port 25 is mail, 21 is FTP, 23 is the default telnet port.

  • I don't see this as having a big impact on security. Mostly I expect they'll just be faxing around BugTraq posts. This is because most of the real vulnerabilities are not found by the software vendor but by users.

    I think this is more of a publicity thing. Over the past year or so there have been some pretty high-profile security problems that have made the software industry look pretty bad ('oh my, the mighty Microsoft got "hacked"'). I think they're just doing this so the general public thinks that they're taking security seriously. I doubt it will really change much about the companies' approach to security fixes.

    Of course, that's the technical end of things. I don't approve of the social message that sends out. I don't like the trend that the industry is taking with regard to openness and accountability. It goes back to the thing about how the quality of something goes up when it's being developed under public scrutiny (why I love Debian so much!).

    noah

  • I'm not going to defend the specific companies involved with this...Of the three mentioned up top, for example, all of them have been guilty of some degree of security through obscurity in the past.

    However, having a members-only club with sharing of information doesn't directly relate to security through obscurity. Saying that any closed source or hidden method of security is 'security through obscurity' just because it's closed is a perversion of the term. Many closed systems actually have adequate security that wouldn't be compromised if the system were open.

  • I am curious how they determined the cost for theft of proprietary information. Also, this seems like a drop in the bucket compared to the time and money that any one of these companies would be spending on security to begin with.
  • It's a genuinely unclear decision whether to fully disclose every security hole or to shut up about it until the hole is fixed (or forever, whichever comes first). Both sides have some good arguments justifying their case, but it is unclear which method results in the highest security.

    The point of the full disclosure folks is that once a hole is found, it will be exploited by those who know. Therefore it is necessary for everyone to be aware of these holes in order to create counter-measures aimed at closing them. Exposing all security hazards also has the side effect of forcing software houses to release a security patch more quickly. Since no security hole is safe from hackers, it makes no sense to try to hide them from the public, since the public (or at least its malicious subset) is probably already aware of them.

    The other side of the coin says that security holes should not be announced, for the express reason of preventing massive exploitation of them. This line of reasoning has some solid evidence behind it. *Real* hackers with the ability to find these holes are few in number, but the script kiddies with virtually no skills whatsoever are legion. It is arguable that the damage caused by a few 'in the know' is far outdone by the damage of the kiddies with their point-and-click hacking devices. Likewise, by the time the exploits are known to places like Bugtraq and the various software houses, the hole has pretty much been well exploited by the discoverers. Hiding the exploits from the general public then seems like quite the pragmatic thing to do.

    So which is it? Disclose every exploit openly or hide them until they are fixed? I don't know.

    Dancin Santa
  • Someone want to keep track of pledges and see if the /. community could pull this off? Maybe get Andover to pitch in a bit extra if we show enough support?

    Or maybe I'm just nuts... either way, it'd be damn funny (read: ironic) to see /. & co. pull this off.

  • > Imagine how secure your data would be if nobody knew where it was except you - you wouldn't need any expensive safes or firewalls then.

    Not unless script kiddies obtained some sort of "port scanning" software, of course.

    If I know I'm being trolled, and I respond anyway, does it make me more or less of an idiot?

  • What about accountability?
    If a security hole is discovered in a piece of software, I guarantee you it will get fixed faster if the whole world knows it's there. Companies have no incentive to fix holes unless they receive pressure from outside. Your argument doesn't address that. This agreement doesn't address that.
    This agreement makes it easier for these corporations to sidestep public criticism of their insecure software. Can we expect these companies to act outside of their best interests and expend valuable resources to fix software that no one else knows is broken? Of course NOT!!

    Fixing security holes takes programmers.
    Programmers take money.
    Corporations like to make money, not spend it.
    Therefore, security holes will not get fixed.
    It's that simple.
  • This does seem like an intelligent tactic - it's gotta be better than finding out about the latest exploit/virus/whatever your system is vulnerable to from CNN. I think what most people are frustrated about is that if Microsoft were more honest concerning Windows flaws and weaknesses, many of these exploits wouldn't be such a huge problem.

    Then again, I suppose it isn't good business to admit that your primary email proggy (Outlook) is a bored script kiddie's wet dream.

    What I want to know is how long do they think they can keep this info from the press? Leaks, Bad!

  • You forgot to mention one other point that an attacker would reply with:

    4. That sounds like a challenge! You're on.

    --
  • So let me see if I understand this correctly. All of the industry heavyweights are going to get together and secretly tell each other AND THEIR COMPETITORS everything that's wrong with their products? I'll believe that when I see it.

    What will likely happen is that every bug that comes up will be seriously considered for political and economic fallout, and they'll only let the information that's relatively safe to them get out to this group. So only the truly innocuous bugs will get dealt with, and the big nasty ones will still be out there.

    And you know where the nasty bugs will get discussed? Bugtraq! :)

    I'd save my money if I were Cisco or Oracle or what have you. The only possible value in this is getting some dirty laundry on your competitors, and that's only if they're dumb enough to tell you in the first place (and if they are that dumb, they'll be dead in a couple years anyhow).


    ---

  • You could raise $5000 in no time. Now it would be fun to see if they would let Andover join (I assume it would have to be a corp.); I'm betting they won't. I would also be very shocked if whoever joined did not have to sign a stack of NDAs about a foot tall. I would donate a couple of bucks just to see them try to come up with a reason to deny us. The fact that the government is supporting this scares me (and while you might question their motives, they have people who know enough about security to know that this is not a good thing). I'm not sure why; it just scares me on a gut level.
  • I find it funny that this (mentioned in the article) was all brought about by President Clinton suggesting that the tech industry create an exclusive "members only" club like this to "promote security". (Of course, this is also the point where I stopped reading. Any tech company that takes the advice of a politician....left to your imagination.)

    The funny thing is that they are trying to emulate the spirit of open source while still remaining closed. They want to "share" information that could be of great help to them, but they don't want to share that information with the public at large. Something about that just strikes me as wrong. Their idea is that they are protecting us (the public) and themselves from more debilitating attacks, but isn't this entire idea flawed? As the poster I am responding to said, security through obscurity just doesn't seem to work.

    Granted, open source isn't perfect. But it seems to do the job pretty well. And apparently the businesses involved in the creation of this new "security" group are aware that an open policy can do some good. But their idea that only they (as in the special multi-national interests/corps) should have this "open" information seems kind of a deterrent to the idea of "open" information.

    Opening up your information to a bunch of like-minded individuals in similar situations probably isn't going to solve underlying problems any more quickly. It's the fact that such hugely diverse people can look at the same problem from so many angles that lets open source projects solve security problems quickly (when they need to). Bringing in someone with a fresh and possibly completely new way of looking at something is always good for any project.

    But another way of looking at this is that they are going out of their way to adopt as many open source ideas as they can without truly admitting that open source ideas work. Maybe eventually someone there will get a clue that if opening things up amongst the companies was good, perhaps opening up further would be better. I don't really see this as a conspiracy. But I think it's kind of funny. Like one of the ACs in this thread said, they've set up their own little closed source version of the OSS community. And the AC is right, it is kind of cute, in an odd way.

  • You have to be a Cisco employee to view some of the bugs
  • As with so many things, running to either extreme won't solve the problem. Yes, full disclosure will make sure a hole is plugged sooner. In the interim, however, all the kiddie crackers in the world can exploit it. And people with a proprietary OS like Windoze are stuck until M$ can release a patch (which will be significantly after the equivalent patches for OSS are out -- ESR proved that the bazaar can react faster than the cathedral.)

    My thought: have a consortium analogous to this, but include every sysadmin for every major and minor company. (Perhaps even mailing lists for specific issues... e.g. HTTP vulnerabilities get mailed to the Apache group, MS IIS team, etc., but not to, for instance, Intel.)

    Of course, with this many people involved, the news will leak out anyway. But it will at least give the people who are going to fix the problem an insider's edge -- so that, as the story hits the front page of the newspaper, the OSS guys are already posting a patch.

    It's worth a try, anyway.
  • In addition to the NDAs, it's very likely that some members have veto power over new members. A prospect who was seen to be joining in bad faith probably would get their $5k sent back to them.
  • It works fine in Mozilla 0.7.

    I started using it recently at work, and it's muucchh better than previous releases. Still not quite ready for prime time, but damn close.
  • Yes, with the employees of Oracle (who have no time) and the employees of Microsoft (who have no time and no skills) looking for stuff that hackers already know about, things are gonna be more secure. Please. Whilst the Cartel is sitting on discoveries so they can take their time fixing the damn things, the rest of the world is going to keep doing what it already does, i.e., find bugs and report them. This is just a big excuse to delay releasing patches.
  • The topic was also covered by Reuters [yahoo.com].

    My, the original link really made my eyes hurt, even with Junkbuster [junkbuster.com].

  • You just lost your house key in your front yard on your way out to work, before you locked the front door. Which would you do: 1) get a bullhorn and announce it to the neighborhood and leave for work, or 2) keep it quiet, lock the front door from the inside, leave quietly out the back, and look for it when you get home? Nuf said, people. Stop the black-helicopter conspiracy theories and the if-it-ain't-open-source-it's-crap knee jerks for a second and think, think, think. These companies produce virus alerts and security alerts for the public, and nothing they said indicates they won't continue that effort. Their alliance is for two purposes: 1) they are just going to communicate more quickly among themselves next time. There was no formal network response to LoveBug, and it cost many people dearly. I know my former firm's mail servers were shut down for two days. And 2) they are going to keep little-known vulnerabilities quiet while they fix them--a good thing (see analogy at start of message). Relax--Slashdot community members must drink too much coffee. Chill out.
  • > microsoft would be scared to let a security audit of the code be performed by a 3rd party.

    Actually, Microsoft, Sun and many other closed source vendors do have their code verified by third parties in order to gain ITSEC [itsec.gov.uk] (now CC) evaluations.

  • Yeah, you don't get many messages sent through port 23, unless you are one of those skript kiddies who leaves messages as the filenames in anonymous ftp directories!

    But really, you have a point about the poster being one with a grudge, directing you at the target of that grudge. On the other hand, this illustrates where security through obscurity does work - misdirection. Sure, the first time someone looks at your hand instead of what you are pointing at, you are a sitting duck, but all the time the misdirection does work is time that no one scrutinizes the real security that's in the hat.

    +1 Mixed Metaphors

    Boss of nothin. Big deal.
    Son, go get daddy's hard plastic eyes.

  • > First, the only way the Open Source security philosophy really works is if people ACTUALLY (as opposed to theoretically) sit down and read the code for security flaws in its entirety. I would argue that in a great many cases, no one even approaches this level. Because the Open Source community has very little centralization of effort, there is going to be a great deal of redundancy. In other words, even if you believe that 1000 security "experts" will spend some time reviewing the code, they may well be looking at the same piece of code (which, in and of itself, can be a good thing), while leaving other pieces of code largely unscrutinized. Furthermore, I suspect that very few people truly give the code the time of day.

    What this probably protects from the most is security holes introduced intentionally by the authors, whether that is sanctioned by the vendor or not. Take the case of so many systems with backdoor passwords. Open Source exposes this, if someone were to be stupid enough to do it.

    > Second, while Open Source makes it easier for white hats to find flaws, it also makes it easier for blackhats to find and exploit flaws. This is particularly relevant if, as I point out, the code is not getting the right kind of attention from white hats.

    Everything is easier. But whether the proportion favors white hats more than black hats depends on how many of them are looking. Consider that most users of exploits use exploit tools they download, as opposed to discovering and coding them for themselves. I do suspect the whitehats way outnumber the blackhats.

    > Third, Closed Source can make it HARDER and DULLER to find flaws. Many people seem to assume that just because obscure products have been cracked, there is absolutely no redeeming value to a product being closed. In other words, at any given moment in time, if we could somehow have two parallel universes that would allow you to have the same piece of code (let's say the latest stable Linux kernel with all patches applied) in Open Source and Closed Source at the same time, without knowledge leaking either way, most reasonable people would prefer the Closed Source option.

    I would not make that choice. Of course exploits are harder to find with closed source. But this just results in a greater time delay before they are discovered. The number of blackhats is reduced somewhat while the number of whitehats is reduced radically, with closed source.

    > Fourth, security flaws are found all the time in Open Source code projects. A lot of them are presumably stable pieces of code that have already been put into production. These systems get hacked REGULARLY. Now this isn't to say the same doesn't apply to closed source, but you can't ignore the problem either way.

    Is it being ignored? I don't think so.

    > Fifth, many people constantly bring up the point "well if you just patch regularly...". While I agree that everyone SHOULD do this if possible, it's not always possible, and it's frequently not economical. If there is a piece of closed source code that hasn't had any published (or suspected) security flaws in 4 years of existence, while the competing Open Source alternatives have had many (constantly forcing their admins to patch), then that's a real issue for any competent admin.

    This seems to me to be a good argument for closed source. There is a time dampening effect by closed source that makes it possible for admins to avoid doing the patching. But I've found that with a lot of other good practices, this isn't that much of a difference.

    > Sixth, it's entirely possible for a Closed Source company to do a full internal security audit of their code. It may not be perfect, but it's better than nothing. Although I fully realize that hardly anyone does this, it'd be a mistake to ignore this as an option. If a company can get _most_ of the (presumed) benefits of an Open Source security audit without the corresponding exposure of their source code to blackhats (or at least less "risk" of that), then that might be very good indeed.

    It's also entirely possible for a Closed Source company to not do an audit at all, or to do a bad one, or to hire an untrustworthy auditor to do it. Open Source gives the end user the option to choose from available audits or hire their own. Granted, the choices are few, but in theory, open does open up this possibility.

    > In summation, this is not nearly as black and white as people portray it. It comes down to numbers and many other unquantifiable elements. A simple philosophy is not a one-time cure-all. For instance, as I have alluded to, if there are very few white hats reviewing the code (say 50), and those white hats are mostly replicating each other's work (say 15% efficiency), while any black hat with proper monetary motivation can put the effort into cracking easy-to-read source code, then you might well be worse off. The same goes the other way around: if a software company, as all too many do, rushes its product out with little to no review and depends entirely on obscurity, it might well use some routines that are well-known security problems and can be easily searched for....

    I generally have distrust for commercial software. The primary reason is because of the time pressure to "get it out the door" which ends up sacrificing things that need to be done, but get put off (often forever) in order to meet the deadline which marketing has already established.

    > The bottom line is that it is just as stupid to assume your carelessness will be automatically covered by "peer review" (or "Open Source") as it is to assume it will be covered by "obscurity".

    I would agree. Being open in no way makes something more secure. It provides the opportunity. The opportunity still has to be taken advantage of, and that isn't always done. And there are some totally lousy programs out there not even worth spending the time to audit.

  • You are one scary dude. Betchya own some black metal albums too. :) Nuff said.
  • Here's a couple of clues for all of you who freaked out about this: the world doesn't revolve around security announcements, and none of these companies are obligated to tell anyone jack-shit about their security problems. It's not like these companies were quick to share security information with anyone before. People will continue to find security holes in products, both open source and closed, and big companies will continue to drag their feet fixing them. No big change here.

    If you're really looking for a corporate group to fear, try the World Economic Forum. A thousand of the most powerful CEOs, bankers, politicians, media moguls, etc. meet in Davos, Switzerland every year to decide global economic policy, i.e. how to increase the flow of money from the lower and middle class to the upper class.

    And as for the poster who suggested that Bill Clinton should be shot over this why don't you try moving to the Congo, the president was just shot and killed there today. Let us know in a year or two whether you enjoy living in a country where policy is made with a gun rather than through free and open debate.
  • > The 19 board members, scheduled to meet Tuesday for the first time, eventually will determine how much of that information to share with other industries or the U.S. government.

    Add to this that there are already more private security personnel than police in the USA.
  • So now, when a hole in Oracle is discovered they can immediately put it in the M$ SQL server brochures to make their product look better :-)
  • What this probably protects from the most is security holes introduced intentionally by the authors, whether that is sanctioned by the vendor or not. Take the case of so many systems with backdoor passwords. Open Source exposes this, if someone were to be stupid enough to do it.
    Does it though? Sure, in any popular piece of Open Source code it's unlikely that I could hide a backdoor triggered by a trivial string like, say, "MY VOICE IS MY PASSWORD", whereas that is entirely possible with unreviewed closed source code. However, I'd argue that the existence of a significant number of security flaws in many Open Source projects is proof that a backdoor CAN be hidden, if so desired. After all, if a backdoor can be added and remain in Open Source code entirely as the product of an accident, what is to stop a coder from doing the exact same thing, only intentionally? If they're smart enough to code that piece of code, they're smart enough to plant a backdoor in there that can escape detection. Whether this backdoor requires a machine code overflow, a race condition, or a simple string is virtually irrelevant. What's more, I'd assert that while the bar may be raised normally for coding a backdoor, the opportunity for the hacker to insert MALICIOUS code increases by at least a factor of 10. In addition, entirely beyond the scope of "peer review", when these products are actually consumed by the user, it's rare that the consumer will himself be able to CHECK all the code. The vast majority of users either install with ./configure scripts and such automatically, or they use pre-compiled binaries/rpms--which are definitely vulnerable.

    > I do suspect the whitehats way outnumber the blackhats.
    Maybe, maybe not. That's an important assumption that the open source community makes and tends to pay little attention to. I'm not saying it IS, but it's quite possible that there is more incentive to be a blackhat (ethics aside) than a whitehat on certain projects. Is there any real empirical evidence one way or the other? Can you say with certainty that the black hat community has discovered a significant number of flaws long before the white hat community with closed source software? It seems to me that they're quite rare in both cases, at least in terms of what is publicised. And if it's not publicised, we have a hard time knowing.

    > I would not make that choice. Of course exploits are harder to find with closed source. But this just results in a greater time delay before they are discovered. The number of blackhats is reduced somewhat while the number of whitehats is reduced radically, with closed source.
    All of this comes down to numbers, which none of us really has. If it's a matter of mere delay and open source is so superior, then we'd expect there to be a clear difference empirically. I'd argue that this difference is not nearly as wide as many suggest, and often times is less than favorable for Open Source.

    Anyway, I don't have time to finish this argument now; I've got a bunch of work to finish. Maybe later...
  • I have to wonder, tho, if the original poster has a grudge against americanwicca.com? Call me cynical, but I suspect something like that...

    No, I'll call you ragingly paranoid--which is good, that's a compliment, I like that. :)

    At any rate, I sent off mail to the people over at Americanwicca.com, telling them that they might be the target of malicious attacks as the result of that Slashdot post. So we've given them some warning, which is about all we can do in this situation.
  • Every gardener knows that kids will walk through a hedge when it's in their way. Look at the hedges in the nearest park and the paths worn through them.

    Translate to computerese to fit your desires.

  • "Three can keep a secret if two of them are dead." -- Benjamin Franklin

    Good point. Even if these organizations do attempt to close ranks, it only takes one employee with access to the reports and willingness to leak them to ensure that outside parties "discover" the same holes that the club members do.

  • There is no such thing as a productive meeting with 19 entities. It will be on this order:

    (Mr. Jones) uh... uh... what's port 23?
    (Mr. Smith) (inaudible) Oh that's the Frabazz port.
    (Mr. Gates) When I was writing DOS...(inaudible) any ports at all.
    (Mr. Wilson) um...fscken kiddies

    And so on.

    What will come of this is blathersgate. These fellows will have a marvelous time pulling one another's puds and releasing statements. Nothing productive will emerge.

  • by scotpurl ( 28825 ) on Tuesday January 16, 2001 @01:05PM (#503673)
    I figure if a bunch of us throw in a bit of money, Slashdot could join this exclusive club, and then we'd get access to the reports on all the unpublished, undiscovered holes and bugs that the marketroids are hiding from us.

    So, I pledge $5/year for this endeavour.
  • Some evil haxor has hacked MSNBC so that it won't work with netscape.

  • So the big players share their security holes with each other? Will this stop DDoS attacks? Perhaps, if Cisco puts a patch on their routers to prevent fragmented packets from hitting Microsoft NT boxes (do they already? they should!), the world may be a better place. But what about the tiny security bugs that no one knows about but the industry big players? "Oh, that bug? I'll fix it next week sometime." Meanwhile, it's discovered by a clever 14-year-old and half the Internet's servers crash. The only way to improve security is to 'Put [security bugs] on the front page of every major newspaper for any hacker to see'. This way, THEY GET FIXED. No more committees. No more talk. No more PR BS. PUBLISH IT. FIX IT.
  • The only way to true security is through open source. You can't hide your security flaws behind a binary and expect people not to find them. Open source lets the entire community find security flaws - and patch them. The point is not to create a product which has no apparent security flaws; it is to make a product with no security flaws, period.

    It has been proven time and time again, in life and with computers (especially NT), that trying to hide security holes just doesn't work.

    So, keep your code open and let others find its flaws.
  • Do you sleep well at night?

    I sure wouldn't, knowing that anyone who decides to get information about my website would be able to crack it.

    Let's compare this to a game of hide and seek.
    If you have ever played hide and seek, you know that no matter where you hide, you will be found. Unless you are in a place the seeker cannot get access to, you will eventually be found.
  • Yes... in your example, in the physical world, if you can't find it you can't have it.

    That's also its weakness. The main point is that there is no way you would even know. As with most security holes in closed programs, no one knew... or really had the capability to know, until that one person found it.

    It may have taken a while, but things like "Netscape engineers are weenies" (the magic string buried in a Microsoft DLL) do get found.

  • Let's take a look at this page:

    http://www.w3.org/Consortium/Prospectus/Joining [w3.org]

    Hmm, looks like joining W3C costs 50 grand a year for a company, ten times the amount proposed by this security group. Non-profit/educational access costs $5k annually, the same price as this security group. How come nobody accuses the W3C of being an "information cartel"? Simple... it's not, and neither is this group. $5k per year is nothing for a company that is interested in security issues, even a small company.

  • by Shoeboy ( 16224 ) on Tuesday January 16, 2001 @01:20PM (#503680) Homepage
    How is keeping a vulnerability secret until you've got a fix for it "security through obscurity?" There's a big difference between releasing source and releasing vulnerabilities. Releasing vulnerabilities only guarantees that they'll be exploited.
    Even the mighty Linux community sometimes keeps vulnerabilities secret until a fix is released.
    What makes this security through obscurity rather than good security practices?
    Won't some strong virile slashbot please explain it to timid pert little me?
    --Shoeboy
  • We should be fair and unbiased. There is nothing wrong with security through obscurity. It has been a helpful element in many a security arrangement, ever since Blackbeard buried his treasure in the Caribbean. Thanks!

    I am being fair and unbiased. Security through obscurity never works.

    Read a few books on cryptography, and then come back with a clue.

    Somebody as naive as you should NOT be using the ship name of an AI several billion times smarter.

    If Banks were dead, he would be turning over in his grave.

    PS. Nice Troll.
  • IIRC, Senator Lieberman endorsed their work and asked them to continue doing what they do best. This new group is a reactive measure, spouting the same tired old tripe: posting security issues on the net encourages hackers. This is complete BS. Hackers will find a way to hack, with or without L0pht [lopht.com], and with or without the "IT-ISAC".
  • Okay, so now if there's a major security flaw in something, these people share it among themselves, then give out a 'product update' to their customers while glossing over the details, so that the users of the software are completely unaware that there was a major flaw / that they could have already been compromised / what crappy software they are running.

    Imagine if the whole Outlook-will-execute-any-old-VBScript fiasco had been handled that way (not that I have to worry, with the OS I run). The public should know about these things; this just gives corporations another way to cover shit up... next they'll be forming their own government ;)

  • A sterling example of an historical BOFH.

    Blackbeard had other deterrents for those who knew his secrets and might be tempted to steal 'is loot. Your basic shooting, stabbing, beating, keelhauling, or walking the gangplank all have their place.

    --
  • by nyet ( 19118 ) on Tuesday January 16, 2001 @01:43PM (#503685) Homepage
    Except nothing gives a company incentive to fix a vulnerability "right" like a public leak of the details.

    Trust me. I worked for a company that has been featured on BugTraq once or twice. If not for BugTraq, our "fixes" of the vulnerabilities would have been limited to the simple work-arounds that our clients wanted. The holes would NOT have been closed fully; it's just too much work.
  • Considering this was brought on by the government (President Clinton, to be exact; read the article), I doubt too many government officials are going to oppose it, even if they get loads of mail over it. Unless those loads of mail contain as many dollars as the kickbacks and other money they receive from the players involved in this move, I really don't think anything will be done to prevent it. Not to mention that the government seems to really be working with the big multinationals to make them bigger and more consolidated. The reasons? Well, if you want the conspiracy reason, probably because a few sources of information are much easier to regulate (i.e. control) than large numbers of smaller sources of information. One large corp (supported and enhanced by the state) would be much easier to watch than the multiple small corps that exist today. Not that government usually watches all that closely. Unless, like MS, the corp in question gets a little too greedy and leaves the government out of one or more of its schemes.

    Most corps have caught on. Work with and through the government; don't work against it. Maybe MS has finally adopted this idea too. With Gates out of the real lead, perhaps we will see them making some smarter moves.

    But seriously, this probably isn't that big of a deal. Just an attempt to 'open up' on what they consider industry secrets with others within the industry. Unless they start colluding on prices (price fixing) and features (you offer this, I offer that, and customers have to purchase both), I don't think the government could intervene even if it wanted to. But I could be wrong on that. My legal interpretations occasionally are questionable.

  • No, you make sure your site is secure by locking down ports 21 and 23 for starters (telnet and mail).

    *cough*ftp*cough*telnet*

    I need some *cough*smtp=mail=25*cough* medicine.
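
    (For the record, the relevant /etc/services lines:

        ftp             21/tcp
        telnet          23/tcp
        smtp            25/tcp          mail

    So port 21 is FTP, 23 is telnet, and mail rides on 25.)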

    But your points are well taken...I have to wonder, tho, if the original poster has a grudge against americanwiccan.com? Call me cynical, but I suspect something like that...

    James - I summon the unholy demons of apathy, sarcasm and cynicism!

  • It does come down to numbers, and in my experience, open source software like Linux fares a lot better than Windows when it comes to security. The reason for that is speculative; it is probably considerably more intangible and indirect than who does security reviews on the code, and when.

    One reason I suspect that closed source software is worse off when it comes to security is that a lot of the security risks associated with closed source software like Windows are ill-thought-out hooks for future enhancements and the occasional deliberate back door; that's the kind of stuff developers catch on an open source project when they look at the daily check-ins. Another is simply that companies like Microsoft are not forced, through exposure to their customers, to have any consistent coding standards or conventions; their programmers can go on merrily using "gets", even though on an open source project, silly bugs like that would be caught very quickly.
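
    (A minimal sketch of why reviewers pounce on "gets" -- the buffer size here is invented, but the flaw lives in the interface itself, which has no way to learn it:

        #include <stdio.h>

        int main(void) {
            char buf[16];

            /* gets(buf) cannot know that buf holds only 16 bytes, so any
               longer input silently overruns the stack buffer -- the
               classic exploitable overflow. */

            /* fgets() takes the buffer size and truncates instead: */
            if (fgets(buf, sizeof buf, stdin) != NULL)
                printf("read: %s", buf);
            return 0;
        }

    Same job, one extra argument, and the overflow is gone.)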

  • I can see why you say that, but I'd personally hesitate over calling it `good', rather than `not quite as bad as it could be'.

    In the example given, the solution is more likely to make life faster so you can dump the whole frame down to people, or guarantee that they won't drop a packet in time. Kill the problem at source, don't work around it.

    Ever wondered how ssh and gpg manage to stay secure when the sources are available and the private key passphrase and data get stored in memory?
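
    (Part of the answer, as a sketch only -- this is not ssh's or gpg's actual code: lock the pages holding key material into RAM so they are never written out to swap, and scrub them before releasing:

        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>

        int main(void) {
            char passphrase[256];

            /* Keep the passphrase off the swap device, where it could
               outlive the process.  (mlock may require privileges or a
               raised RLIMIT_MEMLOCK.) */
            if (mlock(passphrase, sizeof passphrase) != 0) {
                perror("mlock");
                return 1;
            }

            /* ... read and use the passphrase here ... */

            /* Zero it before unlocking.  Note a clever compiler can elide
               a plain memset of a dying buffer, so real implementations
               use tricks to force the write. */
            memset(passphrase, 0, sizeof passphrase);
            munlock(passphrase, sizeof passphrase);
            return 0;
        }

    Memory is still readable by root and by debuggers, of course -- the point is narrowing the window, not eliminating it.)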
    ~Tim
    --
    .|` Clouds cross the black moonlight,
  • Sadly, I bet it's not just $5k. It's probably also signatures on a legal document promising not to disclose this information to 3rd parties not signatory to the agreement.
  • It's a genuinely hard call whether to fully disclose every security hole or to shut up about it until the hole is fixed (or forever, whichever comes first).

    Where's the incentive for a corporation to fix a security hole if they know that they can effectively keep knowledge of the existence of that hole a secret? Fixing problems costs money, covering something up is (usually) easier (i.e. cheaper) if you can catch the problem before knowledge of it grows out of hand.

    Your point about script kiddies is well taken; however, you have to admit that nothing motivates a corporation to fix a problem more than public attention on that problem.

    My opinion (for whatever it's worth), is that attempting to keep knowledge of flaws in your product a secret is self-serving and unethical. At the very least, even if you don't have a fix for the problem, your customers deserve to know that the problem exists and if there is any way they can work around it. The corporations are *supposed* to be in business to serve their customers, not themselves.


    --

  • Every sysadmin knows that script-kiddies will crack their way in through a hidden backdoor/bug/hole/feature when it's 1337 to do so. Look at the various Microsoft products in the nearest Bugtraq digest and the threads proliferating about them.

    --
  • What's not working? My Communicator 4.73 Solaris implementation seems to work ok there.

    url: http://www.msnbc.com/news/default.asp

  • Damn length limit. Anyway, what the hell is so damn bad about security-through-obscurity? Why is it conventional geek wisdom that it is "really no security at all"? When items of great value are transported, how is it handled? There is great physical security in the armoured truck, but they also use obscurity, in that they don't advertise the time of the move in the New York Times. In fact they don't even tell the driver and the others until the day of the move, and the person actually carrying the item may not be the guy with the briefcase handcuffed to his wrist, but could be any one of the other guards. You need to KNOW who is carrying it to steal it. You want as few people as possible to know this.

    I have always used the analogy that cryptography is like a safe: it limits physical access to the data/material. But you don't put your safe in the middle of a room; you hide it in a closet or in the floor, etc. This provides another layer of protection, of varying effectiveness.

    Some famous bank robber (reportedly Willie Sutton) was asked why he robbed banks. He replied, "Because that is where the money is." Even though this was a humorous remark, it makes my point: because everyone knows that banks have money, that is where robbers go to get it. But what if you couldn't tell a bank from any other building? Then you would have to find out which damn building to rob before you could actually rob it. This ends my rant.
  • The problem with security by obscurity can perhaps be understood best through your reference to Blackbeard's treasure. Blackbeard buried his treasure with the thought that if his enemies couldn't find it, they couldn't have it. The problem with relying on this as a security measure is simple: if they can find it, they can have it -- you don't have any other way to protect it -- and one fine day when you go back to reclaim your treasure you discover some elderly guy from Florida with a metal detector has waddled off with your ill-gotten gains. Yo, ho, ho, and a bottle of rum.

    The situation is much worse with respect to the internet, in which there is a small (?) army of script kiddies, all armed with metal detectors and pickaxes, randomly digging holes all over the place for the sheer destructive hell of it, and in which you've conveniently placed a sign (your URL) on top of your treasure. The question isn't whether one of them's going to find the treasure, it's how far will they have to dig and will they be able to break the lock on the treasure chest when they get down there.
  • by IntelliTubbie ( 29947 ) on Tuesday January 16, 2001 @07:32PM (#503696)
    ... they're colluding to fix output and prices. Laws against collusion and cartels are NOT made to prevent corporations from "patting each others' backs and shunning the upcoming little guy." They're made to prevent producers from splitting the market by limiting output, creating shortages, high profits, and above-equilibrium prices (e.g. OPEC). Unless this is happening (and I doubt it is), this isn't any different from any other industry group, such as the collective of milk producers that pays for those clever "got milk?" ads.

    Cheers,
    IT
  • Reminds me of of the US raid on Tehran. The special-warfare troopers were out in the middle of the desert, in a spot so remote nobody would be there looking for them... and they got discovered by a busload of people who stumbled across the area by virtue of getting lost.

    Moral of the story: security through obscurity doesn't work. It's a numbers game, a calculated risk, and the risk involved is far higher than other more proactive forms of security.

    I don't follow your logic: you're saying that because you know one story where the government failed to hide an operation, security through obscurity doesn't work. If you can't come up with more than a handful of such stories for every war we've had, then the government has obviously had far more success than failure with the technique.

    Also, if a single counterexample proves a concept worthless, then security without obscurity has had plenty of its share of defeats too.

    Would you be willing to do all your online banking if your bank told you, "We don't bother to encrypt your financial records or firewall our system from malicious hackers--but don't worry! All the data is kept on a URL so obscure nobody will ever come across it!"

    Security through obscurity doesn't necessarily mean that their security IS obscurity. They would have regular security measures in place; it's just that they wouldn't release exactly what those are.

    2. "We don't use any security measures to speak of"

    Same thing as above: they're not saying that their obscurity is their only security, only that they believe obscurity enhances it.

  • There have been plenty of flaws in Microsoft products that took the company a few months to publicly acknowledge, such as that nasty one where people could execute code via a buffer overflow in Outlook. I guess they can use this secret society to communicate with partners about flaws without admitting their guilt to the world.

    --
  • From the article: "THE OVERRIDING GOAL is to protect ourselves from cyber-hazards, whether they be deliberate attempts or accidental events," said Guy Copeland of Computer Sciences Corp.

    An accidental event?! I can see it now: "Whoa, what was that? Did I just overflow a buffer or something? What the fsck is that root shell doing there????"

    --
  • That wasn't a troll. I was pointing out the flaw in the logic of the poster.
  • So now these large corporations are going to be sharing vulnerabilities with each other. I don't know about everyone else, but I trust Microsoft almost as much as I trust the 12-year-old down the street trying to infect users with Back Orifice.

    These companies are composed of people, people who could leak these newly found vulnerabilities to the script kiddies anyway, or use the vulnerabilities themselves.
  • Unless the IT-ISAC can somehow contain such technical experts, the holes in their system will continue to be an open book.

    Ok. Done...

    IT-ISAC Reporting Society: Join our reporting society and earn $400 cash for any new security exploit that you find and report! Terms & Conditions: ..... blah blah ..... 5(g) Reporter agrees to refrain from disclosing to any third party and refrain from publishing, communicating, transmitting, or posting the Exploit, in any manner, other than as provided above in 2(a) Reporting Procedure. ..... blah blah .....

    If you've found something, unless you have a strong personal interest in free security information, why wouldn't you want to make a few bucks?

  • I have the strong impression that all these coalitions will make things worse.
    They will understand that, even if it won't improve the actual situation, a "security through obscurity" environment will at least give them some sense of power.
    They will put aside all rivalries and join together in an alliance against a common enemy: insecurity. But how will they come to understand that the great majority of security dangers are due to human incompetence?
    Will they virtually stand up against idiocy? Will they fire their own employees because those can be the weakest point of their network?
    Or will they just cover this up and create a virtual enemy, pumping up the figure of the 'oh-my-god-it's-scary' so-called hacker?

    Now they are rivals, but soon they'll be together (along with who knows how many other companies) against freedom of knowledge, against the fact that human beings must learn through their mistakes, and so on.
    My only hope? That they'll soon break this alliance, because there is no such thing as a common enemy, and if they don't understand that, they'll just tilt at windmills until they tire. They'll create smaller internal alliances, they'll fight each other's assumptions, and in the long run nothing will change.
    Perhaps I'm hoping for too much...

    As the Latins said, "divide and rule": "The 19 founders represent some of the industry's largest firms, but they come with historic rivalries. Cisco and Nortel Networks compete bitterly in sales of computer-networking hardware. Microsoft was found to have violated antitrust laws to influence contracts with AT&T and IBM; Oracle has admitted to hiring private investigators to dig through the trash of groups supportive of Microsoft. Can these companies, in an industry known for unusually aggressive executives, ever trust each other?"
  • by Chuck Flynn ( 265247 ) on Tuesday January 16, 2001 @01:09PM (#503704)
    This is exactly the sort of thing antitrust laws are intended to prevent: collusion among market dominators, patting each others' backs and shunning the upcoming little guy. If you're not a major conglomerate like Oracle or Microsoft (much less AT&T and others), you can't possibly break into this information cartel. Don't people understand that information is the currency of the new age?

    Having a cartel like this is not only unnecessary; it's plain wrong. It simultaneously flies in the face of libertarian notions of self-help and of liberal notions of an omnipotent government that can protect citizens (er, corporations) on its own. Like so many areas of our economy, things were just fine until the corporations decided to start merging into one giant monopolistic hairball. I urge you all to write your congressmen and senators. This must be put to a stop.
  • by Shoeboy ( 16224 ) on Tuesday January 16, 2001 @01:09PM (#503705) Homepage
    Members that discover a new cyber-threat -- a new strain of virus or a break-in method that foils existing electronic defenses -- will be able to send detailed warnings to the rest of the group via e-mail, telephone, fax and pagers.

    I wish I had $750,000 to sink into a non-profit center so that I could email, telephone, fax and/or page my friends when something important happened.
    *Sigh*
    Too bad only big business has these capabilities. I guess I'll go feed my carrier pigeons now.
    --Shoeboy
  • "Tech firms team up against hackers"

    With the current boom in open-software products and the increased visibilty to ordinairy computer users some of the Industries (monopolistic computer firms) decided to team up to be able to tackle these problems. "Linux is getting too big and these hackers are causing us way to many losses" said one OS rep (who then took Jobs style approach and started cursing!).....

    ohhhh wait wait... stop the presses, thats the wrong story... this ones about crackers......
  • I noticed they don't have a website yet (or didn't publish it anyway) - gee I wonder why - because it would become target #1 for the hacker community?

    I can see the news story now: "The Information Technology Information Sharing and Analysis Center website, used to share vital security information among members including Micro$oft, Oracle, Inhell, and more, has been shut down after it was discovered that hackers had broken into it months ago and had replaced the real security and hacker info with false information, making it even easier to gain access to systems from these companies."

  • by autocracy ( 192714 ) <slashdot2007@sto ... .com minus berry> on Tuesday January 16, 2001 @01:10PM (#503708) Homepage
    Based on what I can tell from the report, this "members only" group sends warnings only among its own. That means that if one of these companies finds this nasty virus, all the other companies find out but we don't. When you look at the list of companies that have joined, you'll note that most of them have something to gain from knowing about such a virus before anyone else. Take for example Symantec, who makes antivirus programs, and VeriSign -- who will inevitably bring up the "if you signed all your messages with our keys, then people would know it wasn't from you because you didn't sign it" junk. That in itself may be a good thing (encouraging crypto), but they'll find a way to twist the facts so that only VeriSign gains from it. And don't tell me otherwise: these companies are run by CEOs who worry more about how fat their wallets are than anything else.

    Another way this is bad: we have CERTs for a reason -- to deal with this kind of thing. By forming this "coalition", they're further fragmenting the system of disaster recovery. CERT.org [cert.org] was created some time ago for exactly this, and it doesn't cost $5k a year to get warnings. It's free.

    Propaganda is the best term for this, and marketing is a close runner-up. If they really want to team up and help stop attacks on computer systems, they can work with everyone else instead of creating a members-only club.

    My karma's bigger than yours!

  • Based on what I can tell from the report, this "members only" group sends warnings only among its own. That means that if one of these companies finds this nasty virus, all the other companies find out but we don't.

    Yes, that's true, but I have doubts about how long this group will LAST. Let me explain by first quoting from the article:

    The 19 founders represent some of the industry's largest firms, but they come with historic rivalries. Cisco and Nortel Networks compete bitterly in sales of computer-networking hardware. Microsoft was found to have violated antitrust laws to influence contracts with AT&T and IBM; Oracle has admitted to hiring private investigators to dig through the trash of groups supportive of Microsoft. Can these companies, in an industry known for unusually aggressive executives, ever trust each other?

    Distrust and fear will likely keep this group from taking off.

    If a company with billions of dollars in revenue had some inside dirt on one of its multi-billion-dollar competitors... "Hey! With this info, we could bring X to their knees. Nah, we couldn't do THAT!" Yeah, right.

    As soon as it even APPEARS that something like this has happened, the whole group would likely begin to collapse from distrust and fear of having the same done unto them. A little less is revealed, and then a little less. Heck, the whole idea of the group is to keep information from others they don't trust... just how long can they keep trusting their greatest competitors?

  • Since all the "What is CERT for?" and "Bugtraq rocks my scary little world" posts seem to have been made, I thought I would point my slashbot tendencies at the Treaty of Rome.

    <SLASHBOT>
    The EU will soon be *easily* the largest economy on the planet (except China. OK, maybe India. You know what I mean). 500 million eager consumers with shedloads of cash. Enough cash to support some *very* fat lawyers. In the EU, we send our fattest, most offensive lawyers to Strasbourg, where they can do the most harm.

    Then we have this little thing called the Treaty of Rome, which has much the same purpose as the US Constitution, except you can't fit it on a sheet of A4, no matter how 'leet your PostScript skillz are.

    Article 85 of the Treaty of Rome says some interesting things [antitrust.org].

    One of the things it explicitly forbids is arrangements to establish contractual conditions that bear no direct connection to the subject of the contract, like tie-in clauses.

    Now, if global giants like Sun, Cisco, Microsoft etc. use a forum like the one they have just set up to restrain trade, you wouldn't need a lawyer to win an antitrust case against them. My blind old dog (if I had one) could win it.
    </SLASHBOT>

    So, there you go. If they do *anything* that pisses off the EU commission, they'll get nailed to the proverbial tree.

    For those too stupid to work out how to get rich here: all you need to do is start up a tech company that relies on one of their products in a way that directly competes with them or one of their "valued partners", wait for a security flaw to be announced, prove that they did not disclose it to *all* their customers at the same time, and *BLAMMO!* -- a lot of fat lawyers get even fatter over a period of several years.

    If I had ~50 million Euros to burn, I'd do it.

    Share and enjoy.
  • I agree that the "Information Cartel" is out of control. And I agree that politicians aren't beholden to the people anymore. However, I disagree that we should go on a fucking killing spree just because we feel like it.

    The stupidity and insanity displayed by the current government of the US of Corporate America is enough to piss anyone off. But let's be realistic. We don't need the politicians to fear us. We need them to respect us. And having someone kill politicians is just going to give them an excuse to further erode our freedoms. If you give them a reason, they will leap at the opportunity.

    I agree with your sentiment, but the time of killing the president just because your pissed off at our lousy government I believe to be at an end. Here's an idea, how about trying to convince the rest of America to get off their ass and vote their concious. Don't listen to the assholes that say we are "ruining" our society if we vote for anyone other than the big duopoly candidates and vote for someone that will make a change.

    We the American people are just as responsible as the assholes that are the presidents and congressmen of this nation. Sad as it may be, you shouldn't kill them just because they didn't do what we wanted them to do when we voted them in. We will not earn their respect, or their fear by killing off a few of them either. All we will earn is more breakdown in the basic cloth of freedom. The reasons are simple. They already think we are criminals just because we breath air. If we give them that one little excuse, they will slam down the iron fist they have so long been hiding behind the velvent glove.

    However, having said that, I think that a full-scale revolution might do the trick. It will take more than just removing the president or any other "key" members. Until you eliminate the entire process, it's going to remain the same. But let's face it. Most American's are obsessed with laziness, and revolutions are hard work. The American people, the people that should be concerned with the constant erosion of their freedoms (in the interest of protecting them from their own stupidity), are far, far more interested in sitting on their couches, throwing back a few doritos and beers, and watching the latest garbage the information cartel is shoving down their throats through the "magic" box.

    Re-educate the masses. Eliminating the stupidity at the top will not eliminate the stupidity throughout the system. That stupidity is rooted in the American people themselves. Hopefully someone can figure out a way to wake people up. If not, I'm afraid our children are going to be left with a shitty world.

  • This is another of the disturbing security trends I've seen recently: the way some companies -- and in this case several together as a group -- turtle in the face of security threats.

    If you ask me, there should be less reaction to this sort of thing and more action. I don't hold a lot of faith in the big companies any more. I believe in the little fellows who work on stuff like the BSDs (now -they- understand security issues).

    Hell, that Interbase backdoor wasn't dealt with by Borland/Inprise, but by OSS hackers. I say bring security concerns into the light, and let some more open minds worry about things like this. As a user and developer, I would like not to be left in the dark by these closed-source, closed-minded people.

  • You're missing the point. Having access to the source does not make it secure, and no one is making that argument. The argument is that access to the source provides you with the opportunity to make it secure -- an opportunity you absolutely, positively do not have with closed source. That's the entirety of the argument.

    Talk about letting the rhetoric begin. You build up this big straw man and expect people to knock it down. Well, OK: "poof", your straw man is blown down.

    The Open Source argument is about access. It's about giving everyone (yes, even the bad guys) access to the source code. In a closed source world, the bad guys may already have access to the source code, but you certainly do not. With open source, the opportunity to find and fix things, such as security vulnerabilities (and backdoors), exists.

    If you can't grasp this, then you've missed the entire point behind the free (as in speech) software movement.

    The "security thru obscurity does not work" argument refers to security that depends on obscurity to succeed. If your entire security model rests on the proposition that no one must ever find out how it works, then your security model fails the moment that obscurity evaporates. Which is a bad security model. Plain and simple.
    Python

  • by nestler ( 201193 ) on Tuesday January 16, 2001 @01:11PM (#503733)
    This business of keeping security flaws secret amongst members of this "elite" group is hardly meaningful given that the bulk of security flaws in these companies' products are probably found by people who are outside of the group.

    This policy will only matter in the event that someone within one of these companies is the first person to discover the flaw.

    Given that many flaws will be found by people outside of this group, and that it only takes one source to leak a flaw, I doubt this supposed secrecy will be very secret.

    You're missing the point. Having access to the source does not make it secure, and no one is making that argument. The argument is that access to the source provides you with the opportunity to make it secure -- an opportunity you absolutely, positively do not have with closed source. That's the entirety of the argument.
    First off, it's my point. I can't miss my own point, in terms of direction, unless I'm totally incompetent.

    Second, the reason I make my point is that there are a lot of different mantras that are OFTEN repeated by open source advocates, so-called security buffs, slashdot zealots, etc. Just because you have not heard them does not mean they do not exist. It's awfully presumptuous of you to ASSUME that you have heard it all. If you doubt what I say, then simply go to numerous forums, even slashdot, and read them more carefully, and I assure you that you will hear words to that effect.

    Third, I submit to you that Open Source's contribution to security is negligible for 99.99% of the population, if the "entirety" of the argument is that "YOU" (as an individual, without the hyped "sharing" of security work among individuals) can fix everything yourself. In other words, unless you know C, security, kernel architecture, etc. well, the odds are that your argument simply does not apply to you. What's more, even if you have the skills to detect a bug or two, you probably don't have the inhuman ability to fix them all -- and it takes just ONE to send you up shit creek.

    Fourth, many people CARE about my point, because I'm arguing effective security--not your moralistic "liberty", "freedom", or what have you. The upwards moderation would suggest that at least a few people think my comment worthy.

    Fifth, I have other related points that apply directly to the comment I was replying to, one level above this (his viewpoint ties into one of the mantras). For instance, he essentially stated that it's impossible to put a backdoor in Open Source code, while it's possible in Closed Source. There is no real defense for that position, and I attacked it.

    The "security thru obscurity does not work" argument refers to security that depends on obscurity to succeed.
    No, that depends entirely on who you ask. Many people take "security through obscurity" to mean that the only reason closed source software is secure at all is that it's obscure. Thus any closed source software is perceived as insecure and, conversely, open source software is necessarily (more) secure.

    If your entire security model rests on the proposition that no one must even find out how it works, then your security model fails the moment that obscurity evaporates. Which is a bad security model. Plain and simple.
    Sure, I'd generally agree with that statement. However, it is a little too broad; it depends on the environment, the users, etc. Much like the argument for certain cryptography: if cracking it costs more time, effort, and resources than what you could gain, it IS effectively secure. So, plain and simple? No.
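
    (To put invented but representative numbers on "costs more than you could gain":

        #include <stdio.h>
        #include <math.h>

        int main(void) {
            /* Hypothetical attacker: a trillion brute-force guesses/sec. */
            double guesses_per_sec = 1e12;
            double keyspace = pow(2.0, 128.0);   /* ~3.4e38 possible keys */
            double years = keyspace / guesses_per_sec / (365.25 * 24 * 3600);
            printf("exhaustive search: %.1e years\n", years);  /* ~1.1e19 */
            return 0;
        }

    The algorithm can be completely public and the system is still effectively secure, because the attack costs more than the attacker could ever recover.)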

    Likewise, the same applies to Open Source code (as so many people ignore). You can't merely code something, make it open, and just trust it to be secure -- no matter how long you wait. Which raises another point (if you accept the previous position): you MUST trust the developers if you CAN'T entirely trust the "peer review" process (or yourself).
  • I think that it's great that companies want to better protect themselves. I like that they've taken the initiative and at least formed this new group. What I don't like is how they will protect themselves, and only themselves.

    Personally, and maybe I'm off base here, I think a more public forum -- though significantly more discreet than the modern media -- would better suit addressing security issues than a privately vested group. I mean, great, now all the "big" tech companies are helping to cover each other's asses. But who's looking out for the mid-sized companies, the small companies? Sure, we could say that the big fish are going to be targets more often, but that's really narrow-minded and a bit selfish.

    Anyway, I'm glad to see this happen, but I would feel better knowing that they were looking out for more than just themselves. Perhaps I'm becoming more idealistic lately? I don't know. Perhaps I misread what the article was saying? Anyway, there you have it, my (our) take on things.



    Looks like we missed out on some juicy patent discussions whilst we were out... damn.

"Experience has proved that some people indeed know everything." -- Russell Baker

Working...