Feature: Obscurity as Security

Matthew Priestley has taken a break from slaving for the man to write us a piece where he takes on the conventional wisdom that Security through Obscurity isn't secure at all, and tries to argue that sometimes it is. Click the link below to read it. Lots of interesting stuff and some good examples. It's worth a read.
The following was written by Slashdot Reader Matthew Priestley

Obscurity as Security

Disclaimer: The author of this paper works for Microsoft, but his opinions may not be those of Microsoft. In fact, they aren't. The author hereby declares that nobody important is even aware of his existence and that the closest he has ever come to plotting with Bill Gates on the Master Plan was when they used adjacent urinals this one time. The author did not peek.

0 Introduction

With the popularity of the open-source mindset, a general contempt has drizzled upon all forms of obscurity. The concept of security through obscurity (STO) in particular has been decimated. Security through obscurity, which relies on the ignorance of attackers rather than the strength of defenders, is dead in all but practice. The victory of the opposing full disclosure approach is so complete that proposed tactics die at the mere hint they are a form of STO.

This paper suggests security through obscurity can and does work in certain strictly limited ways, and should not be eliminated unthinkingly from the admin's arsenal. It further implies that the boundaries between STO and 'real' security are blurry and deserve evaluation. However, this paper in no way proposes obscurity as a method for keeping secrets in the long term.

1 Full disclosure does not apply to instantiated data

Instantiated data - the data used by specific instances of an algorithm - do not fall within the scope of full disclosure. Were this not so, then even the simplest password would violate the ban on security through obscurity. Passwords are secrets known only to their creators, and password entry is commonly obscured, as in the case of the 'shadow' login of UNIX. While the login protocol may be open, passwords themselves are a form of STO, with obscurity localized in the password string.

Instantiated data are exempt from full disclosure because the risk from their failure is limited. When a script cracks a password, the damage done to the secure system extends only as far as that password's scope. The cracker cannot use the compromised string to gain power directly in another system, even if that system runs the same password protocol. Nor can anything be inferred about the value of one password merely from the value of another with equal or lower permissions.

A similar example of instantiated data obscurity is the private key that forms the basis of asymmetric cryptography. So obscure is this information that it is rare for even the owner to be familiar with its precise value. But such obscurity is a necessary element of modern security schemes. Strong security does not eliminate obscurity - rather, it localizes obscurity to instantiated data. The phrase in cryptology, 'carry all security in the key' might be better phrased 'carry all obscurity in the key'.
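To make the localization concrete, here is a minimal sketch in Python (a modern salted-hash construction standing in for UNIX crypt(); the names and parameters are illustrative assumptions, not any particular system's implementation). The verification protocol is entirely public; the only obscure element is the password string itself:

    import hashlib, hmac, os

    def make_shadow_entry(password):
        # Return (salt, digest) for storage; neither needs to be hidden.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest

    def verify(password, salt, digest):
        # Anyone may read this code and the stored entry; only the
        # password itself is obscure (instantiated data).
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)

Cracking one such entry compromises only that entry, which is exactly the limited failure mode described above.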

2 Full disclosure does not apply to time-limited secrets

Secrets that expire after a short lifetime can be protected by a wider array of techniques than long-standing secrets. The defense of information that will be irrelevant in a matter of hours or days may not warrant fully peer-reviewed security. Consider the famous Navajo code-talkers of World War II. Among the Americans coordinating the attack against Japanese-held islands in the Pacific were a number of Navajo Indians, who spoke a slangy version of the complex Navajo tongue. Commands from HQ were issued through these code-talkers, who encrypted and decrypted with an alacrity that belittled the automated methods of the day. This is an excellent example of time-limited security through obscurity. Secret languages are excellent security in the short-term, but however cryptic Navajo may be, it is a code subject to human betrayal. Use of Navajo against the Japanese much beyond the 3-year window of the war would have been unwise. But because the secrets of American strategy in the Pacific were irrelevant after the conclusion of the fighting, the long-term weakness of obscure Navajo as a security measure was unimportant.

3 Obscurity serves as a tripwire

Perhaps the classic example of wrongheaded STO is the administrator who modifies his web server to listen on a nonstandard port - thereby confusing attackers, as the theory goes. Considering the degree to which tasks such as port scanning can be automated, the naivete of this defense seems plain. The cracker might be forced to check all 64512 unreserved ports, but eventually the concealed web server will be found. This appears to be a weakness of STO, but if manipulated correctly, it is in fact a great strength. Imagine that our same admin had also invoked a tripwire script and set it to listen on one or more unused ports. When the tripwire is probed with a SYN packet from a cracker trying to locate the web server, instantly the system goes to full alert. The packet is logged and the admin's pager sounds like an alarm.

Such tripwire approaches work because they do not expect obscurity to keep information hidden. Rather, they obscure information as a ploy to force invaders into showing their hand. Because the obscured implementation differs on each system, crackers must resort to guess-check scanning before attacks can commence. But tripwires are deployed throughout the system, anticipating this very move. Running an automated kit suddenly becomes a risky proposition, and even talented crackers must gamble on, for example, whether 'root' is really the name of the primary account or merely a hotline to the authorities.
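A minimal sketch of such a tripwire in Python follows (the decoy port, log file, and paging hook are hypothetical placeholders; this userland version logs completed connections rather than sniffing raw SYN packets):

    import logging, socket

    logging.basicConfig(filename="tripwire.log", level=logging.INFO)
    DECOY_PORT = 8000  # hypothetical unused port chosen by the admin

    def tripwire(port):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen(1)
        while True:
            # Any connection attempt here is, by construction, a probe.
            conn, (addr, src_port) = srv.accept()
            logging.info("tripwire hit on %d from %s:%d", port, addr, src_port)
            conn.close()  # a real deployment would also page the admin

    if __name__ == "__main__":
        tripwire(DECOY_PORT)

Several such listeners, scattered across otherwise unused ports, turn an exhaustive scan into a noisy one.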

Lighthearted implementations of this approach are a staple in the popular "Indiana Jones" films. In one scene, Jones is confronted with a hallway of lettered tiles, all seemingly alike. To cross safely he must step only on those tiles with letters corresponding to the secret word 'Jehovah'. The penalty for a misstep is to crash through the floor and plummet into a gaping pit. Attackers not privy to the password would find an exhaustive search less than optimal in this case. When traps are mingled with genuine data, STO can be a powerful disincentive. Such measures do not make a given machine resistant to breach in the long term, any more than medieval moats could ultimately protect their castles. But like moats, tripwire obscurity provides a critical buffer against attackers, allowing defenders room to breathe.

4 Asymmetric cryptography exhibits traits of STO

Despite the notion that asymmetric cryptography such as RSA is 'real' security, in some aspects these methods resemble STO. Indeed, this entire class of cryptography is founded on the hopeful guess that a certain mathematical problem is intractable. The back door into cryptographic methods that rely on multiplying primes is, quite simply, to develop a swift means of factoring those multiples. This NP-time problem must be solved before a private key can be derived from its corresponding public key, and the notorious difficulty of NP problems leads some supporters to characterize asymmetric cryptography as 'provably secure'. This is far from the case - there is uncertainty among mathematicians as to whether this problem will even prove non-trivial once approached from the right angle. Startling progress has been made in solving similar 'impossible' problems using innovative ploys - for example, DNA computers can now solve the Traveling Salesman problem in linear time. Given that asymmetric encryption is used widely in the world's e-commerce infrastructure, the repercussions when this piece of obscurity is cracked are disturbing to contemplate.
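The 'back door' is easy to exhibit at toy scale. In the Python sketch below the primes are artificially small (real moduli are hundreds of digits long, which is the entire defense); factoring the public modulus immediately yields the private key:

    def egcd(a, b):
        # Extended Euclid: returns (g, x, y) with a*x + b*y == g.
        if b == 0:
            return a, 1, 0
        g, x, y = egcd(b, a % b)
        return g, y, x - (a // b) * y

    def modinv(a, m):
        g, x, _ = egcd(a, m)
        assert g == 1
        return x % m

    p, q = 61, 53                     # the obscure instantiated data
    n, e = p * q, 17                  # public key: (3233, 17)
    d = modinv(e, (p - 1) * (q - 1))  # private exponent

    cipher = pow(42, e, n)

    # The attacker's back door: factor n by trial division, rebuild d.
    f = next(i for i in range(2, n) if n % i == 0)
    d_cracked = modinv(e, (f - 1) * (n // f - 1))
    assert pow(cipher, d_cracked, n) == 42  # plaintext recovered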

One telling argument against STO is that it promotes a false sense of security, leading admins into complacency. But the complexity of asymmetric cryptography, combined with reports of its infallibility, can produce much the same effect. Consider this social-engineering exploit of digital signing. Using a tool such as makecert, the cracker generates a root certificate with the name 'Verisign Class 1 Primary CA' and uses it to sign an end-entity certificate with the subject 'CN=Rob Malda, E=malda@slashdot.org' (CT: Please don't. I'm used to posers pretending to be me in Quake, but not on email ;) The cracker then sends the email to an enemy, using a client that does not validate e-mail addresses and spoofing the return address friendly name. The inexpert recipient, thinking all is in order and knowing that digital signatures never lie, trusts the root certificate and henceforth carries on a conversation with a false CmdrTaco. Only scrutiny of the headers will reveal the mail is actually going to a different address. The widely made claim that public-key cryptography is 'real' security and completely unrelated to 'false' STO delivers a more powerful illusion of security than anything an XOR'd password file can provide.
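Whether the spoof succeeds depends on what the client actually checks. The hypothetical sketch below (all names and fingerprints are invented) contrasts a client that trusts a root certificate by its display name, as the flawed client above does, with one that compares the issuer's public-key fingerprint against a pinned value:

    TRUSTED_ROOTS = {
        # display name                 -> pinned key fingerprint (invented)
        "Verisign Class 1 Primary CA": "aa:bb:cc:dd:ee:ff",
    }

    def naive_trust(cert):
        # Broken: any cracker can mint a root cert bearing this name.
        return cert["issuer_name"] in TRUSTED_ROOTS

    def pinned_trust(cert):
        # Sound: the forged root has the right name but the wrong key.
        expected = TRUSTED_ROOTS.get(cert["issuer_name"])
        return expected is not None and cert["key_fingerprint"] == expected

    forged = {"issuer_name": "Verisign Class 1 Primary CA",
              "key_fingerprint": "de:ad:be:ef:00:00"}

    assert naive_trust(forged)       # the spoof succeeds
    assert not pinned_trust(forged)  # a key check catches it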

Even brute-force cryptanalysis has parallels in STO. Suppose we wish to conceal the passwords for a number of Swedish bank accounts. We resolve to write them to a secret location on our hard drive, perhaps a few unused bytes in a file sector. Only we, who know the lucky offset, can read the data. This form of concealment is a typical case of security through obscurity. The integrity of our secret depends on the ignorance of the cracker, and a trial of all 2^n possible locations compromises the system. But in what way is this fundamentally different from the 'genuine' security of n-bit encryption? To break this form of security, 2^n keys are generated and tried against the cipher text until the result is a plain body. Is the difference between this 'true' security and the 'false' STO merely that n is considerably larger in encryption than in the case of hard drives? But this implies that our real error lay, not in reliance upon obscurity, but in having a hard drive of insufficient size!
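The arithmetic of that comparison is easy to sketch; the drive size and key length below are illustrative assumptions, not measurements:

    # Both attacks are exhaustive searches; only the space size differs.
    drive_bytes = 10 * 10**9  # an assumed 10 GB drive: ~1e10 offsets to scan
    keys = 2**56              # an assumed 56-bit key: ~7.2e16 keys to try

    print(f"offsets to try: {drive_bytes:.2e}")             # ~1.00e+10
    print(f"keys to try:    {keys:.2e}")                    # ~7.21e+16
    print(f"keyspace is {keys / drive_bytes:.1e}x bigger")  # ~7.2e+06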

5 Conclusions

Security in the absence of obscurity is not strictly possible, but good systems both localize and advertise their points of obscurity. When the admin is fully aware of the obscurity in a system, tripwires and instantiated data can provide a useful complement to more rigorous security techniques. Obscurity cannot keep information safe or concealed for long, but it can make attacks risky and destroy the effectiveness of automatic kits. These benefits should not be dismissed as an article of faith.

Comments Filter:
  • Turning off ICMP ping access to a host you want to keep hidden will usually ward off casual script kiddies. In this case, you're obscuring the fact that the machine exists at all.

    However, this is always part of a larger overall scheme to keep intruders out.
  • I think this was a very informative article. I believe it brought out some points that I hadn't thought of before.

    A lot of us here at /. don't know everything about security already. I am a programmer and I really like to learn new things. Slashdot isn't just news. It is whatever Rob wants to post. Thanks for the great essay.

    geach

  • by dgerman ( 78602 ) on Tuesday August 17, 1999 @04:19AM (#1742888) Homepage
    > DNA computers can now
    > solve the Traveling Salesman problem in
    > linear time.

    The Traveling Salesman Problem is NP-Complete. If the DNA computers are able to solve it in linear time, P = NP, and the most important problem in Computer Science would be solved. I believe you meant: "DNA computers can now solve _some_ instances of the TSP in linear time", which is far different from your previous claim. There are, of course, some algorithms that give you a good approximation to the solution, but they don't "solve" the problem either.

  • Some people need things spelled out for them.

    Yes I know .. stupid pun.
  • by Anonymous Coward
    Maybe it wasn't blatantly obvious to a Microsoft employee.
  • ... but it shouldn't be the ONLY security measure.

    The criticism of STO is usually directed against systems that use STO as the primary or only security measure.

    -- Robert
  • by Sensor ( 15246 ) on Tuesday August 17, 1999 @04:29AM (#1742893)
    What a truly awful set of sentiments...

    The whole concept of STO is based around the idea that your attacker does not know the system that you are using to protect your data.

    This is radically different from not knowing a particular piece of information (ie the key) in order to access data.

    In the first case an inexpert crypto designer may find that their 'security' system in fact contains a large number of clues as to its structure. Take for example a simple substitution cypher: while it might appear to the naked eye to be a random collection of letters, an experienced attacker would simply build a histogram and break the code based upon the distribution (a minimal sketch of this attack appears at the end of this comment).

    Moreover STO can quite easily become an argument for leaving holes (ie possible buffer overflows) open because "nobody knows about it". Quite simply this sort of sentiment has been shown to be very wrong over the years.

    I find the arguments about hidden trip wires more interesting - but I would argue that this does not represent STO. The form of security may well be known to the attacker but the actual events which may trigger security alerts could be considered as an equivalent to a key.

    STO has had its day - no information which is genuinely important should be protected in this manner.

    Given that the normal definition of security is that it should cost the attacker more to breach the defenses than the information itself is worth, information with a very short lifespan may be eligible for some forms of STO, and I'm sure that everyone occasionally follows this maxim.

    But as a principle and as a technique I would like to see STO buried once and for all.

    Tom
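    As a minimal sketch of that histogram attack (in Python; it assumes English plaintext and a simple monoalphabetic cipher, and a real attack would also use digram statistics):

        from collections import Counter

        # Letters ordered by typical frequency in English text.
        ENGLISH_ORDER = "etaoinshrdlcumwfgypbvkjxqz"

        def guess_key(ciphertext):
            counts = Counter(c for c in ciphertext.lower() if c.isalpha())
            ranked = [c for c, _ in counts.most_common()]
            # Map the most common cipher letter to 'e', the next to 't', ...
            return dict(zip(ranked, ENGLISH_ORDER))

        def decrypt(ciphertext, key):
            return "".join(key.get(c, c) for c in ciphertext.lower())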
  • I'm pretty sure he means linear... all you have to do is use a bigger vat. Saying that some instances can be solved in linear time is meaningless anyway... linear describes the growth of complexity when compared to the size of the dataset - when all you have is a series of points then talking about its complexity is impossible.

    Tom
  • Steganography (hidden writing) is another example of a system that relies heavily on STO. A modern example would be using low order bits on music CD's, *.WAV, and maybe *.jpeg files to move banned information past censors.

    -- Robert
  • by bhurt ( 1081 ) on Tuesday August 17, 1999 @04:34AM (#1742896) Homepage
    First off, the author misunderstands what is meant by STO. It does not mean security through secrecy. All security depends upon a secret - be it the factorization of a composite, a secret key or password or passphrase, or the combination of a safe. The fundamental security question is how difficult it is for an attacker to guess that secret.

    So, the logic goes, make it more difficult for the attacker to know what secret they need to guess. Hide the security algorithm from them. Does the safe require a combination, a key, a palm print, or some combination?

    The problem is that, by obscuring the implementation, weaknesses that can significantly reduce the amount of secret key the attacker needs to guess are hidden ("Um, it's a bad idea to put the door hinges on the outside of the safe. I don't bother picking the lock - I pop the hinges and simply remove the door.").

    And this is the advantage open source has. Its peer review limits the existence of such backdoors, and they are fixed faster when they are found.

    Do not think for a moment that restricting access to the source code makes it less likely such vulnerabilities will be found. The "black hats" (be they the evil hackers or the evil NSA) always seem to have enough time to reverse engineer the software from the binary. The "white hats" generally have better things to do with their time. By making it harder for the legitimate people to look for security holes, you are simply making it more likely that the people finding the security holes will exploit them, and not announce them. By making it illegal to reverse engineer the products, you're guaranteeing this.

    If you don't believe me, I recommend Bruce Schneier's "Applied Cryptography".
  • According to the dictionaries I looked it up in after my Anthro professor made an ass of himself on that topic, decimate can also be legitimately used to indicate the destruction of a "large but unspecified portion of a population" (my own wording). It is, however, incorrect to say X decimated all of Y, or X decimated 40% of Y.
  • This isn't intended to be news, Mr. Gibson. This is a feature. Give the guy a break. You may want to disable features in your preferences [slashdot.org] if you're only looking for hardcore news. And I'm sure you don't have Jennicam or Segfault in your slashboxes either, right?

    As a public service, you may also want to avoid the following stories (they contain a great deal of common knowledge - be careful!)
    Essay on Open Source as an Art Form
    Sun Claims MS Steals Vision
    High Tech Junk

    I, like many others I'm sure, was not aware of the Navajo story and also simply enjoyed the way the article was written. I give the guy credit, being a Microserf and posting on /.
  • by Anonymous Coward on Tuesday August 17, 1999 @04:36AM (#1742899)
    Really, this article has very little to do with security through obscurity. STO, as implemented by Microsoft, is basically the premise that, because their code is not publicly available, the bugs in it will not be obvious, therefore not easily exploited. Proponents of STO argue that open source systems are more vulnerable because anyone can get the source and find the bugs in it, and thereby exploit these bugs. But the real problem is simply that because the systems which use STO are never as well debugged as their peer-reviewed open-source counterparts, an as-yet-undiscovered bug could be lurking around the next packet. Users must rely on the assurances of vendors, motivated by profit, that their systems are safe; on the other hand the users of open source systems are assured of better quality and quick fixes since the user population is itself selfishly motivated to make the system better and safer.

    Granted, this is nothing new, but I thought I'd post something more on topic than the article itself. Stating that encrypted passwords are a form of security through obscurity, as the author does, is just plain silly. If you're going to talk about encryption and STO, it's more relevant to delve into the fact that a closed-source encryption algorithm (a la Clipper) is inherently unsafe because it isn't peer reviewed and therefore has a much greater likelihood of eventually being broken because of the authors' undiscovered mistakes, than does an open-source one which has been examined by a wide audience.

    But then, what do you expect...
  • What I would like to see is a step by step HowTo on security, for both individual hosts and networks.. Kinda like a "Security for Dummies" guide!
  • For a secret key crypto algorithm, as the name suggests, there is a SECRET. It's NOT STO. One of the most famous algorithms in this category is DES, which is well publicized and cryptanalysed; the algorithm requires the secret key to be secret, but you can have everything else, even the S-boxes, and you are still clueless.
    I agree with 'carry all obscurity in the key'. But this is better than carrying obscurity everywhere. For my house I prefer to have to take care of the key, and not of the key AND the door.

    When you say to illustrate your talk: "When the tripwire is probed with a SYN packet from a cracker trying to locate the web server, instantly the system goes to full alert. The packet is logged and the admin's pager sounds like an alarm."
    It's not STO: it may be well known to the attacker without allowing him to bypass this trap.

    Your example about digital signatures only demonstrates that you have a weak protocol to verify public keys. Re-read Applied Crypto!

    Absence of secrets is impossible, but that is different from obscurity.
  • In a perfect world, this knowledge *would* be old hat to every sysadmin... But this world is not perfect, so not all the info can be taken for granted. I've been reading /. for over a year, been messing with linux and such for about the same period, but I am more of a computer gamer, so I don't have the m@d aDm1n sk1llz. This stuff is new to me, and I bet it's a good reminder to those veteran sysadmins who may have forgotten it. So, uh, like lighten up.
  • So security may decrease obscurity and therefore my actual security...?
  • What open source is about is exposing the methods, how the software works. It doesn't expose passwords or other secrets. Many cryptographic algorithms are open to see if there's a flaw in the process; nobody would (or should) give out their secret passphrase. For the certificate system to work you must trust the certification authority.

    Currently, the way you can register a certificate, there is no trust between the certificate holder and the certification authority; in short, certificates issued this way are bogus - they are just a repository of dubious information.

    Sure you may put traps in a system to find intruders, but the rest of the system should be secure. Obscurity will gain some time, but it's not the solution, remember it's the way things work that should be open, not the sensitive data itself.

  • Security is theoretically an impossible task. Given enough time and enough access, any code can be broken and any hiding spot can be found. Practically, security is about building the cost up high enough that it becomes too expensive to crack. Hence the code talkers were too difficult (timewise) to crack for the benefits. Enigma, the German code system, remained valuable for a significant period of time, hence cracking it, day after day, was economically viable. Even an STO system as proposed, i.e. switching to full alert when a wrong pattern is tried, would be worth cracking if the value was high enough.
  • by Matts ( 1628 ) on Tuesday August 17, 1999 @04:42AM (#1742907) Homepage
    While this was an interesting article, it wasn't a good defence of STO. The author appears to be arguing that STO, while not a good security defence, can make a good security buffer. These are totally different things. So really he's saying that STO is good to have in your toolbox but it's not a good defence - which is what we've been saying all along.

    Having said that, his arguments are totally bogus. Saying that passwords are obscurity is nonsense. When we speak of STO we're talking about source code, not passwords, keyfiles, etc. Trying to defend STO by talking about the success of the Navajo Indian code is stupidity in the extreme. There are working practices available _today_ without using STO, so why would anyone bother with a "time limited" crypto?

    So just what _was_ this microserf trying to defend? It seems very unclear to me.

    Matt.

    perl -e 'print scalar reverse q(\)-: ,hacker Perl another Just)'
  • I grant you that Security Through Obscurity is far more interesting than real cryptographic or algorithmic security, allowing the administrators to play stimulating cat-and-mouse games with the attackers.

    The point of cryptographic security is that a very large amount of carefully verified work, the work of experts which cannot be easily duplicated, can be invested in the cryptosystem. The system can then be used by anyone, expert or not, any number of times, by just providing a passphrase. STO requires an expert to devise a new intricate ploy each time.

    Cryptography relies on certain algorithms having a minimum order-of-magnitude cost, and hence is vulnerable to spectacular algorithmic advances, but the problems are never meant to be intractable in the sense of "mysterious". A properly peer-reviewed cryptosystem is not weakened even if all of the scientists who invented it subsequently become traitors.

    Pavlos


    PS. Your point about engendering a false sense of security is correct, but the reason is that the users of the system choose weak passwords, leak them, etc.

    PPS. Your point about encryption vs. hiding data in your drive is a revelation. I have already ordered my 340282366920938463463374607431GB hard drive!
  • I'd never heard of DNA computing before, so I followed the link and took a look.

    I'm certainly not qualified to know how much of it is really feasible, but it is quite interesting. However I'm not convinced that this technique would really solve the problem in linear time.

    My reason for this is just a hunch, but it seems intuitive. They assume that the process of taking n DNA strands, chucking them into a bucket (so to speak), and mixing, will only have a linear cost. I'm not convinced this would be the case.

    Visualise n to be fairly big, and each DNA strand to be fairly long. Now picture n strands of this length put into a mixing process. Now I'm not a chemist, but wouldn't tangles and other geometric issues, if not sheer weight of numbers, render the cost of adequate mixing worse than linear?

    Put simply, if it takes, say, 10 milliseconds to adequately mix 10 strands, do you think it would really take just 1000 milliseconds for 1000 strands? Or 1000000 for a 1000000? Maybe it would, but I'm unconvinced. If nothing else, I doubt that a linear relationship for this is proven.

    Or maybe I just misunderstood the whole thing. Please tell me where I've screwed up :)

  • Steganography doesn't have to completely rely on it, though. Encrypt the data before you hide it, and that will make it look all the more like random noise.

  • You're being too generous, I think. If you want to know when someone accesses your website, why not just turn on logging in the webserver? I'll grant that knowing someone is scanning you is helpful - but that doesn't keep the hacker out of the website, does it? It's kind of like saying that if you have security cameras, you don't need locks on your doors.
  • I take some exception to the definitions used in the article. Obfuscation of information has never, to my knowledge, been equated with "security through obfuscation." Equating the two makes both terms meaningless.

    Security through obfuscation relies on a cracker's ignorance of a system vulnerability for protection, as opposed to disclosing the system's specs and subjecting them to rigorous peer review, allowing the vulnerability to be exposed, analyzed, and fixed.

    Encryption is the protection of DATA (whether it is a password, data file, or filesystem) through obfuscation of the DATA. Obfuscating DATA is not the same as obfuscating a system architecture in the hopes no one figures it out. To define the two as the same is no different than defining apples and cucumbers to mean the same thing, and leads to the same meaningless results (an inability to differentiate between the two until new terms are invented to compensate for the obfuscated and undermined definitions of the old words). The desire to not disclose confidential data residing on a hard drive does not imply that one is relying on "security through obfuscation" rather than strong, publicly reviewed security approaches. The two concepts are in many ways completely orthogonal to one another.

    About the only thing the article got right is the notion that, if the data need only be protected for a brief time, inherently less secure approaches may be used with some success. What is entirely glossed over, however, is that in using a less secure but perhaps more expedient approach one is still taking a terrible gamble, as there is a greater possibility the data will be compromised earlier than desired than if a more secure approach had been used. The probability may be small because of the limited time frame involved (making the risk "worth it" perhaps), but the possibility is nevertheless quite real. What I do not understand is WHY anyone would want to do something like that, when well documented, secure ways exist for protecting both transient and long-term data and systems, making that sort of gamble unnecessary to begin with.
  • An excellent article. However, I don't believe your example of a social engineering attack on digital signatures is quite right.

    The recipient's email client should check the authority signature of the sender's certificate against the known trusted Verisign certificate. In this case they wouldn't match. You can't make them match unless you know Verisign's private key.

    The user will be warned that the identity of the sender could not be verified.
  • I agree...it was a well thought-out article. Not all of us know everything there is to know about security.

    Try not to be an "info-snob" and give the guy a break...

  • by My Little Pony ( 71274 ) on Tuesday August 17, 1999 @04:50AM (#1742916) Homepage
    1. There is a gross misrepresentation of digital signatures. You can make all the CA certs you want claiming to be any number of well known CAs. The real CA's published public key is used to verify the user's cert, not its name. Good luck guessing Verisign's private key. Your only hope is to hack the recipient of the email and replace Verisign's public key with the public key of your cert. (Note: I hope Microsoft's mail programs use PKC and not common names to verify certs. God help us!)

    2. The idea of hiding a web server port amongst a bunch of trip-wire ports misses the fact that 80% of (corporate) cracking is internal. Trusted users don't have to port-scan to find your hidden port. And what do you do if your traps go off? Take the server off line? Can you say "Denial of Service"? OpenSSL is free. Basic HTTP authentication is pretty secure over SSL. That would help me sleep at night...

    3. The point about time-limited security is good, but (and not to be mean) who cares? Mr. Priestley cites an excellent example. Now cite one related to computers. Not very many good ones...

    4. The author seems to be spiraling in on what those in the security biz call "enticement information." This is information that you don't have to give out that only makes cracker's jobs easier. For example, a telnet banner (and you should be using ssh!) that says "This is the MegaCorp Accounts Payable SAP server running on Windows NT 3.1 (never patched!). Authorized use only!" should be replaced with "This system is for authorized use only. All activity is monitored."
  • ... but what part of Security Through Obscurity is any more mature than some "admin" chanting "I bet you don't know my password, ner ner"?

    I knew there was a reason we were told to keep passwords "uncrackable" (in the brute-force-utility sense of Crack), or to use Kerberos or other means of keeping things secure...

    As someone who's had a machine broken into by means of an ethernet sniffer on someone with a weak (crackable) password used in two places, who was saved from a root login by various ttys being insecure, I no longer even let my passwords out from my current domain in plaintext; sometimes they don't go that far either. Ssh is only one means of doing this, but it's a bloomin' nice means.

    hackmeplease:vOyo1GrTWU6Ck:10001:10001::/tmp:/tmp/sh.root.sh
    to them too :)

    ~Tim
    --
  • Exactly /what/ in that article wasn't common knowledge?

    That P=NP?

    What is Slashdot coming to these days?

    Yeah, you'd think news like that would rate its *own* item :-)

    (Note: I didn't see anything *on the referenced site* that *claimed* that this method solved the Travelling Salesman problem in linear time. Hint: more cities need longer and more chains.)

  • I agree. This sounds like crapola to me!

    And the link provided does not demonstrate the travelling salesman problem the way I learned it. We learned it such that the travelling salesman _was_ concerned about the distance travelled, which makes the problem MUCH more complex. Plus, with this addition, the travelling salesman is revealed for what it truly represents: circuit design.

    The problem is of course VERY simple when there are only 3-10 cities involved, but grows to unsolvable proportions as it grows to 100 cities and beyond (2^100 ~ 10^30). Note that earth has only been around for approximately 10^27 milliseconds.

    The problem as shown grows complex only at a linear rate, not n^P... Oh well...
  • by Todd Knarr ( 15451 ) on Tuesday August 17, 1999 @04:58AM (#1742920) Homepage

    I'd take objection to his calling the use of NP-complete problems "security through obscurity", specifically his statement that we're depending on the hardness of the problems for security and that they may not be that hard if we approach them right. From what I recall, what we're depending on in reality is the fact that NP-complete problems are harder than any other known problems, and provide an upper bound on the hardness of problems. We don't know that NP-complete problems are necessarily hard to solve, but we do know that any other problems are easier to solve. Even if someone invents a linear-time method of solving an NP-complete problem they're still harder than any other problems; the upper bound just moved down a lot. This is a problem for cryptography, but it's not solvable without rewriting the rules of mathematics.

    As for calling a secret key "security through obscurity", he's missing the point. STO canonically refers to keeping the details of the underlying algorithms and/or implementation unknown. It does not typically refer to the idea that there's some piece of information that a legitimate user possesses that an attacker does not.

  • by Anonymous Coward

    Consider some simple, general purpose Turing machine, so simple you have to write the "program" on the tape as well as your data to make it go (just like a card deck :-) ). Write out your data to be encrypted, your encryption program, and your password onto the tape. Run your machine so that the result is a tape with encrypted data, the program, and the password. With this data either you or your enemy can reverse the process. What part of the tape do you obscure? Usually the password, but in our system that's just a part of the tape.

    If you use a unique and clever algorithm for each encryption run, and the same password, you would need to obscure the algorithm to be safe.

    One can construct a theoretical framework where it's all STO. By obscuring only the password we simply make the job easier for our customary way of doing things. I don't have to remember and type in my whole decryption/access program, just a short string. If we have a very good encryption algorithm we force the attacker to guess the key, hence we force them to fight by our rules. IMHO it all comes down to that.

  • Why? To me, decimating 40% of the group would mean splitting off 40% of it and decimating that 40%, killing 4% of the total group. However, this is surely not the intended meaning in this case.
    --
  • That is not necessarily true.
    Using a One Time Pad and the low order bits in pictures, audio etc. is 100% provably secure.

    It is also possible to exchange information during seemingly innocent cryptographic actions; it is possible to hide information in a DSA signature for example, and even if you look carefully it is impossible to prove that there is hidden information or not.

    Only the simplest steganography methods rely solely on STO.

    EJB
  • Looking at the link he gave, the DNA-computer based solution doesn't even claim to be a linear time solution. Given a finite number of paths between TSP points, and assuming the "Mixing" stage takes negligible time, you will get a number of non-unique DNA strands. Each DNA strand can be checked in linear time. The problem can't be solved in linear time.

    ----
  • ... all is in the name of the company.
  • I think this article would have been much better if the author had not insisted on using the term 'obscure' so much. For example, while talking about passwords and keys he uses 'obscure' when 'secret' would have been more proper.
    Security in the absence of obscurity is not strictly possible.

    Well, it is actually, but I will agree that security in the absence of secrets is not strictly possible. There is a difference.



    --Flam

  • by Frater 219 ( 1455 ) on Tuesday August 17, 1999 @05:07AM (#1742928) Journal
    DO NOT block ICMP ECHO_REQUEST / ECHO_RESPONSE ("ping") packets. If you do, you will confuse systems, such as some routers, which use pings to determine shortest paths. A Net-connected host is required to respond to pings.
  • You might be interested in this link..

    http://www.ecst.csuchico.edu/~dranch/LINUX/TrinityOS.wri
  • > We learned it such that the travelling salesman _was_ concerned about the distance travelled, which makes the problem MUCH more complex.

    Actually not. There are two variations of the TSP problem: one is to "decide" whether there is a path of less than x units between the cities, and the other is the "optimization": what is the optimal path between the cities. Both are NP-Complete, and in that respect, equivalent. If you can answer the optimization problem in linear time, you can solve the other in linear time also and vice-versa.
  • Depending on who you ask, of course. I just happened upon some discussion of this very topic in Stephen Gould's Wonderful Life. To summarize, yes, "decimate" did originally mean "kill one in ten" (of an army that failed to win, not necessarily a mutinous army), but modern usage has changed to the point that it means "kill lots of".

    But I agree that the author's use of "decimate" doesn't read very well.
  • by Anonymous Coward on Tuesday August 17, 1999 @05:09AM (#1742932)
    Ok, one at a time, shall we?

    Full disclosure does not apply to instantiated data

    True, but that's not what we're talking about. We're talking about the code that manipulates that data. Saying that STO doesn't apply to passwords is irrelevant.

    Full disclosure does not apply to time-limited secrets

    Your example is a time-limited method, not merely a time-limited secret. In context, Navajo was used for a very short time. Expecting to use some encryption method for only a short time and not have it cracked is probably a good example against your argument, rather than for it. The Japanese didn't have PentiumIIIs and Linux, so a few years was not much time to work on a crack. Doing something like that today would have the algorithm (new and untested, as per your specs) reverse engineered very shortly. You run the risk of having your messages decrypted in realtime before you stop using that algorithm.

    Obscurity serves as a tripwire

    This example makes no sense. Such tripwire approaches work because they detect scans; it has nothing to do with the web server, or what port it's on. I'd be willing to bet that a cracker would be able to complete his scan and root your insecure web server before the sysadmin can respond to the page.

    Your Indiana Jones example is bogus: attackers wouldn't exhaustively search. They would intentionally break all the tiles, thus revealing which ones were supported. Crackers are not interested in drama. Not until they break in, anyway.

    "STO can be a powerful disincentive" is not true; cracker tools don't care.

    "But like moats, tripwire obscurity provides a critical buffer against attackers" is not true, either; tripwire only provides detection.

    Asymmetric cryptography exhibits traits of STO

    "...quite simply, to develop a swift means of factoring those multiples." Isn't that a direct quote from Bill? I would guess you are not a mathematician. (Neither am I) These methods are believed secure because it is believed there is no such "swift means". In math, that usually means you can't just program around it. It means you cannot do it. DNA computers notwithstanding. My DNA computer can barely play my mp3s, much less decrypt my password.

    Incorrectly used crypto programs are also not the issue. The issue is that even "correctly" used STO is insecure.

    I am not a cryptographer, nor a mathematician. But even I can see your arguments do not stand up on their own merits.

  • by ~k.lee ( 36552 ) on Tuesday August 17, 1999 @05:10AM (#1742933) Homepage
    Saying that some instances can be solved in linear time is meaningless anyway... linear describes the growth of complexity when compared to the size of the dataset - when all you have is a series of points then talking about its complexity is impossible.

    I believe he meant to say some subsets (special cases) of the problem can be solved in linear time, which is no big news anyway. For example, the TS problem is easy to solve in linear time in the special case that the graph is a circle.

    The question is moot anyway, because the notion that the computer solves the TS problem in "linear time" is correct but deceptive. In Fundamental Algorithms, all CS majors are taught to think of Theta(N) (order of growth) as the "running time" of an algorithm; they then forget that "running time" is a metaphor for a number of computations. A more correct term for Theta(N) would be "increase in computational resources as input size grows".

    A DNA computer uses the chemical and structural properties of DNA to perform an exponential number of calculations in parallel. Thus, it is not computing a linear solution to the Traveling Salesman problem, but merely throwing vastly more computational resources at it. The former is an algorithms problem; the latter is an engineering problem.

    (BTW: I suspect that the amount of DNA you need does, in fact, increase exponentially with the data set size but in general this is not a consideration since you can put an insane quantity of DNA in a vat.)

    ~k.lee
  • Damn, where have you been? There are a lot of great sites with tutorials on protecting yourself. Number one, a lot of things should be common sense. Delete default accounts, use strong passwords, keep up on the latest exploits via a security list, etc. Here's some links for ya ;)


    But before I dew((do)doo)d00)
    All this STO is a big freaking joke.

    Well maybe...just maybe if I hide my network vulnerabilities, people will just feel bad that I'm a shitty sys admin too lazy to fix the holes in my system
    www.securityfocus.com -- Excellent resource for up-to-date news
    www.securitysearch.net -- The Yahoo Search Engine of the Security World
    www.enteract.com/~lspitz -- Very Informative
    www.hackernews.com -- The CNN of the Hacking World

  • If everything vaguely related to computers were common knowledge to every /. user, there wouldn't be much point in having /. at all. I'm sure there are many areas in computers you wouldn't have a clue in, and another /. user will hopefully do you the favor of pointing out what an idiot you are when you don't know something that to them was obvious.
  • The whole point of the NP class problems is that they can be solved using a non-deterministic Turing machine (NDTM) in polynomial time. An NDTM can be simulated on a standard Turing Machine (TM), but not in polynomial time. If you could find an algorithm that solved the TSP (an NP-Complete problem) in polynomial time on a TM then indeed P=NP. The DNA algorithm is taking a different tack: it is taking a massively parallel approach. In principle it could do as advertised without violating the above. (Whether it does or not is another matter.) The factorisation question that is important for cryptography is moot. It is within the realms of possibility that a solution in P could be found. There are already quantum algorithms that could make the problem tractable; these use the same trick of massive parallelism, however no machines that could take advantage of the algorithms yet exist.
  • This was the part I really found very odd as well... isn't this exactly what browsers already do with secure sites? If the certificate is not signed by a KNOWN authority, a warning will pop up. Even if the false signer used the name/address of Verisign or whoever.

    Are we being fed FUD on slashdot?

  • Can we say that info wants to be free, to obstruct (censor) info is a bug, and to enough eyes all bugs are transparent (routed around)?

    If I put my gold in a safe and there's a 1 in 6 billion chance of 'guessing' the combination, that means there's a good chance that SOMEONE on the planet can take it - which brings us to security thru superior firepower.

    I want to put my gold in a safe that NOBODY can access except me - perhaps a theoretically impossible goal - in which case all security is just varying degrees of obscurity, but we can put up with infinitesimally small amounts of risk for most practical uses. So the issue may resolve to mere semantics.

    But I may be hallucinating again...

    Chuck
  • 1 - Instantiated data

    Passwords are the basic means of checking authorization, whether the protocol is obscure or not. I don't see why "password would violate the ban on security through obscurity".
    Nor do I see how shadow login can be considered an obscure protocol. The author admits himself that the protocol is open. Just because the passwords are hidden does not make the protocol obscure. Using the same logic one may argue that any kind of security system is obscure because it restricts access to data.

    2 - time-limited secrets

    That is correct. However, this approach is rather risky. You never know how long it will take for attackers to crack it. In fact, it would not be secure to use it more than once...


    3 - obscurity as tripwire

    OK. I get the point. But how can you connect to a web server if you don't know its port?
    Besides, "Such measures do not make a given machine resistant to breach in the long term..."
    The author concedes that this tactic can be used to *detect* the attack but not to stop it.


    4 - Asymmetric encryption

    That is his strongest point. Asymmetric encryption is indeed a form of security through obscurity. Just imagine what would happen if someone found an algorithm to quickly factor large numbers into primes... So yes, he does make his point that no security system can be completely free of obscurity.


  • Others have pointed this out, but I don't think everyone will get the difference even though they are right, so here is my attempt, dumbed down a bit for the less technical.

    Imagine for a moment that I capture Rob (CmdrTaco, i.e. founder of /.) and torture him until he gives me the root password on the main /. server. I now have a secret, and I can get into /., but I do not have enough information to get into User Friendly [userfriendly.org] even though (if) both run the same version of Linux.

    Now imagine that I capture an engineer from Microsoft and torture him until I get a previously unknown security hole in Windows NT. I can now break into any NT server in the world (assuming the prerequisites for the hole are in place; obviously a system in a locked room not attached to a network is safe).

    See the difference? In one case the terrorist got the ability to break into one machine; in the other the terrorist got the ability to break into virtually any machine.

    Now it is possible that some bug exists in Linux that will allow anyone to get into it. With Linux, once I discover how someone got into my machine I can fix it; with NT I have to wait for Microsoft. So in reality what makes open source better in the face of attack is that I can fix it in a few hours, whereas with STO I have to wait for a vendor to fix it. If I'm a minor player and nobody else is attacking me, with STO my vendor can leave me in the lurch, whereas with open source I can fix it myself.

  • Note that checking n strands in linear time produces an n^2 algorithm. Perhaps not linear, but definitely P. It's pointless to argue P vs NP when you're dealing with a 'computer' that can't be represented by a Turing machine- if DNA strands can perform an exponential number of operations in polynomial time, then NP problems will become tractable- but they will still be NP (and probably not P) problems.
  • Although the author's article did not increase my respect for obscurity, his implicit point - that it's a good idea to question conventional wisdom - is a good one. I witnessed a prime example of this while watching the panel sessions at the 1999 IEEE Symposium on Security and Privacy. Several of the speakers professed new respect for the "penetrate and patch" technique. Among the government-funded security research community, which spent much of the last decade or so searching for foolproof methods for producing software demonstrably free of security holes, the conventional wisdom used to be that penetrate and patch was the height of inept foolishness. But, as several panelists pointed out, in the real world it is the de facto standard, because (as any Microsoft employee probably knows) it requires no burdensome initial investment in detailed design, formal verification, and heavy-duty software engineering practices that might delay a release. Considering that the "foolproof" methods have turned out to be commercially impractical, a renewed interest in penetrate and patch may be a good thing.

    I do not speak for my employer.

    - Tim
  • And what do you do if your traps go off? Take the server off line?

    There is an old but delightful movie called "How to Steal a Million" that is based on exactly that scenario. Basically, if your traps go off every fifteen minutes, you will very soon stop paying any attention to them.


    Kaa
  • But there is a catch. The trick to the DNA "solution" is that it is a massively parallel solution. The time to get the solution may be linear, but the amount of DNA you need grows exponentially. If you simulate this on a single computer, the problem takes exponential time.

    However quantum computing offers similar tricks using a superposition of quantum states to do an exponential amount of computing without taking an exponential amount of time. The last that I heard that technology was looking more and more feasible in the long-term. But it is still a good 20 years off even if it can be made to work.

    As for P=NP, I honestly believe that they are different. As for a proof, well this post is too short... :-P

    Cheers,
    Ben Tilly
  • The traveling salesman problem is not only solvable in linear time, but also in constant time! In fact, an even bigger class of problems, called the polynomial hierarchy (PH), can be solved in constant time.

    The catch is in the number of processors required to do so. This translates to the weight (or volume) of the DNA, which grows exponentially in the size of the input.

    DNA computing does not give us any additional power over traditional computing; quantum computing does.

    Vinay
  • by Slak ( 40625 ) on Tuesday August 17, 1999 @05:25AM (#1742947)
    Perhaps I'm wrong here, but while this DNA computer (very) arguably may solve some TSP problems in polynomial time, it does so in exponential space. That is to say, if a given DNA computer can solve an n-node TSP problem in, say O(n^2) (or some other polynomial) time it probably requires O(e^n) space to achieve it.

    This doesn't even address the question of whether or not the DNA computer is deterministic. In other words, given the same input, will it always produce the same output in the same (exact) running time? For example, I could claim that this message is encrypted with the only provably secure encryption algorithm - a one-time pad. The fact that you are reading it means you broke my code in not just polynomial but constant time, by guessing the correct key (a string of 0s, as it turns out); that does not make the one-time pad encryption scheme (in general) insecure. I happened to have chosen a cryptographically crappy key.

    To continue my rant, the factoring is not "obscure". RSA would be "obscure" if it relied on some "magic" number - that is, if by finding this single "magic" number, one could decode *all* messages encrypted with RSA. Since the public key is, well, public, the strength of the algorithm lies in the difficulty of factoring an arbitrarily large composite number that has only 4 factors.

    I'm just getting warmed up here - the author's example of using social engineering to compromise Verisign's key is invalid. In this case, the underlying cryptographic system was not compromised. To turn the argument around, I could just as easily have socially engineered any STO-based system, effecting the same results. The reason that STO is bad is that this is not the only option. For systems using strong (non-STO) crypto, the only option for breaking it is through social engineering.

    The quote: "The widely made claim that public-key cryptography is 'real' security and completely unrelated to 'false' STO delivers a more powerful illusion of security than anything an XOR'd password file can provide." particularly ires me. The fact that a cracker knows exactly how a password is encrypted and still can't extract it is a secure system. A password "encrypted" (and I use the term loosely) through an obscure algorithm (that is, once you know the algorithm - not the key - you can get any password) is not secure. Offline, I can reverse-engineer your algorithm and run you SOL.

    Next, the example of the Swedish (ObEd: shouldn't that be Swiss?) bank account is totally misrepresented. In an STO system, a cracker would only need to run through the contents of the drive. That is, if the drive were size n, he (and it's always a "he", isn't it?) would take time t. If the drive were size 2*n, he would take time 2*t. If the author has no understanding of the difference between a linear scan of an array and the exponential search required to go through all possible keys of, say, DES, then he's a moron. If he stores his Swedish bank account PIN somewhere on a 2^56 bit hard drive, then yes, he has the same security as someone encrypting the PIN with 56-bit DES. The difference, of course, is that someone has to "remember" 2^56 bits (plus some to "remember" the offset) to find his PIN, while the other has to "remember" merely 56. That is the power of strong encryption.

    Even if this guy weren't from microsoft, he'd still be an idiot!

    Cheers,
    Slak
    --------------------------------------------
    It has long been known that one horse can run
    faster than another - but which one?
    Differences are crucial. -- Lazarus Long

  • by Frank Sullivan ( 2391 ) on Tuesday August 17, 1999 @05:26AM (#1742949) Homepage
    Apparently, the author does not understand this significant distinction. Secrecy != obscurity. In this article, "obscurity" is defined so broadly as to be useless... anything that isn't spammed out across the 'net is "obscurity".

    In the world of serious computer security, "obscurity" refers to keeping the *mechanism* for storing information secret, not the information itself. In practice, the mechanism should be able to keep the information secret even if the inner workings of the mechanism are known.

    For example, the algorithms used in the US government's ill-fated "Clipper chip" were kept secret - security by obscurity. When, under pressure from industry, the algorithms were finally released, significant weaknesses were immediately discovered. RSA, on the other hand, is not obscure. Even having the source code for the actual program used to encrypt data with RSA does not significantly reduce the time required to decrypt it (consider that a PGP-encrypted message states its nature in plaintext, right at the top of the message!)

    Another, more MS-vs-OSS example is buffer overflow attacks on daemon programs. The "security through obscurity" approach is to hide the source code, so potential buffer overflows are not obvious. But hiding the source does not *eliminate* them... it just hides them. With patience and educated guesses, they can and will be discovered. By opening the source, potential overflows can be found and fixed. One need look no farther than the recent security reports on the most popular closed-source web server (IIS) and the most popular open-source web server (Apache). Which one has had multiple severe, easily exploitable security holes reported lately? In other words, the obscurity of the IIS source made it harder to find weaknesses, but not impossible. In Apache's case, thousands of eyes have pored over every line of source, and the potential weaknesses were found and eliminated long ago. Which makes you feel safer - code thoroughly studied for weaknesses by thousands of programmers, or code where only the authors have examined it?

    ---

  • Typically, DNA machines take quite a long time to process anything; after all, there are fixed costs involved in extracting the solution. But AFAIK the actual time elapsed during mixing does not grow that fast.
  • The problem with STO, where the actual "algorithm" (any steps taken to create that part of your security) is secret, is that you don't know whether it is secure.
    Because it is barely peer-reviewed, if at all (only by colleagues), no one has told you whether you made any obvious mistakes, and no one can assign an upper and lower bound to the difficulty of breaking your "algorithm".

    The algorithm you describe where the admin assigns a different port to the HTTP server is not STO; it can be analyzed, and flaws can be found. (And a great many there are)

    There are of course problems with these attempts to use a general concept, such as port numbers here, as a key in a secure protocol.
    Probably the biggest is that it is not seen as a secret. If Mr. CFO goes to the company's secret website at port 6301, employee John Doe can walk in and spot the port number in the web browser's location bar, because the web browser didn't know it was a secret.

    Probably Mr. CFO is also an average user who doesn't completely grasp the fact that the URL is now an important secret, so he writes it down on a post-it note attached to his monitor.

    The other problem is that "innocent" users, such as search engines, may also scan a whole lot of ports to find a webserver, so a) Mr. Sysadmin will get a lot of false alerts on his pager, and b) the information will end up, for all to see, in some big search engine's database.

    The same goes for Joe Hacker, who may be detected but has still obtained all the necessary information within 4 seconds.
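    For what it's worth, a hidden port falls to a trivial sweep. A minimal sketch of mine (the hostname is made up) of the kind of scan both the search engine and Joe Hacker would run:

        import socket

        def find_listeners(host, ports=range(1, 65536)):
            """Connect-scan: any 'secret' port turns up in one linear sweep."""
            for port in ports:
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                s.settimeout(0.05)
                try:
                    if s.connect_ex((host, port)) == 0:
                        yield port
                finally:
                    s.close()

        # for port in find_listeners("intranet.example.com"):
        #     print("listener on port", port)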

    EJB
  • Gimme a break, there's a big difference between the root/admin password and an OS security hole.

    Remove theloveoftheworld to respond.
  • I'd object to his calling the use of NP-complete problems "security through obscurity"

    He has a point though, if only by accident. He writes:

    The back door into cryptographic methods that rely on multiplying primes is, quite simply, to develop a swift means of factoring those multiples. This
    NP-time problem must be solved before a private key can be derived ...

    The complexity of factoring is an open question, isn't it? It might be easier than NP-complete. A quantum computer running Shor's algorithm could factor quickly (if one were ever built), so factoring-based schemes (RSA) would fall apart. This doesn't require that NP-complete problems be solvable quickly.

  • Sure, don't block ping, but even worse, don't block ICMP host unreachable, as this is used for path MTU discovery and can cause some very weird errors (and some frustrated users who have no idea why their transit is screwy). - Matt.
  • DNA computers avoid superpolynomial time by exploiting massive parallelism; unfortunately, they require superpolynomial space to do it, which is arguably just as bad as superpolynomial time.

    Kyle
    --
    Kyle R. Rose, MIT LCS
  • Yet I learnt recently that the (open source) ssh was plagued by certain bugs (or were they features?) that allowed breaking it -- and the product had been widely used before someone took the time to analyze the code deeply enough to find these bugs.

  • Clarification on passwords as STO: If passwords were always known only to their creators, then that would be plain security, not STO. But many administrators and organizations "issue" passwords, so that they are known both to their users and also to a "select" group at the issuing organization. This is STO because the knowledge of the password by the "legitimate others" in addition to the user can (like all STO info) be compromised. Further, the principle of non-repudiation is violated: a user whose password was misused can always blame it on others who officially had access. Passwords created by, and known only to, their users don't have this problem, since there is only pure secrecy, no obscurity.
  • The claim that if there's a secret, it's not STO, assumes that it's clear-cut what is and is not a secret.

    Suppose I hide the exam answers in /home/ebcdic/old/letters. That's STO. But the way you find them is exhaustive search through my directory, just as the way you find my login password is by exhaustive search of character sequences.

    The difference of course is the scale of the search. A one character password is no better than an obscure directory.

    The important thing is to be able to quantify, or at least make explicit, the effort needed to break into the system. With prime-product based public key, you can say something like "breaking this requires either solving the factoring problem or X amount of work using the best known algorithm".
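    That kind of explicit statement is possible because the best known factoring algorithm has a published cost. A rough sketch of mine (the formula is the standard heuristic complexity of the general number field sieve) of what "X amount of work" looks like:

        import math

        def gnfs_work_bits(modulus_bits):
            """Heuristic GNFS cost L[1/3, (64/9)^(1/3)], expressed in bits of work."""
            ln_n = modulus_bits * math.log(2)
            c = (64 / 9) ** (1 / 3)
            return c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3) / math.log(2)

        print(round(gnfs_work_bits(1024)))   # ~87 bits of work for a 1024-bit modulus

    No such estimate exists for "find the exam answers somewhere in my home directory", which is the whole point.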

  • Factoring is not NP complete.

    Get a clue. For example, read Berkeley security class notes [berkeley.edu]

  • Okay, never mind the curious definition of STO adopted by the author.

    A former coworker and I were discussing STO, and he argued, rather successfully I think, that STO is probably the best way to go for organizations like the NSA, for the following reasons.

    First, the NSA employs a great many security experts and crackers. They have enough resources to perform in-house peer review.

    Second, the odds of someone outside of the NSA being a friend who would disclose any discovered weaknesses to the NSA for them to fix are not very high.

    There is the issue of moles and such leaking the info, or of the system being reverse-engineered (by sniffing air-bound packets?). The idea is that, by using their resources and internal peer review, they could create an algorithm that would be secure even if the algorithm were known.

    So it's not really STO, in the classic sense, but rather _Additional_ security through obscurity. Make your algorithm secure even if known, but don't give your enemies the algorithm for them to play with just to prove your point.

    Of course we both agreed that you should never use a STO-dependent system to talk to your bank or credit card company. :)
  • First of all STO is not real security. It cannot be your only or primary means of security. But in many cases STO is a convenient solution that should not be ignored.

    For instance, one problem with scripts is that they often have passwords hard-coded everywhere. What to do about it? Well, in some random spot, under an innocuous name, put a program that, when passed a series of semi-open secrets by the appropriate user(s) (and possibly only when called by the appropriate program), will return a password, and that logs attempts to call it inappropriately. In your scripts you can call on that program (a sketch follows below).

    This means that you no longer have the passwords hanging around everywhere, and it additionally means that when you change passwords, you can just change that one program and all of your scripts will continue to work.

    This is not, of course, appropriate for a high-security situation or a broadly used solution, but when your need is a trade-off between convenience and true security, this judicious use of STO is a hole, sure. But it is a hole that is reasonably hard to take advantage of. (Although if you know the rules for the program, it then becomes trivial!)
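    A minimal sketch of the idea (mine, not a production program -- the UID, token, and password are placeholders):

        import os
        import sys
        import syslog

        EXPECTED_UID = 1234               # hypothetical: the uid your scripts run as
        EXPECTED_TOKEN = "orange-teapot"  # hypothetical semi-open secret
        PASSWORD = "s3cret"               # the real password now lives in ONE place

        def main():
            if os.getuid() != EXPECTED_UID or sys.argv[1:] != [EXPECTED_TOKEN]:
                syslog.syslog(syslog.LOG_WARNING,
                              "password helper: bad call from uid %d" % os.getuid())
                return 1
            print(PASSWORD)
            return 0

        if __name__ == "__main__":
            sys.exit(main())

    Change the password, and only this one file changes; bad guesses at the calling convention get logged.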

    Cheers,
    Ben Tilly
  • Yes, yes and yes.

    I totally agree.

    BUT, you're simply redefining 'obscurity' to mean something different from what STO defines it to be. STO conventionally means that the attacker doesn't understand your particular setup and/or system; it is 'secure' because your system is of an obscure type. If you want to redefine 'obscure' to apply to secret keys and one-way hashes, then you are totally correct; if you are trying to debunk the notion that STO isn't real security, you have failed.

    Putting a webserver on another port doesn't buy you much safety. As a worst-case scenario, an attacker would be ignored after attempting to contact an unauthorized port -- the attacker simply needs more IP addresses to complete the attack. Try to say the same thing about the shadow password file.

    You can't.

    The wheel is turning but the hamster is dead.

  • by Gleef ( 86 ) on Tuesday August 17, 1999 @05:40AM (#1742964) Homepage
    First off, while Priestley does appear to have done some research, apparently he missed researching the definition of "Security Through Obscurity". Large chunks of the article try to say STO is essential because you have to keep keys secret. STO refers to systems where algorithms are kept secret, not keys or data. By extension, if bugs and security holes are kept secret, that too is STO, since it's information about the algorithm. There goes point number 1.

    Point number 2 is somewhat valid: if you only need security for a short time, you can get away with STO. However, such situations are rare, and you are just as well off with real security, so why risk STO? Back in WWII, encrypting messages for broadcast was difficult and expensive, so they needed to come up with other ways (Navajo code-talkers, Enigma, etc.); that is no longer a problem. Speaking of WWII, the German Enigma cipher is a classic example of the utter failure of STO, in a time-limited environment no less.

    Point number 3 is not security. Having tripwires in place might be handy against script kiddies, but a well informed hacker can avoid them. Even an uninformed hacker has a statistical chance of avoiding them; just by trying random ports, they might come across the real port before they find any tripwires. Unlike a medieval moat, a digital moat can be just jumped over without any planning or special equipment.

    Point number 4 is again based on the just plain wrong "But you keep keys hidden, that's STO" argument, but it makes a few other points as well. Yes, if someone finds a fast way to factor huge products of two primes, public key systems fall apart. Since the best minds in the world have been working on this problem for centuries without finding much, the chance of anyone finding a good solution right away is slim. In the mean time, open, non-STO public key systems with large keys are very secure.

    The phony certificate issue is not an issue of "Open Complicated Systems vs. STO"; it's an issue of "untrained users can compromise security". Public key systems offer easy ways of protecting against forged certificates, as long as they get used. User training and diligence is a critical part of any good security system; without it, you don't have security.

    The "Swedish Bank Account Number" example isn't an example of STO at all (unless you neglect to mention to anyone that the algorithm is an XOR of a key with the data). It's an issue of key management. On the other hand, using a simple algorithm like XOR would allow a cracker to get some useful information without needing to discover the whole key. More modern security algorithms don't have this hole.

    In conclusion, Priestley has shown little understanding of the real issues of security. He has come up with one case where STO is no worse than real security (but also no better), and a bunch of arguments based on misunderstandings that show he should hit some more textbooks.

    ----
  • see, told you so
  • Ok, maybe I'm just an idiot. But isn't the TSP just an exercise in pattern matching? It took me maybe 5 minutes to solve the version that the link in the article pointed to.
    It's just a matter of sorting connections until the string is complete. Enough conditions are given that any computer should be able to solve it in seconds. I suppose if there were 50 thousand cities you wanted to visit without ever visiting the same one twice it would be a lot harder... But I think, given that a solution exists, I can solve anything up to 12 or so cities.
    Probably more.
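    Then again, the catch is the growth rate. A quick brute-force sketch of mine (the four-city distance table is made up) shows why 12 cities is easy and 50 thousand is hopeless:

        import math
        from itertools import permutations

        def brute_force_tsp(dist):
            """Try every tour starting and ending at city 0: (n-1)! orderings."""
            best = None
            for order in permutations(range(1, len(dist))):
                tour = (0,) + order + (0,)
                length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
                if best is None or length < best:
                    best = length
            return best

        dist = [[0, 2, 9, 10],
                [2, 0, 6, 4],
                [9, 6, 0, 8],
                [10, 4, 8, 0]]
        print(brute_force_tsp(dist))     # 23
        print(math.factorial(11))        # 12 cities: ~40 million tours, feasible
        # 50,000 cities: 49999! tours -- no computer will ever enumerate them.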

    Kintanon
  • Remember the LinuxPPC challenge? The root password will *not* in general help in getting into a Linux box.

    Cheers,
    Ben
  • Everyone in their right mind blocks ICMP from any but a handful of trusted sources. A net-connected host is required to respond to daytime as well. Quick show of hands, now: how many people run daytime?
  • Factoring is not NP complete.

    Make that not known to be NP-complete. It is clearly in NP. The question is whether it is complete. To which I note that if, for instance, P=NP, then factoring would trivially be an NP-complete problem. :-P

    Although it does seem doubtful that it would be NP-Complete.

    Cheers,
    Ben
  • Of course, had the code been closed source, it would not have been analysed at all except by people with source code access. The bugs may then not have been found at all!
  • I think part of the point of his article is that the definition often used for STO is not in fact what the words, or even the concept, mean; certain aspects of STO are obviously stupid, as you pointed out, but "nobody knows about" my passwords, and I hope to keep it that way.
  • Right, and we haven't even dealt with the "Mixing" stage yet. The way I see it, given a small number of nucleotides, you aren't even guaranteed a full sample space. If you have just enough nucleotides to make a full sample space, Mixing will be slow as the number of free nucleotides drops to zero, and there's no way to prevent duplicate strands, so you won't get a full space anyway. Increasing the vat will get you faster Mixing, and exponentially more duplicates and wrong answers to check.

    As the problems get more complicated, strands are more likely to break, further compounding the situation.

    This makes any real DNA-computer solution of the Traveling Salesman look awfully NP to me.

    ----
  • I wouldn't say it was obvious... there's a difference between planned STO and having it just turn out that way.
    I guess the STO approach gets a bad rep for being the security model you're using when you aren't using a security model.
    At any rate, I guess that an awful lot of people rely on obscurity as their safety net. When you do leave a security hole by mistake, at least there's the chance that nobody will notice.
    Anyway, isn't obscurity the main weapon of the CIA et al.? "Need-to-know" and all? You don't see them talking about their spying arrangements openly, just so that people can mail in helpful suggestions...
  • When I speak of STO, passwords don't count. They are information points. Yes, they are obscured. But generally, STO to me refers to architecture, not information points.

    So the test that I use is this: if the thing that I'm obscuring should become public, how difficult is it to change that thing? If a password gets out, it is very easy to change it and retain the same level of security that we had prior to the disclosure of the information.

    But if the security of the thing I'm trying to build is based on secret knowledge of how its built, then I'm in trouble. If the only security that I use is architectural obscurity, then if knowledge of the architecture becomes public, I have to completely rebuild in order to improve security.

    For example, imagine a bank puts its electronic vault on a very specific IP address and a very specific port, speaking some strange heretofore unknown protocol, and that's the only security used to protect access to the electronic vault... well, that's a bank I'd be pulling my money from. Because if someone found the IP address, found the port, and reverse-engineered the protocol, the amount of effort it would take to re-architect that electronic vault is about equivalent to what it took to build it. That bank would be left with few choices.

    The latter, to me, is what is meant by STO. Certainly hiding things that people don't need to know about is a good thing, but it better not be the only thing. And part of the other stuff that you use must be easily changed in the event of disclosure of information.

  • > To continue my rant, the factoring is not "obscure". RSA would be "obscure" if it relied on some "magic" number - that is, in finding this single "magic" number, one could decode *all* messages encrypted with RSA. Since the public key is, well, public, the strength of the algorithm lies in the difficulty of factoring an arbitrary large composite number that has only 4 factors.

    Actually, I think he was referring to the fact that the PRIVATE key must be kept obscure. Naturally. This is akin to his point about passwords being obscure. You can't tell someone your password, or it's useless. Same with RSA. You can't give out your private key, or it's useless.

    In any case, yes, I would say that by this definition ALL security relies on some amount of obscurity. But big deal. This is more or less axiomatic, and the article's point seems moot. If that's all he was trying to say, I think it's safe to say we knew that. But that doesn't mean that simple obscurity (such as the web page on a different port) is good security. It's not. It's highly insecure. But it is cheap, isn't it? As in all things, you must evaluate the need along with the price.


    ---
  • The Clipper algorithm, called "Skipjack", has now been released and cryptanalyzed by the public cryptography community. Interestingly, an attack has been found against a version reduced by just one round -- 31 of its 32 rounds. The question of whether the NSA has unknown technologies capable of breaking it at full strength is unanswerable, but suspicions are that the answer is "yes". But it currently cannot be broken via publicly-available means.

    -E
  • Not everyone is paranoid enough to check the authority signature. Yes, they should, but I'm not sure everyone will go to the bother, or that everyone knows how. For a lot of people, the words "digisign security certificate below" followed by the results of a cat /dev/random ought to be enough that they will just _assume_ it's valid without ever bothering to check. I think that was the original point in the article.
  • Yes, but with operating systems such as Linux and OpenBSD where the source code is available, White Hats can look for security holes preemptively. With Windows, there might be huge gaping flaws that are known internally at Microsoft but ignored because "no one will figure them out."
  • One of the remarkably few papers presented by Microsoft at this year's Siggraph conference was on watermarking of 3D models. The idea of watermarking, of course, is to embed some signature of the creator of the actor/model/whatever in the object itself, in a way that is robust. Hugues Hoppe, along with Emil Praun and Adam Finkelstein of Princeton, did indeed come up with a way of doing just that, by copying (extending? embracing?) a technique from the audio and image watermarking body of knowledge -- he encoded the watermark in the most salient features of the model, at various frequencies. The author of the model saves the original model and the watermark parameters, and releases the watermarked model to the customer.

    An interesting part of this, though, is that the watermarking algorithm -- or in this case, the parameters passed to the algorithm -- must be secret, or it would be trivial to defeat. Furthermore, there is an interesting question when it comes to proving that someone has stolen the model. The original author must produce both the original model and the watermark algorithm to verify to the rest of the world that the suspect model is indeed a copy. Now, I would expect Microsoft (say) to object to this -- because if they release the details of the algorithm then anybody could defeat it. On the other hand, are you going to trust the alleged owner of the original when he says "Oh yes, the watermark demonstrates that it's my model. I'm sorry, but I cannot prove this to you."?

    Even then, it is difficult (though not impossible) to conceive of a watermarking system that doesn't allow post-hoc watermarking; that is, generation of an 'original' from the suspect copy. Hoppe does note this in his article, but just in passing.

    Finally, for you other Microsoft-haters out there, in Future Work Hoppe includes "an automated agent such as a web crawler to search for possible stolen watermarked documents." Here's the article
  • Wrt daytime: host requirements are something entirely different.

    If you completely disable ICMP, TCL will break, because PMTU discovery doesn't work anymore.

    It's safe to disable some ICMP types, but don't disable it completely. More info can be found here:

    http://www.worldgate.com/~marcs/mtu/ [worldgate.com]
    --
    OS lover
  • s/TCL/TCP/ of course. Preview is my best friend
    --
    OS lover
  • I'm not sure I completely understood what he was getting at with certificates, but an interesting social engineering "attack" using PGP would be to simply place a phony PGP signature in an e-mail that you knew was being sent to someone who did not have PGP. Or, for example, I could slap a PGP signature in this box, with the hope that no one would bother to actually go to the effort of checking it. Many people would simply assume that I was who I said I was simply because it "looked" right. I'd bet good money that no one would bother to check unless some other sign that the message was false appeared.

    Forgive my ignorance about certificates, but isn't the process that assigns certificates automated? If so, couldn't I get a certificate as "Microshaft Corporation"? How closely do people read those certificate pop-ups? If you're like me (I shouldn't admit this), you impatiently click "always trust" after a glance.
  • The reality is that there exist encryption algorithms whose output could only be decrypted if a) you have the key, or b) you have the mythical quantum computer. With current technology, using a large key with a modern algorithm like Twofish is pretty much unbreakable. Even with technology foreseeable for the next twenty years it will remain unbreakable.

    Public key (asymmetrical) encryption is theoretically breakable via mathematical attacks, i.e., a quantum leap in our knowledge of mathematics could render it worthless, because the two keys are mathematically related. There are algorithms available now that are stronger for a given key length than the popular RSA algorithm (see elliptic curve cryptography for an example), though those rest on a different hard problem (elliptic-curve discrete logarithms rather than factoring). One reason to use an asymmetrical algorithm only to exchange the keys for a symmetrical algorithm is that this reduces the amount of text transmitted under the asymmetrical algorithm and thus presumably gives an attacker less to work with.

    In reality, when faced with a 1024-bit key and a modern encryption algorithm, you're not going to decrypt it without the key. But of course there are always side-channel attacks. If you have physical access to one end of the system to be cracked, for example, you can always just "look over the guy's shoulder" as he types in the password unlocking his keychain and reads the decrypted text. This could be via electronic surveillance literally looking over his shoulder, via sneaking a virus into his system that records his keystrokes and the bitstreams moving between his encryption program and the hard drive and floppy drive, or via rigging the encryption program itself so that it will EMAIL you the key. Or it could be buying the key from him. In any event, this is by far the most likely way of cracking a secure cryptographic system -- the algorithms themselves are pretty much uncrackable, but key management is always a problem.

    And of course the strength of the encryption has nothing to do with the strength of the overall cryptographic system. For example, one version of MS CHAP encrypted the password and sent it to the server -- but it did not mix a server-supplied challenge (a salt) into what it encrypted. Thus all a bogon had to do was sniff the encrypted password and replay it himself, and voila, as far as NT was concerned, the bogon was you.
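    A tiny sketch of the difference (mine; SHA-256 stands in for the actual MS CHAP primitives):

        import hashlib
        import os

        def response(password, challenge):
            """Hash a fresh server challenge together with the password."""
            return hashlib.sha256(challenge + password).digest()

        # No challenge: the 'encrypted' password is a constant, so a sniffer
        # can replay it verbatim.
        replayable = hashlib.sha256(b"hunter2").digest()
        assert replayable == hashlib.sha256(b"hunter2").digest()

        # With a random per-login challenge, yesterday's sniffed response is
        # worthless today.
        c1, c2 = os.urandom(16), os.urandom(16)
        assert response(b"hunter2", c1) != response(b"hunter2", c2)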

    -E

  • DNA computers, of course, attack this problem (and related ones) by replicating and pursuing parallel tracks, breaking up the solution space and searching it (in principle) exhaustively. For a sufficiently large problem, one therefore needs an impractically large quantity of DNA to provide the resources.

    Quantum computers (in THEORY, anyway - in THEORY, communism works...) are often described as searching solution spaces in parallel, but they are not true nondeterministic automata: the best known general quantum search, Grover's algorithm, gives only a quadratic speedup over classical exhaustive search, and it is not known - or widely believed - that quantum computers can solve NP-complete problems in polynomial time. (Shor's algorithm does factor in polynomial time, but factoring is not known to be NP-complete.) Whether there is a limit on how large a coherent quantum computer can be made, analogously to the DNA computers, is unclear at this time, since no working models of any useful size have been built.

    The applicability of approximate algorithms for solving the TSP to the parallel question of code-breaking is, of course, doubtful. In the case of the TSP, one has a metric by which "approximate" solutions can be judged - and there are algorithms which can guarantee a result within, say, 10% of the best possible answer. In code-breaking, however, a wrong key produces gibberish, and there are no "approximately right" keys which decrypt SOME of the information only.

    In other words, barring an engineering breakthrough in quantum computation, the unanswered question of whether P=NP remains the principal theoretical weak point in modern cryptosystems. Assuming P!=NP, and assuming no working, size-unlimited quantum computers can be built (for whatever reason, such as as-yet unclear theoretical limits to maintaining coherence), then increasing the key size will always move the problem of attack beyond the abilities of the largest available computer, while keeping "legitimate" (i.e. key-enabled) decryption and encryption feasible.

    Oh, assuming nobody invents time-travel, of course.

  • by Cramer ( 69040 ) on Tuesday August 17, 1999 @06:21AM (#1742993) Homepage
    This is certainly one of my pet peeves... IDIOT firewall admins who turn off ICMP entirely, or worse, turn off everything but ICMP echo. ICMP exists for a reason. Sure, there are some parts that can be turned off without a problem, but you need to understand what the f*** you're doing -- which is at least 105% of the problem.

    As for that BS about routers using "ping"... no they don't -- or more accurately, none that are worth their weight in twinkies. There are much better ways to judge distance -- oh, say, like the TTL in any IP packet. (Note: bind does this already.) Additionally, if you knew anything at all about ICMP, you'd know there is no (zero, none!) transmission assurance for ICMP traffic. Nothing is going to alert you that an ICMP message never got to its destination, nor will anything ever retransmit an ICMP message. (RFC) Rule #1: NEVER SEND AN ICMP MESSAGE ABOUT AN ICMP MESSAGE.

    For the record, the parts that can be turned off without breaking the network stack are:

    ICMP_ECHOREPLY 0 /* Echo Reply */
    ICMP_ECHO 8 /* Echo Request */
    ICMP_INFO_REQUEST 15 /* Information Request */
    ICMP_INFO_REPLY 16 /* Information Reply */
    ICMP_ADDRESS 17 /* Address Mask Request */
    ICMP_ADDRESSREPLY 18 /* Address Mask Reply */

    Note: Turning off address mask info will break HP OpenView.
  • Secrecy and obscurity are fundamentally different in an information-theoretic sense. If you want to ask whether a particular fact, compromise of which might break your system, is a proper secret or mere obscurity, ask yourself this: how difficult is it to measure the *entropy* of that fact, expressed as a number of bits unknown to the attacker?

    If it's, say, a randomly-generated 56-bit DES key, then the answer is easy: 56 bits. If it's a 1024-bit RSA private key, then it's somewhat harder, but the answer will be around the 1000-bit range.

    If it's a passphrase, it's probably around 40 bits or worse - people are very bad at choosing passphrases, so some care has to go into making guessing attacks difficult.

    But if it's a particular implementation issue, or an encryption algorithm you're keeping secret, how big is the algorithm in bits? In other words, how big and how regular is the space of algorithms from which it's drawn? Are there perhaps a thousand algorithms that you might have been equally likely to choose instead (10 bits)? Or ten thousand, but some are more likely than others (13 bits or less)? It's almost impossible to make a sensible estimate, and so actually working out how much security you get from keeping it secret isn't possible. *That's* what security through obscurity is and why it's bad.
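    A couple of illustrative figures (my arithmetic, assuming uniformly random choices -- real passphrases are worse):

        import math

        def entropy_bits(equally_likely_choices):
            """Entropy of a secret drawn uniformly from a set of that size."""
            return math.log2(equally_likely_choices)

        print(entropy_bits(2 ** 56))   # random DES key: exactly 56.0 bits
        print(entropy_bits(95 ** 8))   # 8 random printable-ASCII chars: ~52.6 bits
        print(entropy_bits(1000))      # "one of a thousand plausible algorithms": ~10 bits

    The first two are straightforward to compute; the last is a guess about a space nobody can enumerate, which is precisely the problem.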
    --
  • Several years ago, I was about to buy a safe for my law office. As I lifted the safe off my shelf and on to the cart, I realized that it wouldn't be much use . . .
  • Someone, please! Moderate these posts up!

    Because of people blindly blocking all ICMP, PMTUD (path maximum transmission unit discovery) is horribly broken for a large percentage of the internet.

    The problem appears when you have a link between yourself and the destination that uses an MTU smaller than the common 1500. What usually happens is that small transfers work OK, but large transfers don't. So you may be able to (for example) log in and get a directory listing from FTP sites, and even download small files, but trying to download large files just doesn't work properly. This has frustrated many people, as the problem is not easy to figure out. And once you do figure it out, the only way to fix things is to complain to the offending firewall operator, who will usually give a response like "Everything works fine for me. Must be a problem with your system.", or something similar.

    "You're violating RFC xxxx" is just no match for "It works fine for me."

    The only real solution is education.

    So spread the word. Save the net.

  • I suggest buying a copy of Bruce Schneier's book "Applied Cryptography". He also has a web site (http://www.counterpane.com) which has a lot of information that has trickled up through the ranks since the time that he wrote his book.

    You are referring to what are called "side-channel attacks" when you talk about bribing the folks who do the encoding, and you are correct, side-channel attacks are the only effective attack against modern cryptographic techniques. You are referring to what is called a "man in the middle attack" when you set yourself up as the intended recipient of the encoded message, but there are known techniques for dealing with "man in the middle attacks" (read Bruce's book).

    A cryptographic algorithm itself does not rely upon obscurity. The data is not obscure, it is effectively randomized. It does rely upon secrecy, in particular, upon the secrecy of the key. But secrecy is a different thing from obscurity -- obscurity is hiding a needle in a haystack in hopes that nobody will find it, while secrecy is carrying the needle around with you in your wallet. The difference is that someone might accidentally stumble over the needle in the haystack, but the needle in your wallet is never going to be stumbled upon by any intruder. Of course, if you leave it sitting on top of your dresser (like most people do with their "secret" keys), it's no longer safe from somebody stealing it! But that's a different issue altogether.

    -E

  • While this was an interesting article, it wasn't a good defence of STO. The author appears to be arguing that STO, while not a good security defence, can make a good security buffer. These are totally different things. So really he's saying that STO is good to have in your toolbox even though it's not a good defence - and this is what we've been saying all along.

    It's not what I've been seeing all along. I've seen countless articles and rants about how STO is evil and should be completely abolished, since it keeps people from finding problems to be fixed. I consider this wrong. Problems should be fixed, and the system kept as secure as possible, but there is nothing wrong with using obscurity as an additional security measure. For example, running your FTP server on port 22875. Anybody who needs to access your FTP server knows what port it's on (or follows a link with that information in it), but the various script kiddies scanning for open port 21s with their buffer overflow exploits don't find you. You should still patch for all the exploits, of course, but this security-through-obscurity measure could buy you a few days, months, or years before one of your unpatched vulnerabilities (if you missed one, or are behind) actually gets exploited.
