Secrets & Lies: Digital Security in a Networked World
author | Bruce Schneier |
pages | 412 |
publisher | John Wiley & Sons, 09/2000 |
rating | 10 |
reviewer | Jeff "hemos" Bates |
ISBN | 0471253111 |
summary | A well-written, well-researched exploration of digital security as a system. |
I've recently had the pleasure of reading Bruce Schneier's latest writing effort, Secrets and Lies: Digital Security in a Networked World. A number of our readers may remember his prior book Applied Cryptography, which discussed the use of cryptography in our brave new digital world, and how the use of cryptography would make things secure.
This time around, Schneier is much more circumspect about the uses and application of cryptography. As he states in the introduction and throughout the book, when writing AC, he thought that the use of cryptography would make things more secure. He was correct - but the lesson he learned while working with companies and individuals is that we can't just add cryptography to a system and make it secure; systems must be designed from the bottom up with security in mind. S&L draws upon a huge amount of experience working in the security field, making one central point: Any system, no matter how good the cryptography is, is only as strong as its weakest link. Yes, that's an old cliche, but it's one that bears repeating.
What makes it even more imperative to design systems to be secure is the sheer number of systems that aren't secure, and what that means for us. Some of the examples Schneier uses in S&L are simply frightening to consider were they to occur. And some of his ideas about what will come, and the tools we have, will make you want to keep a good stash of gold Krugerrands under your mattress.
Indeed, as he talks about in the introduction, part of the reason this book took so long to write was that he was depressed at the world of security around him. Looking at what companies were doing, at what people were doing, and the sheer number of security holes out there must be depressing - but it only drives home the point even more that we must design *systems*, not just add cryptography and think it's the magic pixie dust that can make everything better.
The book does an exceptional job of wending its way through various security measures, how they work, and how they fail. IMHO, one of the real strengths of this book is that a cryptography novice could read it, as well as an expert. Certain sections of the book are dedicated to the nitty-gritty behind systems, but there are also sections dedicated to simply laying out the process by which one should approach those systems. Indeed, a supporting blurb on the dust jacket is written by Jay S. Walker, the founder of priceline.com. This adds to the strength of the claim that the book can be for everyone.
Schneier is intimately involved with the security community - besides being the creator of the [Blowfish] and [Twofish] encryption algorithms and a frequent speaker at technical conferences, his company deals with this day in and day out. More to the point for a book, he can also write. It makes reading about Product Testing and Verification (Chapter 22) a treat rather than a snooze. The book is one of those rare cross-overs - something to give your geek friends, and your [PHB], all of whom will appreciate it. The breadth of the book is revealed in the contents (Duh), and it's a good mixture of all the necessary elements. You'll learn about entropy in a system as well as Attack Trees, Threat Modeling and what all of this stuff means in day-to-day life.
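The attack-tree idea the review mentions is easy to sketch in code. Below is a minimal, invented illustration (the tree, the costs, and the representation are mine, not from the book): an attacker takes the cheapest branch of an OR node but must pay for every branch of an AND node, so the cheapest path through the tree - the weakest link - prices the whole system.

```python
# Minimal attack-tree sketch: nodes are ("leaf", cost), ("or", [children])
# or ("and", [children]).  OR means any one child attack suffices (take the
# cheapest); AND means every child attack is required (sum them).

def cost(node):
    kind, payload = node[0], node[1]
    if kind == "leaf":
        return payload  # payload is just the attack's cost
    child_costs = [cost(c) for c in payload]
    return min(child_costs) if kind == "or" else sum(child_costs)

# A made-up "read the secret file" tree: breaking the crypto is one option,
# but bribing a cleaner AND reading a sticky note is far cheaper.
tree = ("or", [
    ("leaf", 100_000),                  # brute-force the cryptography
    ("and", [("leaf", 500),             # bribe a cleaner for office access
             ("leaf", 200)]),           # read the password off the monitor
])

print(cost(tree))  # 700 -- the weakest link, and it isn't the cryptography
```

The point of the exercise is exactly the review's central one: the analysis tells you where to spend your defensive budget, and it is rarely on stronger ciphers.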
I wholeheartedly recommend this book.
The Table of Contents and the preface are available on Counterpane's site; S&L's Chapter Three is on Amazon.
Purchase this book at ThinkGeek.
Re:Anyone noticed? (Score:1)
20,000 for a review? Cripes - try $30. And of course I don't post negative reviews - I prefer to promote rather than demote. If we had to demote, it would mean half of the stories would be book reviews.
However ... (Score:1)
I'm not nearly as impressed with this book as everyone else seems to be. I have two issues:
I usually love Schneier's stuff; I just think that the market for this book isn't knowledgeable geeks.
--
Re:wrong definition of closed. (Score:1)
I see... well I tried.
"Free your mind and your ass will follow"
What we're waiting for (Score:1)
Chapter 5 - How to Spot a Troll post
Re:He knows cryptography but doesn't know programm (Score:1)
Unfortunately I believe you are wrong - all code within the (Linux) kernel operates with root privileges.
Moreover, as these things operate within the kernel, you can pull all kinds of tricks to keep them hidden.
Take a look at this [pulhas.org] link for a nice discussion of this issue.
Re:Security isn't important (Score:1)
Somehow, I don't think that would deter the Danish students who try to break into my company's systems...
Re:Security isn't important (Score:1)
and, even if there were, unless you lose a lot of money, the FBI simply isn't interested.
for example, domain hijacking via Network Solutions' crappy email verification system - due to their weak security (and my not knowing enough to choose the better methods that they do offer but don't promote), i lost my web site for almost a week while NetSol sat on their thumbs. when i called the FBI to see about finding the asshole who stole my domain, they flat out told me that they would not do anything unless the losses exceeded five thousand dollars. this is certainly not a lot of money to a large company, but it's a whole month's worth of business to me.
so, to put it simply, the FBI isn't interested in people who give you headaches. they will only act when there is sufficient money at stake.
Re:I agree...but (Score:1)
Just to be pedantic, that was Arthur C. Clarke. But it's ok, he wrote some good Sci-Fi too.
Re:Security isn't important (Score:1)
And the best and 100% sure way to achieve that is to have no valuable data at all! That way, when they do steal your worthless data, they will actually have spent more on breaking in than it was worth.
Even better, if most people did that, crime would disappear completely, since there would be no incentive for anyone to steal. Of course, people might not be so happy to have nothing that is useful, but you must sacrifice a few valuable things to achieve such a great goal.
A is not for Amazon (Score:1)
For example "Camilla P rker Bowles", and "ny" instead of "any".
Security isn't important (Score:1)
There's a whole lot more to running a company than systems security. Security is one factor, kind of like an insurance policy. Who cares about the security of a system if all the potential hackers are scared to death of being sued for every potential intrusion? To use an analogy, why lock your doors when you've got a yogurt-fed pitbull waiting on the porch?
As a matter of fact, I think we should extend that concept to personal security. What we need in North America is a law that allows personal users to sue hackers who portscan your computer. Do you really think they'd be as eager to fuck around with your system after that?
I think hackers will think twice about trying to break the law by electronically breaking and entering once a few of their teenage pals are sent to jail by the FBI.
Re:Security isn't important (Score:1)
Who cares about being sued? Very few people, actually, unless they're going up against deep-pockets corporations who seem to LOVE to sue everyone to get their way. Start a lawsuit over a port scan? That's like shooting someone for peeking into your car windows in the Safeway parking lot. And, personally, I seriously doubt some script kiddie in Turkey (to grab a country out of the air) has anything to worry about worse than losing his dialup account.
And lock your door. The pitbull likes hamburger...
Re:Social Engineering (Score:1)
My bank works by identifying you by phone using a challenge-response mechanism. The clerk on the phone CANNOT access any information about your account except for your name until he or she enters the correct response to the challenge into the computer (which blocks your account after two unsuccessful attempts, requiring an alternative method).
Is it possible to go along these lines and plan a system in which the human factor cannot affect security?
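As one sketch of an answer, the scheme described above is straightforward to model. Everything below - the names, HMAC as the response function, the two-strike lockout - is my guess at the details for illustration, not any real bank's protocol:

```python
import hashlib
import hmac
import secrets

def respond(shared_secret: bytes, challenge: bytes) -> bytes:
    """What the customer's side computes from the bank's challenge."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

class Account:
    """The bank's side: issues challenges, locks after two wrong responses."""

    def __init__(self, shared_secret: bytes):
        self._secret = shared_secret
        self.failures = 0
        self.locked = False

    def challenge(self) -> bytes:
        self._pending = secrets.token_bytes(16)  # fresh, so replays fail
        return self._pending

    def verify(self, response: bytes) -> bool:
        if self.locked:
            return False
        expected = respond(self._secret, self._pending)
        if hmac.compare_digest(expected, response):
            self.failures = 0
            return True
        self.failures += 1
        self.locked = self.failures >= 2         # two strikes, as described
        return False

secret = b"customer-shared-secret"
acct = Account(secret)
print(acct.verify(respond(secret, acct.challenge())))  # True: right secret
print(acct.verify(respond(b"bad", acct.challenge())))  # False: strike one
print(acct.verify(respond(b"bad", acct.challenge())))  # False: strike two
print(acct.locked)                                     # True: account blocked
```

The clerk never sees the secret, only a pass/fail from the computer - which is exactly the property that takes the human factor out of the loop.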
Cling goes my 2c...
Re:The Code Book (Score:1)
"Ooo look, One Click Patent! We're freakin geniuses. Let's make everyone pay 100 BILLION dollars to use our patents!"
A few shortcomings of the book (Score:1)
I haven't finished it yet, but the section on access control was a relatively unhelpful mix of theory and practice with some confusion. E.g.:
p. 124 "In Unix, ownership is per file, and is usually determined by which directory the file is in."
The ownership may be statistically correlated with directory ownership, as in any system, but is never "determined" by the directory.
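A quick way to see the objection on any POSIX box (a small illustration of my own, not from the book): ownership is a per-inode field that os.stat() reads directly, and moving a file between directories leaves it untouched.

```python
import os
import shutil
import tempfile

# On POSIX systems, a file's owner lives in its inode, not in the directory
# that contains it.  Create a file in one directory, move it to another,
# and show the owner is unchanged (and is simply whoever created the file).
with tempfile.TemporaryDirectory() as d1, tempfile.TemporaryDirectory() as d2:
    src = os.path.join(d1, "example.txt")
    with open(src, "w") as f:
        f.write("hello")
    owner_before = os.stat(src).st_uid
    dst = shutil.move(src, os.path.join(d2, "example.txt"))
    print(os.stat(dst).st_uid == owner_before)  # True: directory didn't matter
    print(owner_before == os.getuid())          # True: the creator owns it
```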
On p. 100 it says that a 128-bit symmetric key "will be secure for a millennium". I thought that quantum computers (if we figure out how to build them in the next millennium) would mount a very plausible attack on 128-bit keys - Grover's algorithm effectively halves the key length - and I assume this is why AES specifies a 256-bit key option.
On page 74 a distinction is drawn between integrity (accurate copying) and authentication. But the examples on p. 75 demonstrate the very confusion that was highlighted. Kurt Vonnegut's "1997 MIT commencement address" was an authentication problem, not an integrity problem, since the speech was presumably not modified since its creation by Mary Schmich. The same goes for the next two examples on that page.
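The distinction the commenter draws can be made concrete in code (message and key invented for illustration): a plain hash only shows the bits arrived unmodified - integrity - while a keyed MAC ties the message to whoever holds the key - authentication. The Schmich speech would pass any integrity check while still being misattributed.

```python
import hashlib
import hmac

msg = b"Wear sunscreen."

# Integrity: a bare hash detects accidental or malicious modification,
# but anyone can recompute it over any message, so it proves no origin.
digest = hashlib.sha256(msg).hexdigest()

# Authentication: only a holder of the key can produce a valid tag.
key = b"shared-secret"  # made-up key for illustration
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

# A forger without the key can reproduce the hash but not the tag:
fake_digest = hashlib.sha256(msg).hexdigest()
fake_tag = hmac.new(b"wrong-key", msg, hashlib.sha256).hexdigest()
print(fake_digest == digest)               # True: the hash proves nothing
print(hmac.compare_digest(fake_tag, tag))  # False: the tag needs the key
```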
On p. 103 it describes most residential locks as having 5 pins, 10 positions per pin => 100,000 possible keys. It would be fun in some medium-sized group (about 350 keys, less than 100 people?) to look for a key collision - I think the odds would be close to even.... A bit unnerving also.
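The collision hunt is the classic birthday problem, and the estimate can be checked exactly (assuming, perhaps optimistically, that all 100,000 keyings are equally likely):

```python
# Birthday-problem check: with N = 10**5 equally likely keyings
# (5 pins x 10 positions), how likely is at least one collision
# among n keys?  Exact product formula, no approximation.

def collision_prob(n: int, N: int) -> float:
    p_unique = 1.0
    for i in range(n):
        p_unique *= (N - i) / N   # i-th key avoids all i previous ones
    return 1.0 - p_unique

print(round(collision_prob(350, 10**5), 3))  # about 0.457: just under even
```

So with 350 keys the odds are a hair against; a group of roughly 375 keys would tip past 50%.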
An example on p. 109 describes, by way of analogy, physical certified checks as an element in a protocol which helps with selling cars. It makes me wonder how one would really go about verifying that a "certified check" is actually worth anything.
None of this detracts from the main points of the book. Again, I haven't finished it, but I don't see much warning about ineffective security decrees that just waste people's time. E.g. I often see commercial security organizations mandate time-consuming or inconvenient practices to guard against threats that pale in comparison to other threats which they ignore. This can hurt the bottom line without really improving security, and without a positive bottom line, the security of the assets can become irrelevant.
He clearly makes the point that security is a business decision, but some examples of the sort of unnecessary "security" overhead that just wastes time and resources would add a lot.
--Neal
Re:Security isn't important (Score:1)
I must agree. Ever since they made murder illegal, it has virtually stopped. I routinely walk around Harlem at 3am just to watch the would-be thieves. Thief: "Damn! Shooting him and stealing his wallet is ILLEGAL. Remember when Joe got arrested 3 years ago? Don't wanna end up like that!"
Yup. What this country (any country) needs is more legislation. God only knows how many times I think about speeding, only to remember my friend who got a ticket. Poor bastard had to pay fifty bucks. Brings tears to my eyes.
A lot of hackers (bad intentioned, of course) have been hauled off. That is what supports the image. Your typical port scanning script running 3733T h@x0r is a 13 year old 50 lb. geek getting his/her jollies off by doing something illegal, and getting away with it. He/she would probably be smoking a doobie and underage drinking instead, but doesn't get invited to those types of parties.
You also have pros out there who are blackmailing/embezzling funds. What they are doing is already illegal. Making it more illegal does nothing. Extra charges in court cases are used as bargaining chips toward the big charge anyway.
A stupid law will do nothing. It will encourage punk kids, and be a joke to the professional thief.
Excuse me while my renegade team of bikers cuts the tags off of mattresses.
Re:Security isn't important (Score:1)
Having done the honest communication, here's the flamebait: I am really sick of the attitude that every problem can be solved with enough force and nastiness. We're notorious in the U.S. for our over-reliance on intolerant laws, official and private violence, and other excessive forms of "deterrence." Such measures create mostly a hormone-based illusion of security -- and cause a lot of harm in their own right.
By the way, starving an animal to make him nasty is illegal in most places -- and morally dubious everywhere.
Re:Security isn't important (Score:1)
Because at some point, we have to collectively realize that wanting lawyers and the FBI to babysit us can never end well. While I agree that security isn't the answer to everything, neither is legislation.
who's to blame for hacking? (Score:1)
Instead of making hacking illegal, maybe laws should be set up such that being hacked is illegal. Administrators and software vendors should be made accountable for what they are paid to do.
such a situation would force industry to take care of itself instead of being crybabies every time someone enters their system.
but i guess it's easier to leave things the way they are ... easier to blame teenagers for problems than the people who set it up.
hardest word in the world?! (Score:1)
What's more, I believe Arthur C. Clarke dreamt up the concept of the geostationary earth orbit communications satellite (two Ls, one T near each end)... OK, I work in sat-comms, but it seems like no one can spell the word.
Advertising ? (Score:1)
stop with banner ads
Re:Anyone noticed? (Score:1)
Barnes & Noble carries the same excerpt, and I don't get the impression there is a huge anti-bn sentiment on Slashdot.
Re:I agree...but (Score:1)
I am partially through this book and I have to agree that even though the points are valid, it leans towards fear mongering in tone. I don't think this is based on his mood so much as his business. There's a Slashdot article about an "article" by Bruce that is more about selling his security company than about how to handle the problem. [slashdot.org]
The Mitnick Equation: Case Law (Score:1)
when i called the FBI to see about finding the asshole who stole my domain, they flat out told me that they would not do anything unless the losses exceeded five thousand dollars
Don't forget that case law now provides that you can calculate the cost of your losses based on the expense of re-engineering the HTTP protocol from the ground up, just as per the equation used in the Kevin Mitnick case. Your losses do not have to have anything to do with reality. Law is the "res publica" that applies to all, not just the rich.
Cryptography and Security (Score:1)
Even AI relies on the innate structure we give it: When we design computers to design computers, they're still built with certain limitations in what is possible for us to conceive. The idea of security is kind of like the idea for perpetual motion--the objective exists in a different realm than the purpose. I can see how the author would become depressed over this, for it seems to be a dilemma. It is actually only a reality which must be understood before approaching the unapproachable, like perfect security. I will be interested to read the book to see how he approaches this dilemma. -Water Paradox
shopping with Amazon is 100% safe. Guaranteed. (Score:1)
Shopping with us is 100% safe.
Guaranteed.
Security - more than just computers (Score:1)
Bland statement? Probably so; however, this sort of thing is important: I like my private data secure, and in a climate where ever more of your data is stored on various networks, you should too.
The problem is that _computer_ security is not enough. Unless staff who work in offices with networked computers are similarly vigilant (particularly in non-open-plan offices), anyone can just walk in, look for an unoccupied office where someone has logged in and gone for lunch, and help themselves to that person's level of access.
In the company I worked for during the summer the security ran on what I like to call the 'sea urchin' model (there is probably a formal name of which I am unaware); tough as hell on the outside but gooey once defences are breached.
Example: Okay, you've been changing the layout of one of the databases and you're logged in with root access. You go for lunch but get sidetracked into an impromptu meeting. Someone with mischief on their mind walks in and can then do some unpleasant stuff - or gain access to privileged data. Unrealistic? Alas, I think not.
Elgon
Re:He knows cryptography but doesn't know programm (Score:1)
sequence_man, you appear (to my eyes) to be as foolishly idealistic as Schneier now says he was when he wrote Applied Cryptography. Where Schneier was moved by his faith in the theoretical perfection of the mathematics, you seem to be standing on the rock of software engineering. This is indeed a dubious foundation.
I think you have missed the vast bulk of what Schneier was trying to say with Secrets and Lies. You've focused entirely on the idea that 1) because code is very complex and constantly growing more so, therefore 2) there will always be bugs, and therefore 3) we can't have security. I think Schneier did make a point somewhat like that, but that's really one of his minor points: a bit of shrubbery in the forest you have overlooked.
If I had to summarize the book's main points in one sentence, it would be this: Security in the real world is hard, because it deals with many things, most of them complex, few of them subject to the kinds of precision or rules that you would hope for or expect from things like mathematics or programming languages.
Schneier spends several pages (a good part of one chapter, actually) talking about programming, the size and complexity of programs, bugs, and the resulting security loopholes. But this is hardly his main thrust; he simply uses it to underscore what he repeats and emphasizes all along: security is hard, harder than most people think, and must deal with many things, more than most people have thought about.
Good software engineering can in fact result in more secure software. Principles such as you alluded to (modular design, isolation of security-related functions, etc) are great, and we need more software designers and writers following such principles. Software should be designed with security in mind and written following good security practices. The shameful fact is that very often it is not.
However, even if the world were to wake up tomorrow and begin writing software that properly integrates with system security facilities, checks results for boundary conditions and general sanity, eliminates buffer overruns and race conditions, and the hundred other things you can find in various guides to writing secure code (see David Wheeler's Secure Programming for Linux and Unix HOWTO at http://www.ibiblio.org/pub/Linux/docs/HOWTO/other ), we still would not have solved security.
You see, security isn't about secure software any more than it's about secure cryptographic protocols. It's about those, and more. Security in the real world has to consider the following: physical security of computers; strength of passwords; correct installation of secure software; correct use of secure software and secure systems; correct administration of secure software and secure systems; proper permissions on files; power outages; backups; secure backup storage; bugged phones; bugged offices; bribed secretaries; bribed system administrators; bribed CEOs; people with guns; people with bombs; people with rubber hoses and brass knuckles; people with the key to your server room; users that don't understand security; managers and financial backers that don't understand security; system administrators that don't understand security; and finally (although there are many more things I could include in this list) programmers that don't understand security.
If you think that any of the above aren't really important, and that all we need is good, solid code that comes from good, solid coding practices, then you are thinking of security in a very limited context. This context may apply to your circumstances, but it is much smaller than the real security needs of a great many people.
Granted, most people don't need to worry about everything I mentioned... probably we can knock "men with guns/bombs/rubber hoses" off the list for the majority, right off the bat. But every secure computer system needs to be administered properly or it may well have no security whatsoever. Note that I'm not necessarily saying it needs to be actively administered, in the sense of being constantly monitored by Schneier's new company or some such; simply that a computer system needs to be, at a bare minimum, installed and configured securely at the outset. You may be thinking: "Well, duh, that's such an obvious notion that I didn't mention it." But this is precisely the kind of administration that is so lacking in so many computer systems today. If it's not explicitly considered, it will be overlooked, and if it's overlooked, there will be no security. And it gets overlooked all the time, time and time again, often even when someone does take the time to explicitly consider it.
Proper system administration (or even simply proper system setup) is only one more thing to consider when evaluating or constructing some kind of secure system. And it is only one of the other things that Schneier discusses in the rest of his book.
Schneier discusses and explains much more than poorly written code. He covers pretty much every facet of information security that I've yet encountered, including fundamental concepts like security in depth, risk analysis, threat modeling, and other phrases that Highly Paid Security Consultants like to toss around (but please note that just because the phrases are overused and under-understood by people with too-large hourly rates and too-small reading lists doesn't make the concepts themselves useless). He provides practical insight into things like passwords (what they are good for, what they are not good for, how they can be chosen and stored and used and the limitations of different ways of doing so), cryptographic algorithms (ditto), smart cards (ditto), and various kinds of security software (firewalls, scanners, IDS, and so forth (ditto)). He talks about end-users and how their behavior can compromise security. He talks about similar issues with system administrators, bureaucrats, police, and criminals.
Anyone contemplating buying or reading Secrets and Lies should be aware of a few things. First, it is not a hard-core technical book. There's no code, no algorithms, no configuration step-by-steps... there's not even any math! It's not written for a technical audience. Actually, that's not quite right: it's written to be accessible to a non-technical audience; there's a difference. Speaking as a techie, I eagerly devoured the book and was left quite satisfied. It covers my favorite subject (computer security) broadly and thoroughly even while omitting details that are only of interest to someone doing implementation (writing code, configuring systems, etc). It's a book that you could give to management if they wanted to know more about this "security stuff" you keep going on about. If they weren't inclined to read it, reading it yourself would make a good preparation for giving a presentation to management. While non-technical, it is not dumbed down in any respect.
Second, if (like me) you've read almost everything Schneier has written on his website, or have been working in or studying computer security for a couple of years, you won't really learn anything from reading the book. There's nothing ground-breaking, nothing revolutionary. But it is an excellent compilation and presentation of the things we all know, or should know. It may help you gain some focus on your current security problems, or put a large security project in perspective, or inspire you to do something specific you weren't considering before. If nothing else, Schneier's writing style is enough to make the book an entertaining read.
Third, you might get the impression that Schneier is writing this book simply to get customers for his new security monitoring business. I'm going to suggest that both his book and his business spring from the same source: his interest, research, and work in the field of both cryptography and, more and more as time has passed, security in general. Schneier is certainly qualified to write a book like this, and the book stands on its own as both informative and (for us computer security wonks) entertaining. If you are concerned that the book is nothing more than a protracted pitch for his new company, simply tear out the last few pages of the last chapter of the book, as that is the only place he mentions it.
You might legitimately be worried that with his new business, Schneier has an interest in overstating the security risks of computer systems. That may be so. But I don't believe that any of the risks he discusses are overstated in the least. In fact, several times he talks about the importance of not overstating security risks. Proper security stances, he argues (and I agree), result from understanding what threats you are actually facing, the importance of what you are protecting, the expense (in time, effort, and money) you are willing to incur in protecting it, and the loss you are willing to take in failing to protect it. Schneier is firmly on the side of reason, not hysteria, and makes that clear many times throughout the book.
If you are interested in learning more about computer security, I would wholeheartedly recommend Secrets and Lies for foundation and philosophy, along with Practical Unix and Internet Security by Garfinkel and Spafford for practical advice and instructions. But I would urge you to read more than just the chapter about programs and bugs in code, lest you end up thinking, like sequence_man, that "security is a solvable software engineering problem". If you truly believe that, you will end up writing code in isolation with no consideration of other aspects of security. You will write code that has no practical use and makes no meaningful contribution to security in the real world.
Eddie Maise
Who knows hackers? (Score:1)
Aristotle had some theoretical proof that women had fewer teeth than men. A hacker would have simply counted his wife's teeth. A good hacker would have counted his wife's teeth without her knowing about it, while she was asleep. A good bad hacker might remove some of them, just to prove a point.
Who knows hackers with wives? Or girlfriends?
That was one of the best things about this book (Score:2)
This is old news and most people with an active interest in security are yawning by that point. Secrets & Lies isn't aimed at security professionals - it's aimed at everyone else using the Internet. Most people don't know that terms like "128-bit encryption", "SSL" or "firewall" mean absolutely nothing on their own. This book does a wonderful job of debunking a number of security myths.
I'm strongly opposed to the idea of requiring any sort of license to use the Internet. I still found myself thinking that one redeeming value of a licensed Internet would be that people could be required to read this book. Most people are entirely too cavalier about security, largely because they don't know any better.
Re:Cryptography and Security (Score:2)
we literally cannot conceive of something which is entirely self-sustaining
hmmm... What about those blown glass spheres that contain a complete balanced biosphere? They've got like little shrimp and plants and water and air in them... no holes.
"Free your mind and your ass will follow"
Re:Security isn't important (Score:2)
#include "disclaim.h"
"All the best people in life seem to like LINUX." - Steve Wozniak
Product placement ad? (Score:2)
Re:Security isn't important (Score:2)
So I suppose if I catch you touching the handle of my car door as you pass by it on the street, we should lock you up on felony charges?
I don't get Americans like you. First you run around screaming "theyerawtabealaw" every time you perceive a problem, and next thing you know, you're screaming about the gubermint on your back.
Real democracy is hard, and it means spending a lot of time working for solutions outside the system, not radically curtailing freedom. Go live in a police state if you don't like it, but don't ruin my country.
Boss of nothin. Big deal.
Son, go get daddy's hard plastic eyes.
Re:Product placement ad? (Score:2)
one made by a lot of people over the last few years: the mathematics is exciting, showing the existence of secure encryption and security protocols. After having digested that, you realise the system engineering is depressing: modern computer systems and actually deployed protocols are highly complex and general, and it is normally impossible to be confident that a system behaves according to a protocol.
Re:Security isn't important (Score:2)
On top of that, if somebody can always get in, they can always hunt you down and apply rubber hose cryptanalysis. They can do that even without breaking into your machine, and there's not much you can do if an attacker is that determined.
At some point, it comes down to actually having to trust something. Do you trust that your client has not been compromised with a program that will pass any cleartext it sees on to your competitor (or the Mafia, or the government)? Do you trust that your network will deliver a reasonable fraction of the packets you send out? Do you trust that your encryption and authentication are difficult to defeat?
This isn't very surprising, either; you just usually don't think much about trust issues in the real world, because anonymity in the real world is much harder to attain. Do you trust the guy across the street not to pull out an Uzi and try to mow you down? Most people do, because there are generally compelling deterrents against that (based primarily on the ease of being identified and the difficulty of leaving the scene without leaving identifiable traces). Some people don't, so they either stay inside or only go out in the company of bodyguards. It's a decision that must be made on a case-by-case basis, even if you don't often think about making that choice.
So one of the real tricks in the online world is finding the right balance between anonymity and identification -- some balance point that protects victims without destroying legitimate anonymity. This is not the only tricky thing about online security; actual bug-free (or security-bug-free) code is at least as important, and the code cannot have any more security than the protocol.
Most of the IP-based protocols used in the world today ignore all of these in favor of features and glitz, because they're driven by commercial decision processes, but there's always hope that newer protocols will be adopted and that they will supplant the insecure protocols that come out first. (For more on that, I refer the reader to the classic "Worse Is Better" paper.)
I agree, a great book! (Score:2)
This is just one of the many revelations and insights Bruce has to offer in this very well written book. I learned about threat trees, the true nature of the security landscape, ... and so much more it's amazing.
Buy it, read it, twice. (I'm about to start again)
--Mike--
Re:Always know your sources (Score:2)
He knows cryptography but doesn't know programming (Score:2)
Here is my rather long (and negative) review of Schneier's book. For those unwilling to wade through it, my key point is that being a good mathematician doesn't necessarily qualify one to be a good programmer. He truly doesn't understand programming and hence doesn't believe that a single secure piece of software could ever be written.
After writing a wonderful book on Applied Cryptography, Bruce Schneier lost his faith in mathematics. This loss of faith came from looking at truly applied cryptography, namely looking at actual source code. This code so scared him that he wrote a book saying that cryptography is not The Answer(tm). I beg to differ.
He thinks real code should scare you so much that you should hire his company to continuously monitor your computer. Not a one-time vetting -- you should pay him every day for the rest of eternity. Not a bad racket.
The key points he makes are as follows:
But code doesn't have to be written this way. You can put all of your risky code in a small enough package that it could be checked for errors. The word kernel comes to mind. :-)
But, he also says that Linux suffers the same problem that MS has.
Unfortunately, I don't know the kernel well enough to comment on how much it has grown, and he doesn't provide data on the growth of Unix kernels. Somehow I think this absence of information might reflect the fact that the current Linux kernel is not 1000 times bigger than, say, a Solaris kernel of the early 80s.
But under Linux, is even counting the lines of code a good measure? Somehow I think the kernel is modular enough so that if I load a new PCMCIA module, it wouldn't automatically be given rights to read and write to arbitrary files on the system. Please correct me if I'm wrong, and I'll sleep much less well at night. So not all the code in the kernel should be counted as being the same.
I would be much happier with his analysis if he had looked at contrasts between sendmail (which is notoriously buggy) and qmail (which doesn't appear to be as buggy). The software point is that the dangerous code in qmail is all in one program--and that program doesn't trust any of the other pieces that make up qmail. So if that part is actually programmed bug-free, then there shouldn't be ANY possible bug in the rest of the code that can undermine security. This is good design. Almost any line of the sendmail program could undermine the security of the system. This is truly a monolithically bad design.
To list another example, almost all of current open source PGP (namely gpg and its supporting material) uses gpg for the actual encryption and some other program for the viewing. So no matter how stupid the viewing program is, it is impossible for it to undermine security. (OK, it could send a copy of the plaintext after it sends a copy of the encrypted message -- but that would be an easy bug to catch.)
Even viruses like the ILOVEYOU worm in a hopelessly insecure operating system should be fairly easy to avoid. If you simply had Visual Basic always run in something equivalent to a change-rooted environment, it would have been impossible to write such a virus. Whether this could be done in windoz isn't the issue--instead the point is that people have known about this sort of problem for years and there has been a simple fix for years.
In his defense, he does seem to spend most of his time working in the MS world. That he worries about someone running a game server on their machine without having vetted all of the code would be a very rational worry in MS. But, there are ways this could be done under Linux that would maintain total security of the machine it ran on without looking at even one line of source code (run the program as a regular user in a chroot environment sounds safe to me).
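The "regular user in a chroot environment" idea above can be sketched in a few lines. This is a minimal illustration of the privilege-drop pattern, not a hardened sandbox; the function name and the `_os` parameter are my own (the parameter only exists so the call sequence can be exercised without actually being root). The important part is the order of operations:

```python
import os

def enter_sandbox(jail_dir, uid, gid, _os=os):
    """Confine the current process: chroot, then drop privileges.

    The order matters: chroot() requires root, and setuid() must come
    last -- once the real uid is unprivileged there is no way to get
    root (or another chroot) back, which is exactly the point.
    """
    _os.chroot(jail_dir)   # filesystem view is now limited to jail_dir
    _os.chdir("/")         # don't keep a working directory outside the jail
    _os.setgid(gid)        # drop group privileges first...
    _os.setuid(uid)        # ...then user privileges, irreversibly
```

A game server launched after something like `enter_sandbox("/var/jail/game", 1000, 1000)` could be arbitrarily buggy and still have no way to touch files outside the jail or act as root -- without anyone vetting a line of its source.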
So I see the picture something like this.
Schneier is incorrect when he says that security is a process. Instead, security is a solvable software engineering problem. In fact, I think a few small pieces of it have actually been solved. I think mail handling has been solved (qmail) and telnet has been solved (ssh2 with public/private keys). Certainly serving static web pages is solved (apache).
Keeping users from getting root access should be a solvable problem, but I don't know if it is currently solved or not on Linux systems. Once that is solved, serving CGI scripts, running arbitrary servers, downloading arbitrary code off the net and running it on your local machine should all be safe things to do. (Now I don't think the automatic updates to GNOME are going to pass security muster anytime soon.)
So let me make a statement that most clearly separates Schneier's position from my own. Consider the following two systems:
If you went with NT, read Schneier's book. He will give you good arguments to believe that active management is the only answer. If you went with a limited Linux system, then join the open software movement and see if we can add more features to the Linux box without compromising security.
Re:Face it. (Score:2)
Quite the opposite of what you thought I meant, I think that the first step to improving digital security is to become more proactive than reactive about security.
The point my above post was trying to convey is that we're currently content to say I've secured my system against all known attacks. So what? Sometime down the road there will be a new hack out there and your system is going to be vulnerable. Too often, I've seen people like this. Security is an ongoing process that will never be complete.
I never said we shouldn't try to make better security. I just meant to imply that we will always find a way around security. Take DeCSS for example. The MPAA convinced all the movie studios that CSS was uncrackable; that's why they adopted it as the standard for DVD. Now look at it. The MPAA is spending billions in a losing battle to prevent people from decrypting it.
Never get sucked into believing that security makes your system secure. There will always be talented people out there putting in the effort to prove you wrong.
kwsNI
wrong definition of closed. (Score:2)
Good and Bad (Score:2)
That being said, there are some nits I have to pick. I disagree that the book is "well researched." The important stuff is all obviously drawn from his own experience (obviously extensive) as a security consultant, supplemented with various anecdotes from the web, journals, etc. A useful knowledge base, but not that impressive researchwise.
This is aggravated by Schneier's use of non-technical examples and analogies in many of his arguments. The arguments themselves are very strong, but when he cites this historical example or that financial practice, he often gets his facts wrong. I don't suppose this has a big effect on his credibility, but it must have some.
It's also a little disappointing that Schneier didn't bother to get into the general history of the Enigma/Ultra business -- a prime example of his basic theme, that the smallest failure of the security process is vulnerable to machines with infinite patience.
Finally, I'm very, very disappointed that Schneier fails to challenge -- and sometimes even supports -- the social conservative attitude towards hacking and reverse engineering. He points out the futility of trying to encrypt DVDs -- but barely touches on the DMCA. He speaks of general software hacking as a basically benign activity -- but he supports criminal punishment even for the most non-invasive electronic "trespass". This is a point of view utterly at odds with his ideas of security considered in a complete social context.
Chicken Little (Score:2)
Yes, it is impossible to secure a system completely - every security system has a human element, and every human element has a head that can have a gun held against it.
It's sort of amusing that Bruce took this long to come to this realization.
Don't mind the doomsayers - if you follow sound security policies and practices, you will likely be okay. If not, well, that's why you buy insurance.
The Code Book (Score:2)
Nice plug. (Score:2)
I agree...but (Score:2)
I am not a security consultant, nor am I an uber cracker. I am a programmer who due to middle management mismanagement is often forced to worry about network security a lot more than I should, but I digress.
I found a lot of the major points were identical to many things I learnt at university and was disappointed by the scare mongering that the author employed. I would assume that anyone who reads a book regarding advanced security policy implementations would be quite aware of all the shortcomings in a lot of organisations' systems and thereby doesn't really need to be bombarded with anecdotal nightmare scenarios.
I am disappointed by some of the sensationalism which is being used by a lot of "security" books.
There is a new genre of books coming out: pop-tech. These are books half based on sound technical information (albeit derived from academic channels) and half on popular-culture semi-futurism and apocalyptic tea-leaf reading.
Whilst this book is on the softer end of that spectrum it still falls into the same band.
Having said that, a lot of good points are raised and it will jog your memory, but if you want a sounder perspective on security and encryption I suggest you go to your local university bookstore and get a book which was written by someone who doesn't run a company in the field and doesn't need to drum up business.
If you want the futuristic tea-leaf reading, then perhaps Asimov is more your thing.
This book is somewhat of a hybrid (as mentioned before, it is by no means the worst offender, but it is definitely in the same cell block), so please don't try to develop security policies for your organisation based solely on this book, or the tea leaves will end up telling the truth.
Re:However ... (Score:3)
Why do you assume most geeks know this kind of thing well? I know several geeks and all of them know little about cryptology. As long as it works they're happy and they continue on with whatever their pet project is. It never hurts to review the material.
As for grok, I wouldn't consider that just a geek word. My parents know it (sure, they don't use it, but they know it) and they do not read science fiction at all. Sometimes you would be surprised at what you think is a word or concept non-geeks don't get that they do understand.
Re:Product placement ad? (Score:3)
Cheaper at buy.com (Score:3)
~Sean
Face it. (Score:3)
The only sure-fire way to make a system secure is to remove every input (KBD, Mouse, Serial, Network, Parallel, USB, etc.) port from a system.
kwsNI
Re:Security isn't important (Score:4)
Its biggest success has been in that Russia and the US haven't nuked themselves to oblivion and back. Yet.
However, it's failed abysmally in just about every other sector of life. Tough jail sentences, guns, the death penalty, etc., have usually attracted more crime because they delude people into feeling safer than they are (thus reducing any REAL defence) and increasing potential rewards (due to rewards often being proportional to risk).
No. The best defence is NOT deterrence. Even a pitbull can be distracted, fed, drugged, trapped, etc. The best defence is to stop making pointless assumptions.
In computing, true security comes when you can say (HONESTLY) that your server box is untrusted, your client box is untrusted, your network is untrusted and you can STILL store and move data securely.
Security isn't about stopping someone getting in. Someone can ALWAYS get in, if they try hard enough. Security is about knowing that once that somebody -DOES- get in, they can do nothing with or to your data that isn't possible by anyone outside the machine, anyway.
A locked door is no guard, and an alarm can always be bypassed. If you put your valuables in plain sight and easy reach, the best deterrence in the world can only buy you time, not security.
"Security is only as strong as its weakest link" (Score:4)
Though I believe there is a lot of truth to that statement, I've also seen it applied to an extent that it hurts overall security. Generally speaking, this world is not nearly so simple. Where systems break, it generally involves a failure on multiple levels. For instance, look at the numerous social engineering scams. Rarely is it just ONE person that broke the entire security, but rather a bunch of different people within the target organization being too careless with information. All those careless bits, in turn, interplayed with one another and allowed the crackers to build up the key to access the desired information. The point is that if each person were just twice as aware as they were before, that could go a long way in preventing hackings. The same goes for the vast majority of hacking incidents. Rarely are they some stroke of genius on the part of the hacker, seeing things that no one has seen before. Instead they're previously documented things that could've been avoided with reasonable effort.
I sincerely believe that effective security is attainable, provided enough effort is put into it, even though one may never be 100% theoretically secure. That is to say that if all the key players involved simply paid more attention to security, actual instances of hacking secured sites would be rare. Let's say we have two major layers of security. Each layer had 50 trained professionals go over it for all known bugs, and for anything theoretical they can provide. Assuming the organization keeps up to date on emerging threats, and monitors its security system, and if the source and specific specs on the protocol are closed, the odds of a hacker DISCOVERING two bugs in both layers that none of the pros saw is quite slim [about as good a guarantee as you can get in life anyways].
Unfortunately, the only standard that most people have to compare it with is nothing even approaching that. For instance, almost every single operating system (yes, there are one or two exceptions), including the Linux distros, has shipped with well-known exploitable bugs. They may not have known there is that specific bug in that specific package/module/whatever, but if they had really double-checked for existing bugs, it'd never have shipped like that. Likewise for most of these hacking incidents. They're well-documented techniques simply being reapplied. There is simply no excuse for it. One may not be able to guarantee that no bugs exist, but they can certainly guarantee that certain conditions don't exist.
/END RANT
Also recommend: Information Warfare and Security (Score:4)
Social Engineering (Score:5)
I just finished reading this... (Score:5)
An interesting thing about the book is the contrast between two concepts that may seem contradictory at first: security is only as strong as its weakest link, but layered security can result in stronger security than any one part.
It's like the difference between logical AND and logical OR. You want to build security systems where to break it, an attacker would have to break through a firewall AND steal a password AND get root access from user access AND evade the network monitoring system, etc. This is security in depth - stronger than any one link.
Unfortunately, most systems are designed as logical OR: To break the security, the attacker just needs to penetrate the firewall OR steal a password OR buffer-overflow a CGI script, etc. This is "weakest link" security.
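The AND/OR contrast can be made concrete with a toy breach model. The per-component probabilities below are made up, and real layers are rarely independent, but the arithmetic shows why the same three components give wildly different results depending on how they compose:

```python
from math import prod

def breach_weakest_link(p_each):
    """'OR' design: breaking ANY one component breaks the system,
    so the system fails unless every component holds."""
    return 1 - prod(1 - p for p in p_each)

def breach_defense_in_depth(p_each):
    """'AND' design: the attacker must break EVERY layer in turn."""
    return prod(p_each)

# Hypothetical per-component odds that an attacker defeats it:
layers = [0.1, 0.2, 0.1]
# weakest-link composition: ~0.35 -- worse than any single component
# defense-in-depth composition: ~0.002 -- better than any single component
```

Same parts, opposite outcomes: OR-composition multiplies your exposure, AND-composition multiplies the attacker's work, which is exactly the "stronger than any one link" point above.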
Other things that stuck in my mind from the first reading: No matter how strong you build it, someone will eventually break it. So design it for easy recovery. The CSS system on DVDs is an excellent example of this - now that it's been broken, there is no good way for the DVD manufacturers to recover. They can't change the encryption system without breaking compatibility... (The CSS/DeCSS system is actually used as an example several times throughout the book).
I highly recommend this book. I'll probably reread my copy several times.
Torrey Hoffman (Azog)
Always know your sources (Score:5)
Personally, I think Bruce is an upright guy, and that what's happening here is that at each point in time he both writes about and works to address the problem he actually believes in. But it always pays to pay attention to the potential biases of your sources.
Risk and the Blame Game (Score:5)
Of course, it takes more than a thick skin to deal with this environment. You're diametrically opposing developers - you want things secure, they want things to work (the inverse relationship of functionality vs. security). The trap here is becoming this big obstacle that developers have to figure a way around to get "real work" done. We avoid that.
In our environment, information security advises on projects. When we note an insecurity, we bring it to the developer's attention and help figure out more secure alternatives. If the developer wishes to push an insecure solution, they need to get management to assume the risk presented.
This process does a few amazing things. The first thing is it makes security a part of everyone's interest - not just the information security department. The developer has to honestly look at the situation and create a strong enough business case to justify the risk to the manager assuming that risk. The management (in CYA mode) is going to look at this business case very closely before accepting it. If the risk is justified, it gets accepted. If not, the developer is forced to seek out a more secure method and security isn't the Bad Guy impeding progress.
The only caveat to this is the company's culture. Accountability and a reasonable understanding of a risk's scope are required. Some insecure, but acceptable, decisions will pass (read: risk management). It's crucial that information security's recommendations are well laid out, understood, and valued by management. Alas, that's not always the case.
The real warning (Score:5)
As Schneier points out at one point in the book, the real problem is commitment to security. The odd high-profile Web site defacement or DB-of-card-numbers theft gets security onto the agenda inside companies, and the edict comes from the business to "guarantee security, whatever the cost" as the usual ill-informed media tsunami breaks all over the Web.
But there is always the assumption that "cost" means a dollar (pound, mark, whatever) figure. It certainly costs money to do it right, but the real cost is in changing working practices, restricting timescales, guaranteeing proper code audits before deployment and so on. These are the costs that the business is rarely -- if ever -- willing to bear for more than a few fleeting moments. The day you walk into the Big New Project meeting and say "We can't deliver this on time because we need to do a security audit" is the day security gets ignored once more.
And the real motto? It's tough to be on the technical end, because even if your advice is ignored, you can bet your head is on the line for the mistakes.