
Secrets & Lies: Digital Security in a Networked World

Bruce Schneier, the well-known security and encryption expert and author of Applied Cryptography, has recently had his newest book published, entitled Secrets & Lies: Digital Security in a Networked World, which explores the world of security as a system. Read the entire review below.
Secrets & Lies: Digital Security in a Networked World
author Bruce Schneier
pages 412
publisher John Wiley & Sons, 09/2000
rating 10
reviewer Jeff "hemos" Bates
ISBN 0471253111
summary A well written, well researched exploration of digital security as a system.


I've recently had the pleasure of reading Bruce Schneier's latest writing effort, Secrets and Lies: Digital Security in a Networked World. A number of our readers may remember his prior book Applied Cryptography, which discussed the use of cryptography in our brave new digital world, and how the use of cryptography would make things secure.

This time around, Schneier is much more circumspect about the uses and application of cryptography. As he states in the introduction and throughout the book, when writing AC, he thought that the use of cryptography would make things more secure. He was correct - but the lesson he learned while working with companies and individuals is that we can't just add cryptography to a system and make it secure; systems must be designed from the bottom up with security in mind. S&L draws upon a huge amount of experience working in the security field, making one central point: any system, no matter how good the cryptography is, is only as strong as its weakest link. Yes, that's an old cliche, but it's one that bears repeating.

What makes it even more imperative to design systems to be secure is the sheer number of systems that aren't secure, and what that means for us. Some of the examples Schneier uses in S&L are simply frightening to consider were they to occur. And some of his ideas about what will come, and the tools we'll have, will make you want to keep a good stash of gold Krugerrands under your mattress.

Indeed, as he talks about in the introduction, part of the reason this book took so long to write was that he was depressed at the world of security around him. Looking at what companies were doing, at what people were doing, and at the sheer number of system holes out there must be depressing - but it only drives home the point all the more that we must design *systems*, not just add cryptography and think that's the magic pixie dust that can make everything better.

The book does an exceptional job of wending its way through various security measures, how they work, and how they fail. IMHO, one of the real strengths of this book is that a cryptography novice can read it as well as an expert. Certain sections of the book are dedicated to the nitty-gritty behind systems, but there are also sections that simply lay out the process by which one should approach those systems. Indeed, the support blurb on the dust jacket is written by Jay S. Walker, the founder of priceline.com. This adds to the strength of the claim that the book can be for everyone.

Schneier is intimately involved with the security community - besides being the creator of the [Blowfish] and [Twofish] encryption algorithms and a frequent speaker at technical conferences, his company deals with this day in and day out. More to the point for a book, he can also write. It makes reading about Product Testing and Verification (Chapter 22) a treat rather than a snooze. The book is one of those rare cross-overs - something to give your geek friends and your [PHB], all of whom will appreciate it. The breadth of the book is revealed in the contents (duh), and it's a good mixture of all the necessary elements. You'll learn about entropy in a system as well as attack trees, threat modeling, and what all of this stuff means in day-to-day life.
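Attack trees, one of the ideas the book covers, are easy to sketch in code. The toy below is purely illustrative and not from the book: the node names and costs are invented, and Schneier's real attack trees can track several attributes per node, not just a single cost.

```python
# An attack tree as nested AND/OR nodes, evaluated for the cheapest
# attack path. Names and costs here are invented for illustration.

def cheapest(node):
    """Return the minimum attacker cost to achieve this node's goal."""
    kind = node["kind"]
    if kind == "leaf":
        return node["cost"]
    costs = [cheapest(child) for child in node["children"]]
    if kind == "or":       # attacker needs only the cheapest branch
        return min(costs)
    if kind == "and":      # attacker must complete every branch
        return sum(costs)
    raise ValueError(f"unknown node kind: {kind!r}")

open_safe = {
    "kind": "or",
    "children": [
        {"kind": "leaf", "cost": 10000},       # pick the lock
        {"kind": "and", "children": [          # learn the combo AND use it
            {"kind": "leaf", "cost": 20},      # bribe someone who knows it
            {"kind": "leaf", "cost": 0},       # walk in and dial it
        ]},
    ],
}

print(cheapest(open_safe))  # -> 20
```

An OR node falls at the cost of its cheapest child, an AND node at the sum of its children - the "weakest link" idea in miniature.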

I wholeheartedly recommend this book.

The Table of Contents and the preface are available on Counterpane's site; S&L's Chapter Three is on Amazon.

Purchase this book at ThinkGeek.


Comments Filter:
  • BWAHAHAH!!!

    20,000 for a review. Cripes - try $30. And of course I don't post negative reviews - I prefer to promote, rather than demote. If we had to demote, it would mean half of the stories would be book reviews.

  • That's for chapter three of the book. Try clicking on the link.
  • I've been wondering about this for a while also. I subscribe to his crypto newsletter and it seems the last bunch of issues have either been mostly about his security insurance/monitoring service or this book. I'm all for somebody promoting themselves, but I hate when informational things become suspect.
  • Heck, my copy of this JUST shipped from Bookpool. Did Bruce's publisher send you an advanced copy or something? ;-)
  • [kisses goodbye to all hope of having any karma]

    I'm not nearly as impressed with this book as everyone else seems to be. I have two issues:

    • Schneier explains a lot of stuff, like what authentication is, what a private key is and so on, that most geeks will know backwards. I'm willing to bet that anyone that read his Counterpane newsletters will probably not learn a huge amount of new stuff.
    • He sometimes jumps into geek-speak (for example, use of grok), which assumes the reader is a geek. I would guess that most people who know what grok means don't need 80% of the explanations in the book.

    I usually love Schneier's stuff; I just think that the market for this book isn't knowledgeable geeks.

    --

  • seriously - the nerve! no one should ever, under any circumstance, create wealth for themselves.

  • I see... well I tried.

    "Free your mind and your ass will follow"

  • is "SlashDot for Dummies" !

    .
    .
    Chapter 5 - How to Spot a Troll post
    .
    .
  • So what's the alternative? Live in an environment where everything is so shackled and menacled with security measures that using it just isn't worth the hassle?
  • Somehow I think the kernel is modular enough so that if I load a new PCMCIA module, it wouldn't automatically be given rights to read and write to arbitrary files on the system. Please correct me if I'm wrong, and I'll sleep much less well at night.

    Unfortunately, I believe you are wrong - all code within the (Linux) kernel operates with root privileges.

    Moreover, as these things operate within the kernel, you can pull all kinds of tricks to keep them hidden.

    Take a look at this [pulhas.org] link for a nice discussion of this issue.

  • What we need in North America is a law that allows personal users to sue hackers who portscan your computer. Do you really think they'd be as eager to fuck around with your system after that?


    Somehow, I don't think that would deter the Danish students who try to break into my company's systems...

  • there's nothing damaging about a portscan.

    and, even if there was, unless you lose a lot of money, the FBI simply isn't interested.

    for example, domain hijacking via Network Solutions' crappy email verification system - due to their weak security (and my not knowing enough to choose the better methods that they do offer but don't promote) i lost my web site for almost a week while NetSol sat on their thumbs. when i called the FBI to see about finding the asshole who stole my domain, they flat out told me that they would not do anything unless the losses exceeded five thousand dollars. this is certainly not a lot of money to a large company, but it's a whole month's worth of business to me.

    so, to put it simply, the FBI isn't interested in people who give you headaches. they will only act when there is sufficient money at stake.

  • If you want the futuristic tea-leaf reading then perhaps asimov is more your thing ;) Still can't get over the fact that he invented the satelitte.


    Just to be pedantic, that was Arthur C. Clarke. But it's ok, he wrote some good Sci-Fi too.

  • Security is about knowing that once somebody -DOES- get in, they can do nothing with or to your data that isn't possible by anyone outside the machine, anyway.


    And the best and 100% sure way to achieve that is to have no valuable data at all! That way, when they do steal your worthless data, they will actually have spent more on breaking in than it was worth.

    Even better, if most people did that, crime would disappear completely, since there would be no incentive for anyone to steal. Of course, people might not be so happy to have nothing that is useful, but you must sacrifice a few valuable things to achieve such a great goal.
  • Yeah, I caught that too. Bookpool [bookpool.com] has it for $5 less than ThinkGeek. But maybe that's because they're not paying Hemos a commission ;-)
  • Sucky OCR software and no review after the scan, I guess - really sloppy work...
  • It's bugging me... why are there so many letter a's missing from the Chapter 3 published on Amazon?

    For example "Camilla P rker Bowles", and "ny" instead of "any".

  • Although as a security consultant Mr. Schneier makes some valid points, I think he's not seeing the big picture.

    There's a whole lot more to running a company than systems security. Security is one factor, kind of like an insurance policy. Who cares about the security of a system, if all the potential hackers are scared to death of being sued for every potential intrusion? To use an analogy, why lock your doors when you got a yogurt-fed pitbull waiting on the porch?

    As a matter of fact, I think we should extend that concept to personal security. What we need in North America is a law that allows personal users to sue hackers who portscan your computer. Do you really think they'd be as eager to fuck around with your system after that?

    I think hackers will think twice about trying to break the law by electronically breaking and entering once a few of their teenage pals are sent to jail by the FBI.

  • I notice that this post was moderated to 'funny', though I don't think the poster was trying to make a joke. And while he makes a good point - security isn't the only part of running a company - the rest of it is more of what's -wrong- with the US than a good solution.

    Who cares about being sued? Very few people, actually, unless they're going up against deep-pocketed corporations who seem to LOVE to sue everyone to get their way. Start a lawsuit over a port-scan? That's like shooting someone for peeking into your car windows in the Safeway parking lot. And, personally, I seriously doubt some script kiddie in Turkey (to grab a country out of the air) has anything to worry about worse than losing his dialup account.

    And lock your door. The pitbull likes hamburger...
  • A good point to ponder: Can you make a system social-engineering proof?

    My bank works by identifying you by phone using a challenge-response mechanism. The clerk on the phone CANNOT access any information about your account except for your name until he or she enters the correct response to the challenge into the computer (which blocks your account after two unsuccessful attempts at that, requiring an alternative method).

    Is it possible to go along these lines and plan a system in which the human factor cannot affect security?

    Cling goes my 2c...
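The bank procedure described above can be sketched as a challenge-response exchange. This is a minimal illustration under assumed details - a shared secret and an HMAC-based response; the comment doesn't say how the real bank computes it:

```python
import hashlib
import hmac
import secrets

shared_secret = b"customer-secret"   # provisioned out of band, e.g. at the branch

def issue_challenge():
    """Bank side: an unpredictable nonce the clerk must answer for."""
    return secrets.token_bytes(16)

def respond(secret, challenge):
    """Compute the response; without the secret, the challenge is useless."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(secret, challenge, response):
    """Constant-time comparison, so timing leaks nothing."""
    return hmac.compare_digest(respond(secret, challenge), response)

challenge = issue_challenge()
print(verify(shared_secret, challenge, respond(shared_secret, challenge)))  # True
print(verify(shared_secret, challenge, respond(b"wrong", challenge)))       # False
```

Note that the human factor remains: the clerk still acts on whatever the computer tells them, which is exactly the social-engineering surface the comment asks about.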

  • Sure Slashdot and Thinkgeek have an affiliation, so they will point you there, but reading here just a little bit you'll know that the staff at the Geek Compound does not look kindly on Amazon with their myriad of stupid patents...
    "Ooo look, One Click Patent! We're freakin geniuses. Let's make everyone pay 100 BILLION dollars to use our patents!"
  • I have read "The Code Book" and think it's a great piece of work. It stands proudly (hard cover) on my shelf. The actual "practicals" make it even more enticing to read. The section on quantum computing for "breaking" codes was recently shown to be more than plausible by IBM's 3-qubit computer.
  • Given how much I enjoyed his previous writings, I expected to be thrilled with this book. There is lots of good stuff, but also some places where he seems to be out of his depth (as would anyone writing a book as broad as this....).

    I haven't finished it yet, but the section on access control was a relatively unhelpful mix of theory and practice with some confusion. E.g.:

    p. 124 "In Unix, ownership is per file, and is usually determined by which directory the file is in."

    The ownership may be statistically correlated with directory ownership, as in any system, but it is never "determined" by the directory.

    On p. 100 it says that a 128-bit symmetric key "will be secure for a millennium". I thought that quantum computers (if we figure out how to make them in the next millennium) would mount a very plausible attack on 128-bit keys, and I assume this is why AES specifies a 256-bit key option.
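That claim can be sanity-checked with a back-of-envelope sketch. Everything below is an assumption for illustration (the guess rate especially); the point is only that Grover's algorithm, on a hypothetical large quantum computer, searches a 2^n keyspace in roughly 2^(n/2) steps, halving the effective key length - a commonly cited rationale for AES's 256-bit option.

```python
# Expected brute-force time: on average the attacker tries half the keyspace.
SECONDS_PER_YEAR = 3.15e7

def years_to_search(bits, guesses_per_second=1e18):
    """Years to try 2**(bits-1) keys at an (assumed, very generous) rate."""
    return 2 ** (bits - 1) / guesses_per_second / SECONDS_PER_YEAR

print(f"128-bit, classical search: {years_to_search(128):.1e} years")
print(f"128-bit, Grover-reduced to 64: {years_to_search(64):.1e} years")
```

Classically, 128 bits really does look safe for a millennium and then some; a Grover-style square-root speedup is what makes the claim questionable.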

    On page 74 a distinction is drawn between integrity (accurate copying) and authentication. But the examples on p. 75 demonstrate the very confusion that was highlighted. Kurt Vonnegut's "1997 MIT commencement address" was an authentication problem, not an integrity problem, since the speech was presumably not modified since its creation by Mary Schmich. The same goes for the next two examples on that page.

    On p. 103 it describes most residential locks as having 5 pins, 10 positions per pin => 100,000 possible keys. It would be fun in some medium sized group (about 350 keys, less than 100 people?) to look for a key collision - I think the odds would favor it.... A bit unnerving also.
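The collision hunch above can be checked with the standard birthday-bound approximation (a quick sketch; it assumes key bittings are uniformly distributed, which real locks aren't):

```python
import math

def p_collision(n, d=100_000):
    """Approximate probability that n keys drawn uniformly from d
    possible bittings include at least one duplicate (birthday bound)."""
    return 1.0 - math.exp(-n * (n - 1) / (2.0 * d))

print(round(p_collision(350), 3))  # -> 0.457: close, but just under even odds

# Smallest group whose keys collide with probability > 1/2:
n = 1
while p_collision(n) < 0.5:
    n += 1
print(n)  # -> 373
```

So 350 keys falls just short of even odds; roughly 373 keys tips the chance of a shared key past 50%.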

    An example on p. 109 describes, by way of analogy, physical certified checks as an element in a protocol which helps with selling cars. It makes me wonder how one would really go about verifying that a "certified check" is actually worth anything.

    None of this detracts from the main points of the book. Again, I haven't finished it, but I don't see much warning about ineffective security decrees that just waste people's time. E.g. I often see commercial security organizations mandate time-consuming or inconvenient practices to guard against threats that pale in comparison to other threats which they ignore. This can hurt the security of the bottom line without really improving the security, and without a positive bottom line, the security of the assets can become irrelevant.

    He clearly makes the point that security is a business decision, but some examples of the sort of unnecessary "security" overhead that just wastes time and resources would add a lot.

    --Neal

  • Fatbrain [fatbrain.com] has it for $14.95 plus shipping.
  • Anyone interested in crypto should take a look at the free monthly newsletter Schneier sends out. It's informative, authoritative, well written, opinionated, and interesting, with many links to other current crypto articles etc. http://www.counterpane.com/crypto-gram.html
  • Minor point - Pitbulls are fairly useless guard dogs. It depends on what you want a guard dog to do.
    • A tiny West Highland White Terrier is going to bark at anything -- they're great if you want a dog to warn you when something strange is happening, but not scare or hurt intruders in any meaningful way,
    • A well-trained Collie could probably hurt any person you ask it to hurt -- they're great if you want a dog that will hurt intruders, but possibly not scare intruders very much, and
    • A Pitbull is going to scare the poop out of anyone who sees it -- they're great if you want to deter intruders, but in general they're probably too friendly to actually hurt anyone, and they might not even bark much.
    Pitbulls are a reasonable deterrent. As long as you understand you have a deterrent (instead of something else), you'll be fine. (Actually, I agree with your sentiments -- people who have been around a lot of dogs are probably not going to be particularly scared of a Pitbull, except inasmuch as the amount of violence in a dog is pretty much a function of training, and the sort of dumbass that trains a dog to be irrationally violent against humans seemed to pick pitbulls five years ago, and rottweilers today. It's really a shame -- both are reasonably decent breeds with unfortunate reputations.)
  • I know the message was flamebait, but....

    I must agree. Ever since they made murder illegal, it has virtually stopped. I routinely walk around Harlem at 3am just to watch the would-be thieves. Thief: "Damn! Shooting him and stealing his wallet is ILLEGAL. Remember when Joe got arrested 3 years ago? Don't wanna end up like that!"
    Yup. What this country (any country) needs is more legislation. God only knows how many times I think about speeding, only to remember my friend who got a ticket. Poor bastard had to pay fifty bucks. Brings tears to my eyes.
    A lot of hackers (bad-intentioned, of course) have been hauled off. That is what supports the image. Your typical port-scanning script-running 3733T h@x0r is a 13 year old 50 lb. geek getting his/her jollies by doing something illegal, and getting away with it. He/she would probably be smoking a doobie and underage drinking instead, but doesn't get invited to those types of parties.
    You also have pros out there who are blackmailing/embezzling funds. What they are doing is already illegal. Making it more illegal does nothing. Extra charges in court cases are used as bargaining chips toward the big charge anyway.
    A stupid law will do nothing. It will encourage punk kids, and be a joke to the professional thief.

    Excuse me while my renegade team of bikers cuts the tags off of mattresses.

  • http://slashdot.org/yro/00/08/15/158226.shtml I guess it's a slow news day!
  • This is exactly why this book got written. As Schneier points out: (a) security is not a single line-item product that you buy and forget (your pit-bull defense is vulnerable to any thief with access to drugged hamburger); (b) any single security measure has to be assessed in context (whatever you're protecting had better be worth the liability coverage you'll need to go with your dangerous animal); and (c) no deterrence can be counted on to persuade every aggressor (you've got a first-quality guard dog? now that is worth stealing...)

    Having done the honest communication, here's the flamebait: I am really sick of the attitude that every problem can be solved with enough force and nastiness. We're notorious in the U.S. for our over-reliance on intolerant laws, official and private violence, and other excessive forms of "deterrence." Such measures create mostly a hormone-based illusion of security -- and cause a lot of harm in their own right.

    By the way, starving an animal to make him nasty is illegal in most places -- and morally dubious everywhere.

  • To use an analogy, why lock your doors when you got a yogurt-fed pitbull waiting on the porch?
    Because at some point, we have to collectively realize that wanting lawyers and the FBI to babysit us can never end well. While I agree that security isn't the answer to everything, neither is legislation.
  • Just reading the intro got me thinking ...

    Instead of making hacking illegal, maybe laws should be set such that being hacked is illegal. Administrators and software vendors should be made accountable for what they are paid to do.

    Such a situation would force industry to take care of itself instead of being crybabies every time someone enters their system.

    But I guess it's easier to leave things the way they are ... easier to blame teenagers for problems than the people that set it up.

  • perhaps asimov is more your thing ;) Still can't get over the fact that he invented the satelitte.

    What's more, I believe Arthur C. Clarke dreamt up the concept of the geostationary earth-orbit communications satellite (two Ls, one T near each end)... OK, I work in sat-comms, but it seems like no-one can spell the word.

  • I am wondering why such topics are getting posted. Is this some advertising for the book? Let's stop with banner ads .........

  • I noticed the link to Amazon on this one.
  • I know what it's linked to, but why Amazon?

    Barnes & Noble carries the same excerpt, and I don't get the impression there is a huge anti-bn sentiment on Slashdot.
  • I think there is a business behind this - at least that's the impression I get from Counterpane's site. [counterpane.com] They provide outsource "monitoring services" that seem like they'd cover the detection and response parts of your security program.
  • Minor point - Pitbulls are fairly useless guard dogs. They are bred to fight other dogs, but are usually very friendly with adult humans. I mean, my pitbull won't even bark at the mailman. If you want a guard dog, get a Mastiff.
  • I am partially through this book and I have to agree that even though the points are valid, it leans towards fear-mongering in tone. I don't think this is based on his mood so much as his business. This slashdot article is about an "article" by Bruce that is more about selling his security company than how to handle the problem. [slashdot.org]

  • Smallest said:
    when i called the FBI to see about finding the asshole who stole my domain, they flat out told me that they would not do anything unless the losses exceeded five thousand dollars

    Don't forget that case law now provides that you can calculate the cost of your losses based on the expense of re-engineering the HTTP protocol from the ground up, just as per the equation used in the Kevin Mitnick case. Your losses do not have to have anything to do with reality. Law is the "res publicas" that applies to all, not just the rich.

  • With college students banding together and being able to crack DES and IDEA, we have a need for stronger encryption, above 128 bits. The U.S. will need to change its policy on strong encryption soon to keep people secure.
  • All systems designed or conceived by the human mind build in the limitation that we humans have to eat food and pass it out in digested form. Think about it. Everything we design has holes in it which are necessary for input and output, because we literally cannot conceive of something which is entirely self-sustaining. Thus, a secure system is simply a _more_ secure system, for there is no way to achieve perfect security given the limits of our own imagination.

    Even AI relies on the innate structure we give it: When we design computers to design computers, they're still built with certain limitations in what is possible for us to conceive. The idea of security is kind of like the idea for perpetual motion--the objective exists in a different realm than the purpose. I can see how the author would become depressed over this, for it seems to be a dilemma. It is actually only a reality which must be understood before approaching the unapproachable, like perfect security. I will be interested to read the book to see how he approaches this dilemma. -Water Paradox

  • Kind of ironic that the page says this, yet its content says otherwise.

    Shopping with us is 100% safe.
    Guaranteed.
  • As a user who is neither a security expert nor involved (except at the user level) in this sort of work (at the moment) I think that there cannot be enough security.

    Bland statement? Probably so, however this sort of thing is important: I like my private data secure and in an increasing climate of your data being stored on various networks, you should too.

    The problem is that _computer_ security is not enough. Unless staff who work in offices where networked computers are available are similarly vigilant (particularly in non-open-plan offices), anyone can just walk in and get access by looking for an unoccupied office where someone has logged in and gone for lunch, leaving their level of access freely available.

    In the company I worked for during the summer the security ran on what I like to call the 'sea urchin' model (there is probably a formal name of which I am unaware); tough as hell on the outside but gooey once defences are breached.

    Example: Okay, you've been changing the layout of one of the databases and you're logged in with root access. You go for lunch but get sidetracked into an impromptu meeting. Someone with mischief on their mind walks in and can then do some unpleasant stuff - or gain access to privileged data. Unrealistic? Alas, I think not.

    Elgon
  • sequence_man, you appear (to my eyes) to be as foolishly idealistic as Schneier now says he was when he wrote Applied Cryptography. Where Schneier was moved by his faith in the theoretical perfection of the mathematics, you seem to be standing on the rock of software engineering. This is indeed a dubious foundation. I think you have missed the vast bulk of what Schneier was trying to say with Secrets and Lies. You've focused entirely on the idea that 1) because code is very complex and constantly growing more so, therefore 2) there will always be bugs, and therefore 3) we can't have security. I think Schneier did make a point somewhat like that, but that's really one of his minor points: a bit of shrubbery in the forest you have overlooked.

    If I had to summarize the book's main points in one sentence, it would be this: Security in the real world is hard, because it deals with many things, most of them complex, few of them subject to the kinds of precision or rules that you would hope for or expect from things like mathematics or programming languages. Schneier spends several pages (a good part of one chapter, actually) talking about programming, the size and complexity of programs, bugs, and the resulting security loopholes. But this is hardly his main thrust; he simply uses it to underscore what he repeats and emphasizes all along: security is hard, harder than most people think, and must deal with many things, more than most people have thought about.

    Good software engineering can in fact result in more secure software. Principles such as you alluded to (modular design, isolation of security-related functions, etc.) are great, and we need more software designers and writers following such principles. Software should be designed with security in mind and written following good security practices. The shameful fact is that very often it is not.
    However, even if the world were to wake up tomorrow and begin writing software that properly integrates with system security facilities, checks results for boundary conditions and general sanity, eliminates buffer overruns and race conditions, and the hundred other things you can find in various guides to writing secure code (see David Wheeler's Secure Programming for Linux and Unix HOWTO at http://www.ibiblio.org/pub/Linux/docs/HOWTO/other-formats/html_single/Secure-Programs-HOWTO.html , the best reference on the subject I've found online so far, in part because of his well-commented list of other sources), that still doesn't buy us any real security. Not by itself, at least.

    You see, security isn't about secure software any more than it's about secure cryptographic protocols. It's about those, and more. Security in the real world has to consider: physical security of computers; strength of passwords; correct installation of secure software; correct use of secure software and secure systems; correct administration of secure software and secure systems; proper permissions on files; power outages; backups; secure backup storage; bugged phones; bugged offices; bribed secretaries; bribed system administrators; bribed CEOs; people with guns; people with bombs; people with rubber hoses and brass knuckles; people with the key to your server room; users that don't understand security; managers and financial backers that don't understand security; system administrators that don't understand security; and finally (although there are many more things I could include in this list) programmers that don't understand security.

    If you think that any of the above aren't really important, and that all we need is good, solid code that comes from good, solid coding practices, then you are thinking of security in a very limited context. This context may apply to your circumstances, but it is much smaller than the real security needs of a great many people.
    Granted, most people don't need to worry about everything I mentioned... probably we can knock "men with guns/bombs/rubber hoses" off the list for the majority, right off the bat. But every secure computer system needs to be administered properly or it may well have no security whatsoever. Note that I'm not necessarily saying it needs to be actively administered, in the sense of being constantly monitored by Schneier's new company or some such; simply that a computer system needs to be, at a bare minimum, installed and configured securely at the outset. You may be thinking "Well, duh, that's such an obvious notion that I didn't mention it." But this is precisely the kind of administration that is so lacking in so many computer systems today. If it's not explicitly considered, it will be overlooked, and if it's overlooked, there will be no security. And it gets overlooked all the time, time and time again, often even when someone does take the time to explicitly consider it.

    Proper system administration (or even simply proper system setup) is only one more thing to consider when evaluating or constructing some kind of secure system. And it is only one of the other things that Schneier discusses in the rest of his book. Schneier discusses and explains much more than poorly written code. He covers pretty much every facet of information security that I've yet encountered, including fundamental concepts like security in depth, risk analysis, threat modeling, and other phrases that Highly Paid Security Consultants like to toss around (but please note that just because the phrases are overused and under-understood by people with too-large hourly rates and too-small reading lists doesn't make the concepts themselves useless).
    He provides practical insight into things like passwords (what they are good for, what they are not good for, how they can be chosen and stored and used, and the limitations of different ways of doing so), cryptographic algorithms (ditto), smart cards (ditto), and various kinds of security software (firewalls, scanners, IDS, and so forth (ditto)). He talks about end-users and how their behavior can compromise security. He talks about similar issues with system administrators, bureaucrats, police, and criminals.

    Anyone contemplating buying or reading Secrets and Lies should be aware of a few things. First, it is not a hard-core technical book. There's no code, no algorithms, no configuration step-by-steps... there's not even any math! It's not written for a technical audience. Actually, that's not quite right: it's written to be accessible to a non-technical audience; there's a difference. Speaking as a techie, I eagerly devoured the book and was left quite satisfied. It covers my favorite subject (computer security) broadly and thoroughly even while omitting details that are only of interest to someone doing implementation (writing code, configuring systems, etc.). It's a book that you could give to management if they wanted to know more about this "security stuff" you keep going on about. If they weren't inclined to read it, reading it yourself would make good preparation for giving a presentation to management. While non-technical, it is not dumbed down in any respect.

    Second, if (like me) you've read almost everything Schneier has written on his website, or have been working in or studying computer security for a couple of years, you won't really learn anything from reading the book. There's nothing ground-breaking, nothing revolutionary. But it is an excellent compilation and presentation of the things we all know, or should know.
It may help you gain some focus on your current security problems, or put a large security project in perspective, or inspire you to do something specific you weren't considering before. If nothing else, Schneier's writing style is enough to make the book an entertaining read. Third, you might get the impression that Schneier is writing this book simply to get customers for his new security monitoring business. I'm going to suggest that both his book and his business spring from the same source: his interest, research, and work in the field of both cryptography and, more and more as time has passed, security in general. Schneier is certainly qualified to write a book like this, and the book stands on its own as both informative and (for us computer security wonks) entertaining. If you are concerned that the book is nothing more than a protracted pitch for his new company, simply tear out the last few pages of the last chapter of the book, as that is the only place he mentions it. You might legitimately be worried that with his new business, Schneier has an interest in overstating the security risks of computer systems. That may be so. But I don't believe that any of the risks he discusses are overstated in the least. In fact, several times he talks about the importance of not overstating security risks. Proper security stances, he argues (and I agree), result from understanding what threats you are actually facing, the importance of what you are protecting, the expense (in time, effort, and money) you are willing to incurr in protecting it, and the loss you are willing to take in failing to protect it. Schneier is firmly on the side of reason, not hysteria, and makes that clear many times throughout the book. If you are interested in learning more about computer security, I would wholeheartedly recommend Secrets and Lies for foundation and philosophy, along with Practical Unix and Internet Security by Garfinkle and Spafford for practical advice and instructions. 
But I would urge you to read more than just the chapter about programs and bugs in code, lest you end up thinking, like sequence_man, that "security is a solvable software engineering problem". If you truly believe that, you will end up writing code in isolation with no consideration of other aspects of security. You will write code that has no practical use and makes no meaningful contribution to security in the real world. Eddie Maise
  • Argh.... Please forgive the above premature post. This is my first post to Slashdot; I meant to hit Preview, but missed.

    sequence_man, you appear (to my eyes) to be as foolishly idealistic as Schneier now says he was when he wrote Applied Cryptography. Where Schneier was moved by his faith in the theoretical perfection of the mathematics, you seem to be standing on the rock of software engineering. This is indeed a dubious foundation.

    I think you have missed the vast bulk of what Schneier was trying to say with Secrets and Lies. You've focused entirely on the idea that 1) because code is very complex and constantly growing more so, therefore 2) there will always be bugs, and therefore 3) we can't have security. I think Schneier did make a point somewhat like that, but that's really one of his minor points: a bit of shrubbery in the forest you have overlooked.

    If I had to summarize the book's main points in one sentence, it would be this: Security in the real world is hard, because it deals with many things, most of them complex, few of them subject to the kinds of precision or rules that you would hope for or expect from things like mathematics or programming languages.

    Schneier spends several pages (a good part of one chapter, actually) talking about programming, the size and complexity of programs, bugs, and the resulting security loopholes. But this is hardly his main thrust; he simply uses it to underscore what he repeats and emphasizes all along: security is hard, harder than most people think, and must deal with many things, more than most people have thought about.

    Good software engineering can in fact result in more secure software. Principles such as you alluded to (modular design, isolation of security-related functions, etc) are great, and we need more software designers and writers following such principles. Software should be designed with security in mind and written following good security practices. The shameful fact is that very often it is not.

    However, even if the world were to wake up tomorrow and begin writing software that properly integrates with system security facilities, checks results for boundary conditions and general sanity, eliminates buffer overruns and race conditions, and the hundred other things you can find in various guides to writing secure code (see David Wheeler's Secure Programming for Linux and Unix HOWTO at http://www.ibiblio.org/pub/Linux/docs/HOWTO/other-formats/html_single/Secure-Programs-HOWTO.html [ibiblio.org], the best reference on the subject I've found online so far, in part because of his well-commented list of other sources), that still doesn't buy us any real security. Not by itself, at least.

    You see, security isn't about secure software any more than it's about secure cryptographic protocols. It's about those, and more. Security in the real world has to consider the following: physical security of computers; strength of passwords; correct installation of secure software; correct use of secure software and secure systems; correct administration of secure software and secure systems; proper permissions on files; power outages; backups; secure backup storage; bugged phones; bugged offices; bribed secretaries; bribed system administrators; bribed CEOs; people with guns; people with bombs; people with rubber hoses and brass knuckles; people with the key to your server room; users that don't understand security; managers and financial backers that don't understand security; system administrators that don't understand security; and finally (although there are many more things I could include in this list) programmers that don't understand security.

    If you think that any of the above aren't really important, and that all we need is good, solid code that comes from good, solid coding practices, then you are thinking of security in a very limited context. This context may apply to your circumstances, but it is much smaller than the real security needs of a great many people.

    Granted, most people don't need to worry about everything I mentioned... probably we can knock "men with guns/bombs/rubber hoses" off the list for the majority, right off the bat. But every secure computer system needs to be administered properly or it may well have no security whatsoever. Note that I'm not necessarily saying it needs to be actively administered, in the sense of being constantly monitored by Schneier's new company or some such; simply that a computer system needs to be, at a bare minimum, installed and configured securely at the outset. You may be thinking: "Well, duh, that's such an obvious notion that I didn't mention it." But this is precisely the kind of administration that is so lacking in so many computer systems today. If it's not explicitly considered, it will be overlooked, and if it's overlooked, there will be no security. And it gets overlooked all the time, time and time again, often even when someone does take the time to explicitly consider it.

    Proper system administration (or even simply proper system setup) is only one more thing to consider when evaluating or constructing some kind of secure system. And it is only one of the other things that Schneier discusses in the rest of his book.

    Schneier discusses and explains much more than poorly written code. He covers pretty much every facet of information security that I've yet encountered, including fundamental concepts like security in depth, risk analysis, threat modeling, and other phrases that Highly Paid Security Consultants like to toss around (but please note that just because the phrases are overused and under-understood by people with too-large hourly rates and too-small reading lists doesn't make the concepts themselves useless). He provides practical insight into things like passwords (what they are good for, what they are not good for, how they can be chosen and stored and used and the limitations of different ways of doing so), cryptographic algorithms (ditto), smart cards (ditto), and various kinds of security software (firewalls, scanners, IDS, and so forth (ditto)). He talks about end-users and how their behavior can compromise security. He talks about similar issues with system administrators, bureaucrats, police, and criminals.

    Anyone contemplating buying or reading Secrets and Lies should be aware of a few things. First, it is not a hard-core technical book. There's no code, no algorithms, no configuration step-by-steps... there's not even any math! It's not written for a technical audience. Actually, that's not quite right: it's written to be accessible to a non-technical audience; there's a difference. Speaking as a techie, I eagerly devoured the book and was left quite satisfied. It covers my favorite subject (computer security) broadly and thoroughly even while omitting details that are only of interest to someone doing implementation (writing code, configuring systems, etc). It's a book that you could give to management if they wanted to know more about this "security stuff" you keep going on about. If they weren't inclined to read it, reading it yourself would make a good preparation for giving a presentation to management. While non-technical, it is not dumbed down in any respect.

    Second, if (like me) you've read almost everything Schneier has written on his website, or have been working in or studying computer security for a couple of years, you won't really learn anything from reading the book. There's nothing ground-breaking, nothing revolutionary. But it is an excellent compilation and presentation of the things we all know, or should know. It may help you gain some focus on your current security problems, or put a large security project in perspective, or inspire you to do something specific you weren't considering before. If nothing else, Schneier's writing style is enough to make the book an entertaining read.

    Third, you might get the impression that Schneier is writing this book simply to get customers for his new security monitoring business. I'm going to suggest that both his book and his business spring from the same source: his interest, research, and work in the field of both cryptography and, more and more as time has passed, security in general. Schneier is certainly qualified to write a book like this, and the book stands on its own as both informative and (for us computer security wonks) entertaining. If you are concerned that the book is nothing more than a protracted pitch for his new company, simply tear out the last few pages of the last chapter of the book, as that is the only place he mentions it.

    You might legitimately be worried that with his new business, Schneier has an interest in overstating the security risks of computer systems. That may be so. But I don't believe that any of the risks he discusses are overstated in the least. In fact, several times he talks about the importance of not overstating security risks. Proper security stances, he argues (and I agree), result from understanding what threats you are actually facing, the importance of what you are protecting, the expense (in time, effort, and money) you are willing to incur in protecting it, and the loss you are willing to take in failing to protect it. Schneier is firmly on the side of reason, not hysteria, and makes that clear many times throughout the book.

    If you are interested in learning more about computer security, I would wholeheartedly recommend Secrets and Lies for foundation and philosophy, along with Practical Unix and Internet Security by Garfinkel and Spafford for practical advice and instructions. But I would urge you to read more than just the chapter about programs and bugs in code, lest you end up thinking, like sequence_man, that "security is a solvable software engineering problem". If you truly believe that, you will end up writing code in isolation with no consideration of other aspects of security. You will write code that has no practical use and makes no meaningful contribution to security in the real world.

    Eddie Maise
  • Apparently, from the Amazon chapter 3 extract:

    Aristotle had some theoretical proof that women had fewer teeth than men. A hacker would have simply counted his wife's teeth. A good hacker would have counted his wife's teeth without her knowing about it, while she was asleep. A good bad hacker might remove some of them, just to prove a point.

    Who knows hackers with wives? Or girlfriends?

    The big reason I've been telling people I work with to read this book is that Schneier makes the point over and over that security isn't some sort of checkbox on a product sheet. Good designs can be compromised by bad implementations. If your employees don't understand security, no amount of software will help. The prosecution in the Mitnick case painted him as some sort of dark lord of technology, but most of his successes came from social engineering, simply because it was so much easier than a complex technical attack.

    This is old news and most people with an active interest in security are yawning by that point. Secrets & Lies isn't aimed at security professionals - it's aimed at everyone else using the Internet. Most people don't know that terms like "128-bit encryption", "SSL" or "firewall" mean absolutely nothing on their own. This book does a wonderful job of debunking a number of security myths.

    I'm strongly opposed to the idea of requiring any sort of license to use the Internet. I still found myself thinking that one redeeming value of a licensed Internet would be that people could be required to read this book. Most people are entirely too cavalier about security, largely because they don't know any better.

  • we literally cannot conceive of something which is entirely self-sustaining

    hmmm... What about those blown glass spheres that contain a complete balanced biosphere? They've got like little shrimp and plants and water and air in them... no holes.

    "Free your mind and your ass will follow"

  • You don't think this is already happening? See, e.g., Kevin Mitnick.

    #include "disclaim.h"
    "All the best people in life seem to like LINUX." - Steve Wozniak
  • I wonder how much flak this and the previous article [slashdot.org] will get from people accusing Bruce Schneier of selling out and becoming a marketdroid. However, to me his "conversion" sounds sincere - a real crisis of conscience, leading to depression and then a breakthrough. (Believe me, I know from depression....) Besides, why shouldn't he literally put his money where his mouth is, and try to run this "managed security" idea as a business? If he's wrong, then his clients will suffer, but at least his ideas will be tested in the real world, rather than just debated endlessly in academic circles.

    #include "disclaim.h"
    "All the best people in life seem to like LINUX." - Steve Wozniak
  • So I suppose if I catch you touching the handle of my car door as you pass by it on the street, we should lock you up on felony charges?

    I don't get Americans like you. First you run around screaming "theyerawtabealaw" every time you perceive a problem, and next thing you know, you're screaming about the gubermint on your back.

    Real democracy is hard, and it means spending a lot of time working for solutions outside the system, not radically curtailing freedom. Go live in a police state if you don't like it, but don't ruin my country.

    Boss of nothin. Big deal.
    Son, go get daddy's hard plastic eyes.

  • Covered on pages 266 - 269 under the chapter "The Human Factor". From the beginning of the chapter:
    People often represent the weakest link in the security chain and are chronically responsible for the failure of security systems.
    Bruce Schneier offers a fairly comprehensive look at information security today.
  • I resent that comment. I'm Canadian, not USian.
  • The point Schneier makes is a valid one, and his transition matches one made by a lot of people over the last few years: the mathematics is exciting, showing the existence of secure encryption and security protocols. After having digested that, you realise the systems engineering is depressing: modern computer systems and actually deployed protocols are highly complex and general, and it is normally impossible to be confident that a system behaves according to a protocol.
  • You make the very assumption that you say is a bad one -- you trust that the algorithms you are employing are secure, and more, that the implementations you are using them with are secure.

    On top of that, if somebody can always get in, they can always hunt you down and apply rubber hose cryptanalysis. They can do that even without breaking into your machine, and there's not much you can do if an attacker is that determined.

    At some point, it comes down to actually having to trust something. Do you trust that your client has not been compromised with a program that will pass any cleartext it sees on to your competitor (or the Mafia, or the government)? Do you trust that your network will deliver a reasonable fraction of the packets you send out? Do you trust that your encryption and authentication are difficult to defeat?

    This isn't very surprising, either; you just usually don't think much about trust issues in the real world, because anonymity in the real world is much harder to attain. Do you trust the guy across the street not to pull out an Uzi and try to mow you down? Most people do, because there are generally compelling deterrents against that (based primarily on the ease of being identified and the difficulty of leaving the scene without leaving identifiable traces). Some people don't, so they either stay inside or only go out in the company of bodyguards. It's a decision that must be made on a case-by-case basis, even if you don't often think about making that choice.

    So one of the real tricks in the online world is finding the right balance between anonymity and identification -- some balance point that protects both the victims and the victimized. This is not the only tricky thing about online security; actual bug-free (or security bug-free) code is at least as important, and the code cannot have any more security than the protocol.

    Most of the IP-based protocols used in the world today ignore all of these in favor of features and glitz, because they're driven by commercial decision processes, but there's always hope that newer protocols will be adopted and that they will supplant the insecure protocols that come out first. (For more on that, I refer the reader to the classic "Worse Is Better" paper.)
  • I got a great deal of knowledge out of the book, and even more valuable was the fresh perspective that it offers. For example, I never considered "secure server" (SSL) to mean what it implies. I was happy to learn I was right, and that my cynicism (experienced viewpoint??) was justified. But wait.... there's hope... it turns out that the fiscal security you want is guaranteed by the credit card companies, who limit the losses, regardless of the Swiss-cheese nature of Internet security today. (Never use a debit card online, though -- losses aren't limited.)

    This is just one of the many revelations and insights Bruce has to offer in this very well written book. I learned about threat trees, the true nature of the security landscape, ... and so much more it's amazing.
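    The "threat trees" mentioned above (Schneier calls them attack trees) are easy to sketch in code; the structure and the node costs below are invented for illustration, not taken from the book:

```python
# Toy attack tree: OR nodes let the attacker pick the cheapest branch,
# AND nodes force the attacker to pay for every branch. Costs are made up.

def cheapest_attack(node):
    """Minimum cost for an attacker to achieve this node's goal."""
    kind, payload = node
    if kind == "leaf":
        return payload
    costs = [cheapest_attack(child) for child in payload]
    return min(costs) if kind == "or" else sum(costs)

# Goal: read a target's encrypted file.
tree = ("or", [
    ("leaf", 100_000),          # break the cryptography outright
    ("and", [("leaf", 500),     # bribe a secretary for office access...
             ("leaf", 200)]),   # ...and photograph the passphrase note
    ("leaf", 50),               # guess a weak password
])
print(cheapest_attack(tree))    # the weakest link sets the price
```

    The tree makes the book's central point in miniature: the attacker's cost is set by the cheapest path, not by the strength of the cryptography.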

    Buy it, read it, twice. (I'm about to start again)

    --Mike--

  • Bruce spoke on a couple of panels at Chicon [chicon.org]. I hope he'll forgive me, but I am reconstructing this from somewhat sleep deprived memories. He was quite frank about the fact that when he wrote Applied Cryptography, he believed that the proper use of cryptography could provide security. He said that his actual observations since then have convinced him that it is not possible for humans to use a system and for it to remain completely secure. The limits on human memory for pass phrases and the need for access to the secured data are two of the biggest problems. Although I don't remember him saying it outright, another is the limit on individuals' ability to stay up-to-date.
  • Here is my rather long (and negative) review of Schneier's book. For those unwilling to wade through it, my key point is that being a good mathematician doesn't necessarily qualify one to be a good programmer. He truly doesn't understand programming and hence doesn't believe that a single secure piece of software could ever be written.

    After writing a wonderful book on Applied Cryptography, Bruce Schneier lost his faith in mathematics. This loss of faith came from looking at truly applied cryptography, namely looking at actual source code. This code so scared him that he wrote a book saying that cryptography is not The Answer(tm). I beg to differ.

    He thinks real code should scare you so much that you should hire his company to continuously monitor your computer. Not a one-time vetting -- you should pay him every day for the rest of eternity. Not a bad racket.

    The key points he makes are as follows:

    • There are about 5 - 15 bugs per 1000 lines of code.
    • Software has been doubling in complexity and size every year or two.
    • Windows NT has 35 - 60 million lines of code and hence about 100k bugs.
    • Therefore no modern product will ever actually be secure.
    Modern software will generate new bugs at a faster rate than even the "many-eyeballs" of open source can squash them.
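    The arithmetic behind those bullets is straightforward; here is a quick sketch using the quoted figures (which are the round numbers cited above, not measured defect data):

```python
# Expected latent bugs at a given defect density.
# Rates (5-15 bugs/KLOC) and line counts (35-60M for NT) are the
# round numbers quoted above, not measurements of any real codebase.

def expected_bugs(lines_of_code, bugs_per_kloc):
    return lines_of_code / 1000 * bugs_per_kloc

for lines in (35_000_000, 60_000_000):
    low = expected_bugs(lines, 5)
    high = expected_bugs(lines, 15)
    print(f"{lines:,} lines: {low:,.0f} to {high:,.0f} latent bugs")
```

    Even the low end lands in the hundreds of thousands, which is the order of magnitude the argument turns on.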

    But code doesn't have to be written this way. You can put all of your risky code in a small enough package that it could be checked for errors. The word kernel comes to mind. :-) But, he also says that Linux suffers the same problem that MS has. Unfortunately, I don't know the kernel well enough to comment on how much it has grown and he doesn't provide data on the growth of Unix kernels. Somehow I think this absence of information might reflect the fact that the current Linux kernel is not 1000 times bigger than say a Solaris kernel of the early 80s.

    But under Linux, is even counting the lines of code a good measure? Somehow I think the kernel is modular enough so that if I load a new PCMCIA module, it wouldn't automatically be given rights to read and write to arbitrary files on the system. Please correct me if I'm wrong, and I'll sleep much less well at night. So not all the code in the kernel should be counted as being the same.

    I would be much happier with his analysis if he had looked at contrasts between sendmail (which is notoriously buggy) and qmail (which doesn't appear to be as buggy). The software point is that the dangerous code in qmail is all in one program--and that program doesn't trust any of the other pieces that make up qmail. So if that part is actually programmed bug-free, then there shouldn't be ANY possible bug in the rest of the code that can undermine security. This is good design. Almost any line of the sendmail program could undermine the security of the system. This is truly a monolithically bad design.
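    The qmail-style design described above -- confine the dangerous work to one small piece and trust nothing it says -- can be sketched like this (a toy illustration of the principle, not qmail's actual architecture or code):

```python
# Privilege-separation sketch: the risky parsing runs in a separate
# child process that holds no secrets; the parent accepts only a
# narrow, re-validated result. Toy example, not qmail's real design.
import json
import subprocess
import sys

UNTRUSTED_PARSER = r"""
import json, sys
# Risky parsing is confined here: even if this process is subverted,
# it has no credentials or privileged file handles to leak.
line = sys.stdin.readline()
user, _, domain = line.strip().partition("@")
print(json.dumps({"user": user, "domain": domain}))
"""

def parse_address(raw: str) -> dict:
    proc = subprocess.run(
        [sys.executable, "-c", UNTRUSTED_PARSER],
        input=raw + "\n", capture_output=True, text=True, timeout=5,
    )
    result = json.loads(proc.stdout)   # narrow, structured interface
    # The parent never trusts the child: re-validate everything.
    if not result.get("user") or not result.get("domain"):
        raise ValueError("malformed address")
    return result

print(parse_address("alice@example.com"))
```

    The interface between the pieces is so narrow (one line in, one JSON object out, re-checked by the parent) that a bug in the parser cannot grant an attacker anything beyond a wrong answer.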

    To list another example, almost all of current open source pgp (namely gpg and its supporting material) uses gpg for the actual encryption and some other program for the viewing. So no matter how stupid the viewing program is, it is impossible for it to undermine security. (OK, it could send a copy of the plain text after it sends a copy of the encrypted message--but that would be an easy bug to catch.)

    Even viruses like the ILOVEYOU worm in a hopelessly insecure operating system should be fairly easy to avoid. If you simply had Visual Basic always run in something equivalent to a change-rooted environment, it would have been impossible to write such a virus. Whether this could be done in windoz isn't the issue--instead the point is that people have known about this sort of problem for years and there has been a simple fix for years.

    In his defense, he does seem to spend most of his time working in the MS world. That he worries about someone running a game server on their machine without having vetted all of the code would be a very rational worry in MS. But, there are ways this could be done under Linux that would maintain total security of the machine it ran on without looking at even one line of source code (run the program as a regular user in a chroot environment sounds safe to me).
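    The "regular user in a chroot" recipe can be sketched as follows; it needs root to set up, the jail path and IDs are placeholders, and a real jail also needs a populated filesystem and more hardening than shown:

```python
# Sketch of confining an untrusted server: chroot, then drop privileges.
# Order matters: chroot while still root, shed groups before setgid,
# and setuid last (it is irrevocable). Paths and IDs are placeholders.
import os

def drop_into_jail(jail_dir: str, uid: int, gid: int) -> None:
    os.chroot(jail_dir)   # confine the filesystem view
    os.chdir("/")         # don't keep a directory handle outside the jail
    os.setgroups([])      # shed supplementary groups
    os.setgid(gid)        # drop group privileges...
    os.setuid(uid)        # ...then user privileges, irrevocably
    # Now exec the untrusted program, e.g.:
    # os.execv("/server", ["/server"])
```

    The point is that the untrusted code then runs with only the rights of an ordinary user inside an empty filesystem, so even an unvetted program can't touch the rest of the machine.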

    So I see the picture something like this.

    • Cryptography is not a solution to all problems:
      • Digital cash probably won't catch on.
      • Smart cards probably won't ever work.
      • All the fancy algorithms for voting and sharing information will never replace the voting booth.
      • Playing poker will probably use a trusted intermediary instead of a cryptographic protocol.
    • Cryptography can solve some things: SSH, GPG and VPNs all work.
    • But, the key reason a system will be cracked into is not new mathematics but bad coding.
    • There are ways of coding (see Lakos for many good ideas) that will lead to secure programs.
    • Users will always be a weak link, but a good system (say Linux) should only compromise what that user had access to and not the whole network.
    Conclusion

    Schneier is incorrect when he says that security is a process. Instead, security is a solvable software engineering problem. In fact, I think a few small pieces of it have actually been solved. I think mail handling has been solved (qmail) and telnet has been solved (ssh2 with public/private keys). Certainly serving static web pages is solved (apache).

    Keeping users from getting root access should be a solvable problem, but I don't know if it is currently solved or not on Linux systems. Once that is solved, serving CGI scripts, running arbitrary servers, downloading arbitrary code off the net and running it on your local machine should all be safe things to do. (Now I don't think the automatic updates to GNOME are going to pass security muster anytime soon.)

    So let me make a statement that most clearly separates Schneier's position from my own. Consider the following two systems:

    • A Linux system running ssh2 and qmail that is never patched. Total passive management. (Of course the ssh2 would require public key/private key pairs instead of passwords.)
    • An NT system (or whatever is the latest and greatest MS product) that has an active administrator who installs all MS patches within 24 hours of their release and upgrades to the newest version whenever it comes out.
    Which do you think has a higher chance of being secure over the next few years? Schneier argues that active management is the only way of providing reasonable security, so my strawman version of him would pick the NT system. I think the Linux machine would probably be safe for 10 years. (I'd go longer, but don't trust the key length of ssh2 to protect against all new mathematics and hardware past about 10 years.)
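    For concreteness, the "passive" Linux box above assumes key-only SSH; a minimal version of that setup might look like the following (the sshd_config directives are standard OpenSSH options, though exact option names have varied between ssh implementations and versions):

```shell
# On the client: generate a keypair (default file names shown).
ssh-keygen -t rsa -f ~/.ssh/id_rsa

# On the server: forbid password logins in /etc/ssh/sshd_config:
#   PasswordAuthentication no
#   PermitRootLogin no
# then install the client's public key and lock down permissions.
cat id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

    With passwords disabled outright, the weak-passphrase problem discussed throughout the thread simply isn't exposed to the network.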

    If you went with NT, read Schneier's book. He will give you good arguments to believe that active management is the only answer. If you went with a limited Linux system, then join the open software movement and see if we can add more features to the Linux box without compromising security.

  • I think you misunderstand me. My point is that too many people are fooled into thinking that we can create the perfect, secure environment. As it stands right now, we are usually one step behind the hackers and crackers. How often do you see MS or Sun or Red Hat or any other manufacturer come out on their own and say, here's a security patch to a problem we just found? It doesn't happen.

    Quite opposite of what you thought I meant, I think that the first step to improving digital security is to become more proactive than reactive to security.

    The point my above post was trying to convey is that we're currently content to say I've secured my system against all known attacks. So what? Sometime down the road there will be a new hack out there and your system is going to be vulnerable. Too often, I've seen people like this. Security is an ongoing process that will never be complete.

    I never said we shouldn't try to make better security. I just meant to imply that we will always find a way around security. Take DeCSS for example. The MPAA convinced all the movie studios that CSS was uncrackable; that's why they adopted it as the standard for DVD. Now look at it. The MPAA is spending billions in a losing battle to prevent people from decrypting it.

    Never get sucked into believing that security makes your system secure. There will always be talented people out there putting in the effort to prove you wrong.

    kwsNI

  • we literally cannot conceive of something which is entirely self-sustaining

    hmmm... What about those blown glass spheres that contain a complete balanced biosphere?

    Bzzztt. While these glass things have closed mass, they are not thermodynamically closed -- energy (e.g. sunlight) freely moves back and forth. Your example is not self-sustaining.

  • I just finished this one too. First the obvious positive comments: Very readable, usually very clear, very broad scope. I think every issue that a security manager needs to know about is at least mentioned, with the really important issues discussed at length. Schneier tries (and usually succeeds) in writing for a general audience without dumbing down the important stuff. Mandatory reading if you have any interest in security.

    That being said, there are some nits I have to pick. I disagree that the book is "well researched." The important stuff is all obviously drawn from his own experience (obviously extensive) as a security consultant, supplemented with various anecdotes from the web, journals, etc. A useful knowledge base, but not that impressive researchwise.

    This is aggravated by Schneier's use of non-technical examples and analogies in many of his arguments. The arguments themselves are very strong, but when he cites this historical example or that financial practice, he often gets his facts wrong. I don't suppose this has a big effect on his credibility, but it must have some.

    It's also a little disappointing that Schneier didn't bother to get into the general history of the Enigma/Ultra business -- a prime example of his basic theme, that the smallest failure of the security process leaves you vulnerable to machines with infinite patience.

    Finally, I'm very, very disappointed that Schneier fails to challenge -- and sometimes even supports -- the social conservative attitude towards hacking and reverse engineering. He points out the futility of trying to encrypt DVDs -- but barely touches on the DMCA. He speaks of general software hacking as a basically benign activity -- but he supports criminal punishment even for the most non-invasive electronic "trespass". This is a point of view utterly at odds with his ideas of security considered in a complete social context.

  • Bruce seems to be so disillusioned in this book - it reminds me of the tones of Data Smog.

    Yes, it is impossible to secure a system completely - every security system has a human element, and every human element has a head that can have a gun held against it.

    It's sort of amusing that Bruce took this long to come to this realization.

    Don't mind the doomsayers - if you follow sound security policies and practices, you will likely be okay. If not, well, that's why you buy insurance.

  • I still recommend The Code Book [amazon.com] for more general reading. (Normal Amazon link - none of that affiliation crap).
  • Gee whiz, Hemos. Did you actually have to break out the <font size=+1> so we'd notice the ThinkGeek link?
  • I just read this book last week.
    I am not a security consultant, nor am I an uber cracker. I am a programmer who due to middle management mismanagement is often forced to worry about network security a lot more than I should, but I digress.
    I found a lot of the major points were identical to many things I learnt at university, and I was disappointed by the scaremongering the author employed. I would assume that anyone who reads a book regarding advanced security policy implementations would be quite aware of all the shortcomings in a lot of organisations' systems, and thereby doesn't really need to be bombarded with anecdotal nightmare scenarios.

    I am disappointed by some of the sensationalism which is being used by a lot of "security" books.
    There is a new genre of books coming out, pop-tech: books which are half based on sound technical information (albeit derived from academic channels) and half pop-culture semi-futurism/apocalyptic tea-leaf reading.

    Whilst this book is on the softer end of that spectrum it still falls into the same band.
    Having said that, a lot of good points are raised and it will jog your memory, but if you want a sounder perspective on security and encryption I suggest you go to your local university bookstore and get a book which was written by someone who doesn't run a company in the field and doesn't need to drum up business.

    If you want the futuristic tea-leaf reading then perhaps Asimov is more your thing ;) Still can't get over the fact that he invented the satellite.

    This book is somewhat of a hybrid (as mentioned before it is by no means the worst offender, but it is definitely in the same cell block). Please don't try to develop security policies for your organisation solely from this book, or the tea leaves will end up telling the truth.
  • by B. Samedi ( 48894 ) on Tuesday September 19, 2000 @08:57AM (#768618)
    Schneier explains a lot of stuff, like what authentication is, what a private key is and so on, that most geeks will know backwards. I'm willing to bet that anyone that read his Counterpane newsletters will probably not learn a huge amount of new stuff.

    Why do you assume most geeks know this kind of thing well? I know several geeks and all of them know little about cryptology. As long as it works they're happy, and they continue on with whatever their pet project is. It never hurts to review the material.

    As for grok, I wouldn't consider that just a geek word. My parents know it (sure, they don't use it, but they know it) and they do not read science fiction at all. Sometimes you'd be surprised: words and concepts you think non-geeks won't get, they do understand.

  • by Hard_Code ( 49548 ) on Tuesday September 19, 2000 @07:38AM (#768619)
    Yes, cryptography is great, but until you can encrypt human beings you will not be able to construct the wonderful theoretically impervious systems thought up. Security will always be a set of tradeoffs, and never really bulletproof...security has to be thought of holistically as a system, environment, ecology.
  • by warpSpeed ( 67927 ) <slashdot@fredcom.com> on Tuesday September 19, 2000 @07:10AM (#768620) Homepage Journal
    The book is way cheap at buy.com (under $15 US, plus $4 for shipping) -- unless they were targeting discount cookies at me. That beats the hell out of Amazon and ThinkGeek.

    ~Sean

  • by kwsNI ( 133721 ) on Tuesday September 19, 2000 @08:54AM (#768621) Homepage
    There will never be complete security in a networked world. The very nature of networking is to allow people access to information and as soon as you give them that, there will always be some way for someone with enough determination to get more than they're supposed to.

    The only sure-fire way to make a system secure is to remove every input (KBD, Mouse, Serial, Network, Parallel, USB, etc.) port from a system.

    kwsNI

  • by jd ( 1658 ) <imipak@ y a hoo.com> on Tuesday September 19, 2000 @07:02AM (#768622) Homepage Journal
    Deterrence is a fairly useless form of defence.

    Its biggest success has been that Russia and the US haven't nuked themselves to oblivion and back. Yet.

    However, it's failed abysmally in just about every other sector of life. Tough jail sentences, guns, the death penalty, etc., have usually attracted more crime, because they delude people into feeling safer than they are (thus reducing any REAL defence) and increase potential rewards (rewards often being proportional to risk).

    No. The best defence is NOT deterrence. Even a pitbull can be distracted, fed, drugged, trapped, etc. The best defence is to stop making pointless assumptions.

    In computing, true security comes when you can say (HONESTLY) that your server box is untrusted, your client box is untrusted, your network is untrusted and you can STILL store and move data securely.

    Security isn't about stopping someone getting in. Someone can ALWAYS get in, if they try hard enough. Security is about knowing that once that somebody -DOES- get in, they can do nothing with or to your data that isn't possible by anyone outside the machine, anyway.

    A locked door is no guard, and an alarm can always be bypassed. If you put your valuables in plain sight and easy reach, the best deterrence in the world can only buy you time, not security.
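    The "everything untrusted" stance above can be made concrete with a toy sketch (mine, not the poster's, using only Python's standard library): the client holds a secret key, and any data parked on an untrusted server carries a MAC, so tampering anywhere along the way is detectable.

    ```python
    import hashlib
    import hmac

    # Hypothetical client-side secret; in this model it never leaves the client.
    SECRET = b"client-side key, never sent to the server"

    def seal(data: bytes) -> tuple[bytes, str]:
        """Attach a MAC so an untrusted server/network can't silently alter data."""
        tag = hmac.new(SECRET, data, hashlib.sha256).hexdigest()
        return data, tag

    def verify(data: bytes, tag: str) -> bool:
        """Recompute the MAC; constant-time compare avoids timing leaks."""
        expected = hmac.new(SECRET, data, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

    data, tag = seal(b"account balance: 100")
    assert verify(data, tag)                          # untouched data verifies
    assert not verify(b"account balance: 9999", tag)  # tampering is detected
    ```

    This only covers integrity, not confidentiality; the point of the sketch is that the guarantee comes from the key the attacker doesn't have, not from trusting any box or wire in between.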

  • by FallLine ( 12211 ) on Tuesday September 19, 2000 @07:54AM (#768623)
    BEGIN RANT/

    Though I believe there is a lot of truth to that statement, I've also seen it applied to an extent that hurts overall security. Generally speaking, this world is not nearly so simple. Where systems break, it generally involves a failure on multiple levels. For instance, look at the numerous social engineering scams. Rarely is it just ONE person that broke the entire security, but rather a bunch of different people within the target organization being too careless with information. All those careless bits, in turn, interplayed with one another and allowed the crackers to build up the key to access the desired information. The point is that if each person were just twice as aware as before, that could go a long way toward preventing break-ins. The same goes for the vast majority of hacking incidents. Rarely are they some stroke of genius on the part of the hacker, seeing things that no one has seen before. Instead they're previously documented things that could've been avoided with reasonable effort.

    I sincerely believe that effective security is attainable, provided enough effort is put into it, even though one may never be 100% theoretically secure. That is to say, if all the key players involved simply paid more attention to security, actual instances of hacking secured sites would be rare. Let's say we have two major layers of security. Each layer has 50 trained professionals go over it for all known bugs, and for anything theoretical they can find. Assuming the organization keeps up to date on emerging threats, monitors its security system, and keeps the source and specific specs on the protocol closed, the odds of a hacker DISCOVERING two bugs, one in each layer, that none of the pros saw is quite slim (about as good a guarantee as you can get in life anyway).

    Unfortunately, the only standard most people have to compare it with is nothing even approaching that. For instance, almost every single operating system (yes, there are one or two exceptions), including the Linux distros, has shipped with well-known exploitable bugs. They may not have known there was that specific bug in that specific package/module/whatever, but if they had really double-checked for existing bugs, it'd never have shipped like that. Likewise for most of these hacking incidents. They're well-documented techniques simply being reapplied. There is simply no excuse for it. One may not be able to guarantee that no bugs exist, but one can certainly guarantee that certain conditions don't exist.

    /END RANT
  • by chancycat ( 104884 ) on Tuesday September 19, 2000 @06:38AM (#768624) Journal
    Also recommend: Information Warfare and Security by Denning. Slightly different topic and a bit aging, but full of information and perspective.

  • by jjr ( 6873 ) on Tuesday September 19, 2000 @06:50AM (#768625) Homepage
    Sometimes the best way around security is through the people who secure the system. I have gotten passwords without any kind of verification. You cannot secure your system without first training your people on how to secure it.
  • by Azog ( 20907 ) on Tuesday September 19, 2000 @07:06AM (#768626) Homepage
    I was very impressed. It's interesting reading, not extremely technical, and has lots of good tips on how to think about building secure systems. There's not much detail on any particular system - that's not the focus of the book. Since it focuses more on concepts and less on specifics, it will remain relevant for a long time, compared to "Securing Red Hat Linux 6.0" which is already out of date.

    An interesting thing about the book is the contrast between two concepts that may seem contradictory at first: security is only as strong as its weakest link, but layered security can be stronger than any one part.

    It's like the difference between logical AND and logical OR. You want to build security systems where to break it, an attacker would have to break through a firewall AND steal a password AND get root access from user access AND evade the network monitoring system, etc. This is security in depth - stronger than any one link.

    Unfortunately, most systems are designed as logical OR: To break the security, the attacker just needs to penetrate the firewall OR steal a password OR buffer-overflow a CGI script, etc. This is "weakest link" security.
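    The AND/OR distinction above can be put in numbers with a toy model (my sketch, not the book's; it assumes, unrealistically, that layers fail independently with known probabilities):

    ```python
    # Toy model: each layer stops an attacker with some probability,
    # independently of the others -- a simplifying assumption.

    def p_breach_and(layer_strengths):
        """Defense in depth: the attacker must defeat EVERY layer."""
        p = 1.0
        for s in layer_strengths:
            p *= (1.0 - s)  # chance of defeating this one layer
        return p

    def p_breach_or(layer_strengths):
        """Weakest link: defeating ANY single layer is enough."""
        p_all_hold = 1.0
        for s in layer_strengths:
            p_all_hold *= s  # chance this layer holds
        return 1.0 - p_all_hold

    layers = [0.9, 0.9, 0.9]  # three layers, each stopping 90% of attacks
    print(p_breach_and(layers))  # ~0.001: layering compounds in your favor
    print(p_breach_or(layers))   # ~0.271: weakest-link compounds against you
    ```

    Same three layers, wildly different outcomes -- which is the book's argument for designing systems so that an attacker faces an AND of defenses, not an OR.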

    Other things that stuck in my mind from the first reading: no matter how strong you build it, someone will eventually break it, so design it for easy recovery. The CSS system on DVDs is an excellent example of this -- now that it's been broken, there is no good way for the DVD manufacturers to recover. They can't change the encryption system without breaking compatibility... (The CSS/DeCSS system is actually used as an example several times throughout the book.)

    I highly recommend this book. I'll probably reread my copy several times.


    Torrey Hoffman (Azog)
  • by McMuffin Man ( 21896 ) on Tuesday September 19, 2000 @08:23AM (#768627)
    It's probably worth pointing out that when Schneier wrote Applied Cryptography, which pushed crypto as the solution to security problems, he was making his living as a crypto consultant. Now that he's published Secrets & Lies, which says there is no security and the only comfort is in eternal vigilance, he's making his living running a company that sells monitoring services.

    Personally, I think Bruce is an upright guy, and that what's happening here is that at each point in time he both writes about and works to address the problem he actually believes in. But it always pays to pay attention to the potential biases of your sources.
  • by _Sprocket_ ( 42527 ) on Tuesday September 19, 2000 @08:56AM (#768628)
    The day you walk into the Big New Project meeting and say "We can't deliver this on time because we need to do a security audit" is the day security gets ignored once more.

    And the real motto? It's tough to be on the technical end, because even if your advice is ignored, you can bet your head is on the line for the mistakes.

    I work at one of the few big corporations that really seem to "get" information security. Granted, it took some major security incidents to bring about this change in mindset - but its happened. Today, the security department has teeth. Do we butt up against production schedules? Constantly. Do we take some flak for it? Sure. You've got to have a thick skin.

    Of course, it takes more than a thick skin to deal with this environment. You're diametrically opposed to developers -- you want things secure, they want things to work (the inverse relationship of functionality vs. security). The trap here is becoming this big obstacle that developers have to figure a way around to get "real work" done. We avoid that.

    In our environment, information security advises on projects. When we note an insecurity, we bring it to the developer's attention and help figure out more secure alternatives. If the developer wishes to push an insecure solution, they need to get management to assume the risk presented.

    This process does a few amazing things. The first thing is it makes security a part of everyone's interest - not just the information security department. The developer has to honestly look at the situation and create a strong enough business case to justify the risk to the manager assuming that risk. The management (in CYA mode) is going to look at this business case very closely before accepting it. If the risk is justified, it gets accepted. If not, the developer is forced to seek out a more secure method and security isn't the Bad Guy impeding progress.

    The only caveat to this is the company's culture. Accountability and a reasonable understanding of a risk's scope are required. Some insecure, but acceptable, decisions will pass (read: risk management). It's crucial that information security's recommendations are well laid out, understood, and valued by management. Alas, that's not always the case.

  • by 1984 ( 56406 ) on Tuesday September 19, 2000 @06:55AM (#768629)
    As other posters have pointed out, there's much more to a company than just systems security.

    As Schneier points out at one point in the book, the real problem is commitment to security. The odd high-profile Web site defacement or DB-of-card-numbers theft gets security onto the agenda inside companies, and the edict comes from the business to "guarantee security, whatever the cost" as the usual ill-informed media tsunami breaks all over the Web.

    But there is always the assumption that "cost" means a dollar (pound, mark, whatever) figure. It certainly costs money to do it right, but the real cost is in changing working practices, restricting timescales, guaranteeing proper code audits before deployment and so on. These are the costs that the business is rarely -- if ever -- willing to bear for more than a few fleeting moments. The day you walk into the Big New Project meeting and say "We can't deliver this on time because we need to do a security audit" is the day security gets ignored once more.

    And the real motto? It's tough to be on the technical end, because even if your advice is ignored, you can bet your head is on the line for the mistakes.
