
Exploiting Software

prostoalex writes "Why are networked computing environments so insecure? You've heard the story before - early computers were not designed to work in the network environment, and even most software written later was designed to work on benevolent networks. As Bruce Schneier says in the preface to Building Secure Software, 'We wouldn't have to spend so much time, money and effort on network security if we didn't have such bad software security.'" Read on for prostoalex's review of Exploiting Software, which aims to balance that situation somewhat.
Exploiting Software: How to Break Code
author: Greg Hoglund, Gary McGraw
pages: 512
publisher: Addison-Wesley Professional
rating: 8
reviewer: Alex Moskalyuk
ISBN: 0201786958
summary: Techniques and software used to attack applications.

What kind of secure are you after?

Published titles on the topic of software security are numerous, but most of them follow certain patterns. Building Secure Software by Viega and McGraw was mainly concerned with proper techniques and the general software engineering mindset, without going into specifics. Then there was Writing Secure Code, by Howard and LeBlanc, which provided concrete examples and showed the "right way" to do secure coding. I hear the title instantly became required reading at the world's largest software corporation. It's currently in its second edition.

Secure Programming Cookbook for C/C++ by Viega and Messier was the hands-on title for those developing C/C++ applications with security in mind, as the cookbook recipes generally gave examples of good code, with each chapter providing some general background information on the topic discussed (I reviewed it on Slashdot in September last year).

Just in case you were wondering, the list above wasn't just retrieved by a quick search at Amazon. My Master's degree, completed last summer, dealt with the topic of software security, and those are the titles I've read preparing to write the theoretical part.

From the other side

With the variety of books on how to write secure software, and on what techniques make existing software more secure, there was a niche for a book targeted specifically at those who want to break software. Black hat or white hat, network security experts have always had titles like Hacking Exposed to give them an idea of the techniques and methodologies in use out there. For software security, most articles and books would generally tell you something along the lines of "do not use strcpy(), as it introduces buffer overruns."

Great, so I won't use strcpy(); does that make my application more secure? Is it more or less hack-proof? What if I am a tester required to probe this aspect of the application to ensure its security before the product ships? Theoretically, hanging out in the proper IRC channels and getting lifetime Phrack and 2600 subscriptions should be enough to cover you at the beginning; however, the learning curve leaves much to be desired, not to mention that you will probably be kicked out of those channels for asking n00b questions. Another path is an expensive training course from someone with a name in the industry, but the price tag generally leaves out self-learners and those operating on limited budgets, which adds up to about 99% of the software engineers and testers out there.
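
To make the strcpy() advice concrete, here is a minimal sketch (my own illustration, not code from the book; the function names and the 16-byte buffer are made up) of the classic overrun and the bounded idiom usually suggested instead. Note that the "fix" silently truncates long input, which can be its own bug; that is exactly why "don't use strcpy()" alone doesn't answer the question.

    /* Classic overrun: strcpy() keeps writing until it hits a NUL,
     * no matter how small the destination buffer is. */
    #include <stdio.h>
    #include <string.h>

    void greet_unsafe(const char *name) {
        char buf[16];
        strcpy(buf, name);     /* undefined behavior if name needs >= 16 bytes */
        printf("Hello, %s\n", buf);
    }

    void greet_bounded(const char *name) {
        char buf[16];
        /* snprintf() never writes past sizeof(buf) and always NUL-terminates,
         * but it silently truncates, which may itself break assumptions. */
        snprintf(buf, sizeof(buf), "%s", name);
        printf("Hello, %s\n", buf);
    }

    int main(void) {
        greet_bounded("a string much longer than sixteen bytes");
        return 0;
    }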

Exploiting Software to the rescue.

Exploiting Software fills the void that existed in this market. Eight chapters take you through the basics and some advanced techniques of attacking software applications with the purpose of executing arbitrary code supplied by an attacker (you).

The book mainly deals with Windows applications on the x86 platform, and some knowledge of C/C++ and the Win32 API is required to work through the example applications. To automate some processes and demonstrate possible attacks the authors use Perl, so knowledge of that helps the reader, too. Some chapters (e.g. the buffer overflow one) show disassembler output, and while you're not expected to read x86 assembly as if it were English, you do need to know how the registers work and how subprocedure calls are handled on the Intel architecture. After all, if potential attackers know this low-level material, you had better familiarize yourself with it, too.
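
For readers wondering what "how subprocedure calls are handled" means in practice, here is a rough sketch: a trivial C function with, in comments, the classic 32-bit x86 cdecl prologue and epilogue a compiler typically emits (this is an illustration; actual output varies by compiler and optimization level). The point the book builds on is that locals, saved registers, and the return address all share one stack.

    int add(int a, int b) {
        /* push ebp        ; save the caller's frame pointer           */
        /* mov  ebp, esp   ; establish this function's stack frame     */
        /* arguments now sit at [ebp+8] and [ebp+12]; locals go below  */
        /* ebp, right next to the saved return address, which is why a */
        /* local buffer overrun can overwrite where "ret" will jump    */
        int sum = a + b;
        /* mov  esp, ebp   ; tear down the frame                       */
        /* pop  ebp        ; restore the caller's frame pointer        */
        /* ret             ; pop the return address and jump to it     */
        return sum;
    }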

While discussing the various possible attacks, the authors present attack patterns. The patterns themselves usually appear in gray textboxes and describe the possible exploit in general terms. After that, a series of attack examples follows, with specific descriptions of what can be done, and how. For example, the attack pattern on page 165 is titled "Leverage executable code in non-executable files." The attack example that follows is "Executable fonts," and it discusses how font files are treated by Windows systems (they are a special form of DLL). Thus it's possible to embed executable code into a font library you're creating, for which the authors provide an example in Microsoft Visual Studio.

What's cool is that all the attack patterns are listed in a separate table of contents (alas, not in the table of contents on the Web site, which lists just the chapters and subchapters), so you can browse to the attack pattern you want to learn about, read some general information, and then study specific examples. The examples themselves are not in that table of contents, which I think is a mistake, since listing them would make searching for a particular pattern much easier. After all, how are you supposed to know that "Informix database file system" (p. 189) falls under the "Relative path traversal" pattern? Unless you already know that the line http://[Informix database host]/ifx/?LO=../../../etc/ is the one discussed in the example, you have to either go through the index hoping no omissions were made, or read the chapter in its entirety.
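
As an aside, the traversal in that URL is easy to reproduce in miniature. The sketch below is mine, not the book's (the document root and helper names are invented): a server that naively glues a client-supplied path onto its root will happily walk out of it, and even the common "reject dot-dot" check shown here is incomplete without canonicalization (e.g. realpath()) plus a prefix check on the result.

    #include <stdio.h>
    #include <string.h>

    /* Naive join: "../../../etc/passwd" escapes the document root. */
    void build_path(char *out, size_t n, const char *root, const char *req) {
        snprintf(out, n, "%s/%s", root, req);
    }

    /* Crude first-line check: refuse anything containing "..".
     * Real code must canonicalize and re-verify the root prefix. */
    int looks_like_traversal(const char *req) {
        return strstr(req, "..") != NULL;
    }

    int main(void) {
        char path[256];
        const char *req = "../../../etc/passwd";
        if (looks_like_traversal(req)) {
            fprintf(stderr, "rejected: %s\n", req);
            return 1;
        }
        build_path(path, sizeof(path), "/var/www", req);
        printf("serving %s\n", path);
        return 0;
    }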

One of the best chapters of the book, "Reverse Engineering and Program Understanding," which provides a good introduction to the techniques used throughout the book, is available online from Addison-Wesley. With the free chapter you already have one eighth of the book, but don't let the small chapter count make you think this 512-page title is an introductory book.

Target Audience

Looks like there are two major audiences and reading patterns for this book: those who want to fix their systems ASAP and use Exploiting Software as a reference, and those who use it as a textbook to learn about security. I've discussed the organization of the book above; the reference types will probably be more interested in the patterns and examples. For a casual reader (although casual readers don't generally pick up a title with C++, Perl, assembly and hex dumps spread across its chapters) this is a book of great educational value, from two authors who have discovered numerous security vulnerabilities themselves.

Exploiting Software is not an easy title to read. Addison-Wesley shipped me the manuscript copy a month before it hit the bookshelves in its final version, and I found myself going through about two pages an hour. The authors bring up sometimes-unfamiliar Win32 APIs and occasionally use ready-made tools available on the Web, so I found myself visiting MSDN and Google a lot to read the available documentation and download the latest versions of the tools used. The book doesn't come with a CD. Some of the material, like inserting a malicious BGP packet to exploit a Cisco router (p. 281), is not really testable at home, and I have some reservations about verifying the example on my employer's routers.

The book is probably apt for second- or third-year computer science students and above. Besides the variety of languages I mentioned above, you need to be familiar with the basics of the Intel architecture, and generally fluent with terminology like "buffer," "stack," "syscall," "rootkit," etc., as this is not an "Introduction to..." title. In my experience, you probably won't read it from page 1 to page 512 understanding everything perfectly, but for anyone interested in security, and for those making a career in software development, it looks like a bookshelf must-have.

I interviewed Gary McGraw on the current state of software security, the relevance of the topic beyond C/C++ and improper buffer usage, and future directions in security. Network World magazine also ran an interview with McGraw in which he talks about the reception of the book at the RSA Conference, whether the economic incentives are right to invest in building secure systems, and whether his book does more harm than good by providing a compendium of known exploits.


Alex has written numerous reviews of other software and security titles. You can read more of his opinions at his Web site. You can purchase Exploiting Software: How to Break Code from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.

This discussion has been archived. No new comments can be posted.

  • poetry in motion (Score:5, Insightful)

    by tomstdenis ( 446163 ) <tomstdenis@gmGINSBERGail.com minus poet> on Monday March 15, 2004 @03:12PM (#8570765) Homepage
    Why I love Bruce...

    "We wouldn't have to spend so much time, money and effort on network security if we didn't have such bad software security."

    Is to smart as

    "We wouldn't have some many crumbling roads if heavy vehicles didn't drive on them"

    Is to insightful. I still say the best way to experience Bruce's mind in action is in person. In his books he's trying to pander to the market [of let's face it less than apt people] and in person he's talking with fairly brilliant people [e.g. me ;-)]

    Tom
  • by IO ERROR ( 128968 ) <error@nOSpaM.ioerror.us> on Monday March 15, 2004 @03:18PM (#8570822) Homepage Journal
    This is where input validation comes in. Check every input value for sanity. Do something reasonable if the value isn't sane. How often have you forgotten to write error checking or input validation code? Do you check the return value from printf()? (yes, it has one) Every time? (I doubt it)

    Writing bulletproof software is TEDIOUS. You still have to verify everything, and still somebody's going to find the one thing you missed and exploit the hell out of it...
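
    A minimal sketch of the printf() point above (an editorial illustration, not part of the parent comment): the return value is the number of characters written, or a negative value on error, and stdout really can fail.

        #include <stdio.h>

        int main(void) {
            int n = printf("Hello, world\n");
            if (n < 0) {
                /* stdout can be a closed pipe or a full disk */
                perror("printf");
                return 1;
            }
            return 0;
        }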

  • by Lucky Kevin ( 305138 ) on Monday March 15, 2004 @03:19PM (#8570834) Homepage
    We need to use more intelligent environments to protect us from ourselves (and other less good programmers :-)).
    Like the security manager in Java and the security "taint" stuff in Perl.
  • by Anonymous Coward on Monday March 15, 2004 @03:23PM (#8570867)
    Looks like someone flunked the analogy section on the SATs. Actually what he said is more like,

    "We wouldn't have to spend so much money fixing roads if we would just build more resilient roads in the first place"

    Which is perfectly true. Sure it's not groundbreaking, but then it's not meant to be. The difficult job for quality assurance people is just to make people like you shut up and actually change the problem behaviors.
  • by Ytsejam-03 ( 720340 ) on Monday March 15, 2004 @03:25PM (#8570892)
    I would hope that no one lets a newbie coder get his grimy little paws anywhere near code that requires a careful consideration of security.

    Everyone writing code should be giving careful consideration to security. In my experience few developers do, but that number is increasing...
  • by SphericalCrusher ( 739397 ) on Monday March 15, 2004 @03:29PM (#8570928) Journal
    Exactly. Even though I may pick this book up for a good read, I can already say that a good 50% of hacking is not technical.

    The social engineer shows how much easier it is to obtain information from someone than it is to actually copy it from their computer. Just by dressing properly and knowing the correct lingo, you could easily masquerade as an employee of the company.

    Read The Art of Deception, by Kevin Mitnick. Great read indeed.
  • by ron_ivi ( 607351 ) <<moc.secivedxelpmocpaehc> <ta> <ontods>> on Monday March 15, 2004 @03:38PM (#8570998)
    From the article: " early computers were not designed to work in the network environment, and even most software written later was designed to work on benevolent networks "

    Ugh that's like saying "most office desks aren't secure since their locks are weak and can be drilled easily".

    Part of the problem is one of expectations: people use insecure components and have unreasonable expectations that they'll magically be safe because one piece asked for a password.

    There's nothing wrong with components (desk locks or computers) that expect to be kept in benevolent environments.

    Next they'll be telling us Wikipedia's not secure because their password-checker is weak.

  • by GPLDAN ( 732269 ) on Monday March 15, 2004 @03:42PM (#8571036)
    Schneier seems to have made a career out of stating obvious truisms, like "all security is a tradeoff." I mean, I've read his books. Are these books really considered insightful?

    I don't understand the appeal. The Myth of Homeland Security by Marcus Ranum is 100x the book that Schneier's is. Ranum actually worked in Washington; he not only shows where security breaks down, but the why of the politics behind it. In 100 years, professors who wish to study computer security at this stage of history will put Ranum's book on the syllabus and nobody will remember Bruce.

    I guess this all goes back to Applied Cryptography: a book full of code showing how to implement encryption algorithms that were already widely known, compiled from the Internet. It may have made a stink when it was published, but what's worse is that Schneier rode the publicity all the way.
  • by redragon ( 161901 ) <codonnell.mac@com> on Monday March 15, 2004 @03:45PM (#8571053) Homepage
    Just in case you were wondering, the list above wasn't just retrieved by a quick search at Amazon. My Master's degree, completed last summer, dealt with the topic of software security, and those are the titles I've read preparing to write the theoretical part.

    It's kind of sad that a statement like this is even necessary. It's an interesting statement regarding what kind of qualifications are often necessary just to get a typical reader to give you credit for not being an idiot.
  • by maxwell demon ( 590494 ) on Monday March 15, 2004 @03:48PM (#8571072) Journal
    I think his point is that they are not in such an environment any more, due to the internet. That is, your office desk is now at some public place, with lots of people who'd really like to get in. A place for which it wasn't designed, and for which the security doesn't suffice any more.
  • by brandonY ( 575282 ) on Monday March 15, 2004 @03:52PM (#8571115)
    This is why tools like lint exist. Alongside about 1000 other useful things, lint tells you if you ever fail to check the return value of a function call. Sure, it's tedious to always check the return of printf(), but it's necessary, and to some extent it can be automated.
  • by tomstdenis ( 446163 ) <tomstdenis@gmGINSBERGail.com minus poet> on Monday March 15, 2004 @04:00PM (#8571211) Homepage
    You missed my point [usual for an AC]. The point is Bruce is 99% mouthpiece. People quote him because he knows how to use webmail [I've seen him use it personally, it was fascinating], has long hair and says things like "it's the ramifications of the draconian backbone we are all founded on."

    Seriously though... in person he puts on a good show, has a sense of humour and, more importantly, knows when to turn off the media filter.

    Tom
  • by rixstep ( 611236 ) on Monday March 15, 2004 @04:05PM (#8571271) Homepage
    I find Windows of absolutely no technical interest. They took systems designed for isolated desktop systems and put them on the net without thinking about evildoers, as our president would say.
    - Bill Joy
  • I liked (Score:4, Insightful)

    by g0bshiTe ( 596213 ) on Monday March 15, 2004 @04:07PM (#8571301)
    Hacking: The Art of Exploitation
    It provided these same thoughts on software design, but also delved more into the ASM side of things. The book went on to state that "there is no such thing as secure code," and I believe that statement to be true. With the current patch-and-sniff state of Windows, it is very easy to overflow a buffer to execute code. I have oft heard someone say "my PC is unhackable, I run blah firewall or X NAT"; the sad fact is that they are as easy to compromise as an unsecured networked PC. With the plethora of IE and other browser vulnerabilities out there, you don't need to drive a tank through the front door. Seems though Microsoft left a Window open.
  • by Beryllium Sphere(tm) ( 193358 ) on Monday March 15, 2004 @04:11PM (#8571357) Journal
    >Schneier seems to have made a career out of stating obvious truisms

    Evidently they're not obvious to everyone. If you've been through an airport in the last couple of years, or used a mass-market network-enabled software product, or looked at the security advice given by newspaper columnists, you're forced to conclude that the world needs "Beyond Fear" more than it needs Blowfish.
  • by Greyfox ( 87712 ) on Monday March 15, 2004 @04:14PM (#8571400) Homepage Journal
    I did security auditing on a standard C library in a previous job. We wrote customized automated tests for every freaking C library function. Not only did we document the potential side effects of each of those functions, we could run the entire test suite whenever modifications were made to the library to ensure that everything still worked as expected. That job was a real eye-opener, let me tell you...

    Little things can make a big difference, too. Let me give you a hypothetical: let's say the AIX standard C library strlen() tests its input to make sure it's not a NULL pointer prior to examining the string. Let's further say that the GNU C library doesn't make this test. Recompiling your AIX application on Linux would potentially introduce crashes wherever your application passes a NULL pointer to strlen().

    While the above was a hypothetical situation, I have uncovered a good many memory overflows and leaks simply by compiling and running an application on a different flavor of UNIX than it was originally written for. Having safe underlying library calls is nice, but it also introduces the possibility that actual errors will go unnoticed for a longer period of time.

    I'm pretty well convinced that in a situation where the need for security is high (Say, for example, an OS kernel) documented testing of every single function that makes up the software is a necessity.
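
    A sketch of the hypothetical above (the wrapper name is invented; this is an illustration, not AIX or glibc code): if platforms disagree on NULL handling, define the behavior yourself and pin it down with a test.

        #include <assert.h>
        #include <stddef.h>
        #include <string.h>

        /* Define the NULL behavior explicitly instead of inheriting
         * whatever the platform's strlen() happens to do. */
        size_t safe_strlen(const char *s) {
            return (s != NULL) ? strlen(s) : 0;
        }

        int main(void) {
            assert(safe_strlen(NULL) == 0);   /* defined on every platform */
            assert(safe_strlen("abc") == 3);
            return 0;
        }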

  • by Tjebbe ( 36955 ) on Monday March 15, 2004 @04:16PM (#8571428) Homepage
    And here we see that analogies, like code, are hard to get right the first time (or the second).
  • by Beryllium Sphere(tm) ( 193358 ) on Monday March 15, 2004 @04:19PM (#8571480) Journal
    >C does not have that overhead unless you add it. I don't think that adding another layer is the solution.

    This is a healthy attitude to take while you're writing code. I produce better code when I try to convince myself that bug-free code is attainable.

    If you're designing entire systems, it's safer to take the realistic view that human error occurs at predictable rates in the most highly trained people. Highly trained people are still vulnerable to getting tired, distracted, misled, and to fat-fingering things.

    The question is whether you're willing to spend CPU cycles (the cheapest thing in the world) to reduce the scope for human error (the most certain thing in the world).
  • by Brandybuck ( 704397 ) on Monday March 15, 2004 @04:20PM (#8571493) Homepage Journal
    There's another side to the problem. It's insidious. And while Microsoft is fully embedded in this tar pit of insecurity, Open Source projects are rarely better.

    This problem is "feature requests from users." If very few developers understand security well enough to write secure code, think about how much less end users know. Yet it is the end user who pays us. They're our ultimate boss, even on the free-beer Open Source side of things.

    At work I've had feature requirements come to me from marketing that would absolutely eviscerate the product's security. I've also seen bug reports elevated to top priority that would reduce or eliminate product security.

    Here are some hypothetical (I hope) examples to show the dangers of this in the Open Source arena. While some of these might have been absurd a few years ago, with today's hyper-concern for usability it wouldn't surprise me if they actually got implemented.

    "It's too much work changing file permissions by hand, so we need a way to automatically execute arbitrary files."

    "It's too much work remembering passwords, or remembering the master password for a password manager, so there needs to be a daemon running that will remember for us."

    "Messages in XYZ email client should be automatically rendered in HTML/CSS/Javascript."

    "The interface is too cluttered! Hide file name extensions!"

    Or my all time favorite...

    "Linux needs a InstallShield clone!"
  • by lelitsch ( 31136 ) on Monday March 15, 2004 @04:20PM (#8571496)
    I don't know why this got voted insightful, but you are making a very good point, although involuntarily:
    "Security WON'T work until software engineers and programmers get it into their heads that complicated, invasive security procedures don't work if there are any humans around."

    If the security procedures aren't transparent and easy to use from the user's standpoint, users will expend an extraordinary amount of ingenuity and cunning to get around them. Usually more than any product designer can spend to develop the product in the first place. This doesn't only apply to software, but to everything else. If you put a different combination lock on each filing cabinet (very secure), your office workers will tape a list of all the combinations to the bottom of a desk. If you put a different lock on every door, they'll duct tape over the bolts to keep them from engaging.

    The same applies to software. Get over it and develop a protocol that doesn't hurt the user and is secure. It's hard, but not impossible.
  • by John Whitley ( 6067 ) on Monday March 15, 2004 @04:23PM (#8571518) Homepage
    Further complicating the problem is that even if someone were to develop an environment that attempted to prevent all of the problems caused by programmer errors, it would be horrendously complex and would likely kill performance.

    IMO, a big part of the solution is factoring out solutions for major known security problem areas into the environments, languages, and frameworks that developers use on a day-to-day basis. E.g. if you're using a language with robust automatic memory management, there's little reason to go looking for C-style buffer overflow exploits coded by your developers.

    In today's environments (e.g. Windows and current *nix systems) with current popular languages (e.g. C, C++) we're at a big disadvantage. Much of the discussion in this thread presumes that coders can/should amass total knowledge of all levels of security exploits, from binary code injection to cross-site scripting (aka XSS), SQL injection, etc. It becomes overwhelming for a dev who really should be able to focus on the value-added problems at hand. I'm aware of only one cost-efficient approach: choose environments, languages, and/or tools that mitigate known security risks.

    Where applicable, this can be done by leveraging environments that can limit the scope of attacks. See SELinux [nsa.gov] and GR Security [grsecurity.net] for ways to patch Linux to meet these needs, or the EROS project for a fresh view of OS security and compartmentalization models. Environment choice is most relevant to folks providing networked services, where they can control the platform specifics.

    The cause can also be aided by using languages/frameworks that encapsulate security knowledge. This can be as "simple" as using a language with automatic memory management (to factor out common buffer overflow problems), or along the lines of using scripting frameworks that standardize policies for correctly managing more complex security issues (e.g. cookie management, web input/output validation, XSS issues, etc).

    I'd argue that it is possible to improve software security practices significantly simply by careful choice of the tools and techniques available today. But it takes a savvy organization to really commit to providing secure software solutions, and to be able to do so in a cost-effective manner. As always, the hard part of the equation is programming the wetware. 8-)
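
    One concrete instance of "frameworks that encapsulate security knowledge" (an editorial illustration; SQLite is chosen arbitrarily and the table schema is made up): a parameterized query binds user input as data, so it can never be parsed as SQL.

        #include <sqlite3.h>

        /* Returns the user's id, or -1 on error / no match. */
        int find_user(sqlite3 *db, const char *name) {
            sqlite3_stmt *stmt;
            if (sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?",
                                   -1, &stmt, NULL) != SQLITE_OK)
                return -1;
            /* name is bound as a value: "'; DROP TABLE users; --" stays inert */
            sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
            int id = (sqlite3_step(stmt) == SQLITE_ROW)
                         ? sqlite3_column_int(stmt, 0) : -1;
            sqlite3_finalize(stmt);
            return id;
        }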
  • by RLW ( 662014 ) on Monday March 15, 2004 @05:03PM (#8572018)
    Every function should either check all inputs for valid states and conditions, or be a hidden function that has this done for it. If hidden, it is never exposed as an API, or even beyond the class or compilation unit it comes from.

    As for disasters all out of proportion to their cause, computers are famous for that. There are cases where I've seen very small changes in the code base result in huge messes. This is usually related to pointers or boundary conditions (inputs approaching, on, or just past the extreme ranges of valid inputs).

    The problem isn't that programmers are any lazier than architects. The reason buildings don't fall down from minor stresses is that when they do, people tend to die. Over the course of human history, architects have made monumental mistakes that have cost the lives of many, many people. Two things have come as a result of these failures.
    1) Materials knowledge: knowing just how much stone, brick, wood, etc. is required to hold up a given force over a given span under specific conditions. Mercifully, in this age most of this information comes from intentional failures in stress labs. Even so, modern building failures still happen.
    2) Building codes: given what is known from (1), codes are in place to make sure that architects know what is required to make their buildings safe. Even here, though, the codes are not perfect, and they change fairly regularly.

    People make mistakes. Sometimes these mistakes are made in code. When that happens the computer replicates these mistakes unerringly. Sometimes with frightening rapidity.
  • by Knetzar ( 698216 ) on Monday March 15, 2004 @05:55PM (#8572571)
    perror()?
  • by Anonymous Coward on Monday March 15, 2004 @07:30PM (#8573481)
    It's only insulting if your ego is too big.
    Personally, I'm glad there are tools out there that can help me to prevent fucking up. Because that's what I'm good at. Fucking up. Just like all the other humans out there. Even the ones who don't think they fuck up.
  • by Random Walk ( 252043 ) on Tuesday March 16, 2004 @04:45AM (#8576523)
    Splint relies heavily on annotations within the code. This has two major consequences:
    • it is extremely tedious to use if you have not used it right from the beginning of your project
    • it will tell you that your code is OK if you (via annotations) tell it that the code is OK ... but what if your annotations are incorrect? I think it just moves the problem from writing correct code to writing correct annotations. (See the sketch below.)
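
    A sketch of what those annotations look like (illustrative function names, not a real API; splint reads the @null@ comment as part of the declaration): the annotation tells splint the pointer may be NULL, and splint warns about any unguarded dereference. But the annotation is itself a claim that can be wrong, which is the parent's point.

        #include <stdlib.h>

        /* "May return NULL" becomes machine-checkable documentation. */
        static /*@null@*/ char *find_config(const char *name) {
            return getenv(name);    /* getenv() itself may return NULL */
        }

        void use_config(const char *name) {
            char *path = find_config(name);
            if (path != NULL) {     /* drop this check and splint complains */
                /* ... open and parse path ... */
            }
        }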
