Working Effectively with Legacy Code

Merlin42 writes "I recently took a Test-Driven Development (TDD) training course, and the teacher recommended that I read "Working Effectively with Legacy Code" by Michael Feathers. First things first, a note about the title. Feathers defines "legacy code" a bit differently than you may expect, especially if you are not into the XP/Agile/TDD world. I have heard (and used) a number of definitions for "legacy code" over the years. Most of these definitions have to do with code that is old, inherited, difficult to maintain, or interfaces with other 'legacy' hardware/software. Feathers' definition is 'code without tests.' For those not into TDD this may seem odd, but in the TDD world, tests are what make code easy to maintain. When good unit tests are in place, then code can be changed at will and the tests will automatically tell you if you broke anything." Read on for the rest of Kevin's review.
Working Effectively with Legacy Code
Author: Michael Feathers
Pages: 456
Publisher: Prentice Hall
Rating: 9/10
Reviewer: Kevin Fitch
ISBN: 978-0-13-117705-5
Summary: Excellent overview of how to apply TDD to an existing project
Overall, this is definitely an interesting read, and useful to anyone who has ever yelled "FSCKing LEGACY code!" It will be most useful to someone who already has some appreciation for TDD and wants to use it to 'pay down the technical debt' in a legacy code project. In my opinion, adding unit tests (a sort of retroactive TDD) is the best ... err ... most effective approach for getting a legacy code project into a more malleable state.

One caveat is that most of the book is focused on working with object oriented programming languages. There is some coverage of techniques for procedural languages (mainly C), but this is not the main focus of the book. In a way this is unfortunate, since there is a lot of really useful C code out there gathering dust. But in the book he states that "the number of things you can do to introduce unit tests in procedural languages is pretty small." Unfortunately I would have to agree with him on this point.

One of the greatest things about this book is that it is written by someone who has worked with a lot of legacy code, and there are numerous real world anecdotes sprinkled throughout the text that really serve to help drive the points home. The code examples are plentiful, but not verbose. They all look like real code you might find lurking in a dark corner at work, not some fanciful made up snippet.

The high-level goal of the book is to show you how to write good unit tests for code that wasn't designed with unit tests in mind. The first step in writing unit tests is getting individual classes or functions into a test harness where you can apply known inputs and check the outputs or behavior. To do this you need to break dependencies in the original code. The bulk of the book is dedicated to looking at different approaches to breaking dependencies.
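
To give a rough flavor of what that looks like (this sketch is mine, not the book's, and all the names are invented): suppose a class builds its own database connection internally, so it can't be instantiated in a test. One of the dependency-breaking moves the book catalogs is to pass the dependency in through the constructor, with the old hard-wired object as the default so existing callers don't change:

    # Sketch only: invented names, not an example from the book.
    class ProductionDatabase:
        def fetch_line_items(self, customer_id):
            raise RuntimeError("would talk to the real database")

    class InvoiceCalculator:
        def __init__(self, db=None):
            # Dependency-breaking move: accept the dependency as an
            # optional parameter; production behavior is unchanged.
            self.db = db if db is not None else ProductionDatabase()

        def total(self, customer_id):
            return sum(item["amount"]
                       for item in self.db.fetch_line_items(customer_id))

    # In the test harness, a hand-rolled fake supplies known inputs.
    class FakeDatabase:
        def fetch_line_items(self, customer_id):
            return [{"amount": 10.0}, {"amount": 2.5}]

    def test_total():
        assert InvoiceCalculator(db=FakeDatabase()).total(42) == 12.5

Once a class is in the harness like this, the tests pin down its current behavior, and you can refactor with a safety net.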

Much of the book is organized like a FAQ. There are chapter titles like "I Need to Make a Change. What Methods Should I Test?" and "My Project Is Not Object Oriented. How Do I Make Safe Changes?" This organization makes the book work a bit better as a reference than as learning material. After the first few chapters there is very little flow to the book. Each chapter tends to stand as an independent look into a particular problem common in legacy code. As a result, you can read the table of contents and usually skip straight to a self-contained chapter that will help with the problem at hand.

The final chapter of the book is a listing of all the refactoring techniques used throughout the rest of the book, so if you have a particular dependency-breaking technique in mind, you can skip straight to its description. This can be quite helpful when you need to perform a refactoring before you can get your code into a test harness. The descriptions are straightforward, and each ends with a little checklist that helps you make sure you didn't miss anything.

In conclusion I would definitely recommend this book to a colleague who is trying to introduce unit tests into code that was not designed with testing in mind. In fact I have already lent the book to several people at work, most of whom have bought their own copy.

You can purchase Working Effectively with Legacy Code from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
  • Not needed (Score:5, Funny)

    by eln ( 21727 ) on Monday September 29, 2008 @11:23AM (#25194819)

    This book is a waste of paper. Everyone knows the proper way to deal with legacy code:

    1.) Spend 2 weeks looking at code you don't understand.
    2.) Loudly complain about the poor quality of the code, particularly algorithms that you don't understand.
    3.) Make derogatory comments about the previous developers. Be sure to paint them as monosyllabic imbeciles who probably got dropped on their heads multiple times as children.
    4.) Make minor changes to the code. If they blow up in your face, blame the previous developers for their poor grasp of basic programming practices. Make references to the previous programmers' relationship with their mothers.
    5.) Delete the whole thing and start from scratch.
    6.) 18 months of fumbling around later, realize that the previous code may have been better than you gave it credit for.
    7.) Deny this.
    8.) Release cobbled-together mess that lacks half the features of the previous codebase and features twice the bugs.
    9.) Get job elsewhere.
    10.) Company hires new programmer who starts the process over at step 1.

    • Re:Not needed (Score:5, Insightful)

      by jellomizer ( 103300 ) on Monday September 29, 2008 @11:25AM (#25194863)

You work at GE, right?

    • by blindd0t ( 855876 ) on Monday September 29, 2008 @11:38AM (#25195005)

      ...release it. ;-)

      • Re: (Score:3, Funny)

        Damn it! I told you not to go around giving out our release process! That's company-proprietary information! *throws chair* You're fired! I'm gonna fscking KILL blindd0t!

        -- Steve Ballmer

    • Re: (Score:3, Funny)

      by Fizzl ( 209397 )

Now that the statute of limitations has run its course, I can safely admit that this sounds very much like my first "professional" project. Too much responsibility for the inexperienced.

    • Re:Not needed (Score:5, Insightful)

      by jellomizer ( 103300 ) on Monday September 29, 2008 @11:49AM (#25195105)

      I got your sarcasm; however, I feel some people may miss it, so I will comment on these ideas as if you were serious, since this is actually more like real life than most people want to admit.

      1. Trying to analyze the code is a lot of extra work that isn't needed: you can take it for granted that it works correctly, and just focus on what doesn't. For most apps, a quick search for the button or menu item that is causing the problem will let you trace your way to the module and the area that needs to be fixed.

      2., 3. and part of 4. Remember, people are people. I bet even you write some bad code from time to time. Either you are really tired, or under a deadline, or working around another bug that may have long since been fixed, or making the code so optimized that it is unmaintainable. We all do it; most of us won't admit it. So remember that when you are about to criticize someone else's code. You also need to judge code quality against the time it was written. Look at code from the 1980s: it was usually written by people without Computer Science degrees, so there will be Gotos and the like.

      4. Making minor changes when possible is a good method. However, if it blows up, then you will need to make a larger change... For legacy apps, your job isn't so much fixing a bug as adjusting the computer to account for a change in the users' process. The process may have worked for 20 years, and it worked. But when it changes, sometimes you can get away with a little tweak, and sometimes it requires more.

      5., 6. Starting from scratch could kill the company, or be way too expensive. The system has been working for 20 years and just needs some minor tweaks. Yes, maintaining it takes a bit more work than before, but a rewrite could cost millions (not just programming time, but change management, training, research, bug fixes, missed areas...) vs. paying some guys $100k a year (taking decades to recover the cost of the initial effort).

      7. If you admit to the failure, you may be able to get the legacy system back up and running, and you may still have a job, though you may have some angry bosses for a while. When you make a mistake, it is better to admit it than to go down the path of destruction.

      8. If you did go through the full rewrite process, you should have put more effort into speccing it out, giving a clearer quote, and accounting for bugs to be fixed.

      9. If you messed up too much, sometimes getting a new job is not that easy. Your reputation can spread.

      10. Managers should have learned from the mistake and not allowed the new developer to do the same things.

      • Re: (Score:3, Funny)

        by dcollins ( 135727 )

        "Look at code in the 1980's it was usually written by people with out Computer Science Degrees, so there will be Goto and the like."

        Look at my code from yesterday, it was written by a person without a Computer Science degree. There was a Goto and the like.

        • Gotos aren't always bad.

          • Probably 10% of the ugliest code I read is mangled *precisely because* someone went out of their way to avoid using a goto when it really was the appropriate control structure. They couldn't even tell you why they did it, just that they heard somewhere to avoid goto. They probably think that it's a performance hazard or something.

          • by mrjb ( 547783 )
            Gotos aren't always bad. Of course they aren't. But beware the velociraptors.
    • Re:Not needed (Score:5, Insightful)

      by Greyfox ( 87712 ) on Monday September 29, 2008 @11:50AM (#25195111) Homepage Journal
      Good management usually reacts to calls to rewrite with skepticism. Usually (but not always) this is a good thing.
    • 11.) ???
      12.) Cthulhu

  • Whatever (Score:3, Informative)

    by Anonymous Coward on Monday September 29, 2008 @11:24AM (#25194857)
    Buy Martin Fowler's Refactoring [amazon.com] instead.
    • by Anonymous Coward

      Did you just refactor his review? BRILLIANT!

    • Re: (Score:3, Insightful)

      by CharlieG ( 34950 )

      No - buy BOTH books; they really do complement each other

    • Re: (Score:2, Insightful)

      by regeb ( 157518 )

      Buy Martin Fowler's Refactoring [amazon.com] instead.

      Remember that in Feathers' book, "legacy" means code without unit tests.

      Refactoring starts from the assumption that unit tests are in place. The challenge with legacy code is that very often its current structure makes it impossible to write unit tests for it. This book is about techniques for safely transforming untestable code into a form that is testable.

      Only after that come actual unit tests, and after that, refactoring.

      All in all, the two books are complementary.

  • Put all your changes in "int main()", use obscure variable names like xspatyc05 or funct123, always use static buffer sizes for any IO operations and under no circumstances should you add comments, it's a waste of time and no one besides you is ever going to have to understand it anyway.

    - I <3 Legacy code
  • Not exactly... (Score:5, Insightful)

    by Kindaian ( 577374 ) on Monday September 29, 2008 @11:27AM (#25194893) Homepage

    Simply passing all the tests doesn't necessarily mean that you didn't break anything.

    It means only that you passed the tests.

    If the tests don't cover ALL the business issues that the piece of software is supposed to solve, then you can pass the tests and still have no clue whether you broke something.

    The best approach is to evaluate the current test procedures and check whether they provide enough coverage for at least all the user-facing actions and all the automated actions.

    Only once you know that your testing procedures are sound can you have that assurance... ;)

      Simply passing all the tests doesn't necessarily mean that you didn't break anything.

      It means only that you passed the tests.

      If the tests don't cover ALL the business issues that the piece of software is supposed to solve, then you can pass the tests and still have no clue whether you broke something.

      Hear, hear.

      Tests should also be developed at various levels of implementation. Both unit tests and integration tests are necessary.

      I've worked with code that was fairly well covered by integration tests but had little to no unit tests (due to the various modules being overly tightly coupled). It's very, very difficult to reproduce issues in that case.

      At the same time, I also feel that the term legacy should probably be used a bit more literally.

    • Re: (Score:3, Informative)

      by plover ( 150551 ) *

      If the tests don't cover ALL the business issues that the piece of software is supposed to solve, then you can pass the tests and still have no clue whether you broke something.

      That's not true at all. You have a lot of clues based on the tests that passed. You'll have confidence in the code that passed the tests; overall, it's less to worry about. It's certainly of more benefit than no tests. And if tests fail, you'll have 100% confidence that you broke something, and you can get it fixed.

      Consider that the end goal of unit tests would be 100% assurance that all code paths are covered and that all behavior is tested. Not that it's realistic in many places, but that is th

    • by k.a.f. ( 168896 )

      Simply passing all the tests doesn't necessarily mean that you didn't break anything.

      It means only that you passed the tests.

      If the tests don't cover ALL the business issues that the piece of software is supposed to solve, then you can pass the tests and still have no clue whether you broke something.

      Of course it doesn't prove that, but remember that this is fundamentally impossible to prove anyway. "Testing can demonstrate the presence of errors, but not their absence" (E. W. Dijkstra). Your tests can never be "enough" in that sense. But a code change that re-satisfies the existing test suite is at least much more likely not to break the system than one that doesn't.

  • by cbowland ( 205263 ) on Monday September 29, 2008 @11:37AM (#25194985)

    A legacy system is anything that is in production RIGHT NOW. My coding philosophy has always been "building tomorrow's legacy systems today."

    • by Kjella ( 173770 )

      A legacy system is anything that is in production RIGHT NOW.

      Wait, so an application goes right from beta to legacy? (Or worse still, it's beta and legacy at the same time?)

      • A legacy system is anything that is in production RIGHT NOW.

        Wait, so an application goes right from beta to legacy? (Or worse still, it's beta and legacy at the same time?)

        Sounds like gmail.

    • Close, but not quite. Legacy software is software which has entered into the "Long-Term Support" phase of development. In other words, software which is no longer targeted at new installations, but which remains in use and must be supported and maintained.

      Software generally enters the "legacy" phase once a newer version of it has been released for general use.

  • by drDugan ( 219551 ) on Monday September 29, 2008 @11:49AM (#25195103) Homepage

    I push back on this mentality each time I see it from the agile crowd: (FTA/review)

    "When good unit tests are in place, then code can be changed at will and the tests will tell automatically you if you broke anything."

    No. (Testing FTW and all, but let's get real.)

    Tests are *helpful*. Multi-user development beyond 2 people accelerates with good tests. Long-term maintenance is easier with tests. Changes happen faster and are more robust with good tests. However, tests are extremely difficult to write well, and it is almost impossible to write tests that cover all the possibilities for future changes while also telling future programmers automatically when something doesn't work. I think the best one could say is this:

    When a comprehensive set of great unit tests is in place, then code can be changed at will and the tests will help the programmer understand if they broke anything. Tests will often tell you automatically about things that are obvious and would usually be caught by the most basic release testing. The art of writing good tests is understanding the subtle points of how your code functions and the pitfalls future developers may trip over when they extend what you did.

    • Re: (Score:2, Insightful)

      by Precipitous ( 586992 )

      Your argument seems plausible until you have actually seen the difference in products developed with test-driven / unit-test-first approaches. The benefits are not what you think they are. I would agree that unit tests are not a panacea, but disagree on:

      1) They are essential, not just helpful, at least if you intend to produce software that works.
      2) Unit tests do not need to be comprehensive to be useful. They don't need to be great, but great helps.

      I tend to agree with you on 1) unit tests are not integrat

      • Re: (Score:3, Insightful)

        1) They are essential, not just helpful, at least if you intend to produce software that works.

        That clearly isn't true. Arguably the most robust software in the world is produced using the Cleanroom approach, which is almost the antithesis of TDD. Of course the typical constraints for the kind of development project that uses Cleanroom are rather extreme, but that doesn't make them any less valid a counter-example.

        I tend to agree with you on [...] 2) No TDD or agile expert would include the word "anything" in this statement: "When good unit tests are in place, then code can be changed at will and the tests will automatically tell you if you broke anything"

        One of the problems I have with these "experts" is that while they may not actually say that, a lot of them certainly give that impression to people learning from them by not explicitly cor

        • You in fact made a perfect case for the argument that the code SHOULD have been developed test first, because then all the interdependencies you describe would not exist. Instead your application would be properly designed to be composed of a number of modules which operate independently of each other and only expose well defined interfaces which are ALL tested.

          It may well be that when a complex application is being developed you may find that there are several 'layers' of abstraction, and testing some of t

          • You in fact made a perfect case for the argument that the code SHOULD have been developed test first, because then all the interdependencies you describe would not exist.

            That's lovely, but what if the dependencies are implied by the requirements, rather than introduced artificially through insufficiently modular design? It doesn't matter what development process you follow, you still don't get to rewrite the requirements when they are inconvenient.

            • In my long experience of writing complex applications I have yet to encounter a situation where the requirements forced me to write code which wasn't modular. I am not at all sure what the conditions would be where that is true.

              I can only surmise that the model/metaphor was poorly chosen or not well mapped onto the requirements. This kind of thing is one of the big issues with strict 'top down' waterfall style development, and one of the reasons it is slowly but surely being abandoned. The expectation is th

              • I can only surmise that the model/metaphor was poorly chosen or not well mapped onto the requirements.

                Well, OK, perhaps you've never experienced it, but that doesn't mean it doesn't happen. What industries (in terms of application domains) have you worked in? The sort of thing I'm describing is not unusual when doing serious mathematical modelling and scientific data analysis, for example, where the underlying mathematics may be non-trivial, particularly once floating point and numerical analysis issues get into the picture.

                Of course you still try to make the code as modular as possible, but if your basic b

      • by jgrahn ( 181062 )

        You see a side benefit quickly: In order to get code into true unit tests, each module has to do something meaningful on its own.

        How is this a benefit? Normally, no one cares what the module does except its intended work.

        You avoid the massive stink of excessive interdependency that paralyzes much OO code. (Much of Feathers' book is about how to break those dependencies in order to test code).

        Excessive interdependency doesn't sound like much fun ... but neither does excessive independency. If the applicatio

      • Re: (Score:2, Informative)

        I agree with your points, but I think everyone here is missing a huge benefit of writing tests: by necessity it forces good design in and of itself, because it's near impossible to test anything with complex behavior. The result is that just to make the unit tests feasible, developers stop writing enormous monster classes without interfaces and start breaking things down into small units that do just one or two things with well-defined behaviors.

        You see a lot of people complaining that tests are "too

    • Re: (Score:3, Insightful)

      by Drogo007 ( 923906 )

      As a long time Tester (10+ years) and Programmer, I'm going to go one step further:

      Writing GOOD tests is HARD.

      First you have to think through the use cases, business logic, etc etc etc

      Then once you have the tests written, stop and think: who is going to test that the code you just wrote (the unit tests) is actually doing what you think it's doing?

      I write test code for a living, and test code still scares the crap out of me for the simple reason that there's no verification happening on the test code itself apa

    • by artg ( 24127 )
      Tests are, indeed, useful. But the idea of using tests as anything other than confirmation of function would be seen as belonging to the dark ages in any other branch of engineering. The fact that it's seen as a new concept in software is illuminating.

      In civil engineering, a failed test means major rework. Tests are performed either as research or as a confirmation of safety. In the latter case, they are not expected to fail.

      In production engineering, tests were once performed to filter the good builds from the
      • Writing software is akin to designing a process. It is 100% design; the implementation only occurs after the software has been written and released, when it runs in a production environment. Unit testing is similar to building models or prototypes of a component or subsystem to verify that things fit together as intended. Integration testing is like building a full-scale prototype to study its overall characteristics, with the results fed back into the design process.

        Only when the design is complete -- incl

      Software is not like hardware. Software is much more plastic, and it is very often not obvious what the best design is. TDD allows you to build your components one small step at a time; this is the heart and soul of agile development. It also has to integrate with other principles, like having a good model/metaphor. If the model is well thought out, then small low-level code modules can be developed to adhere to a fairly simple and straightforward set of tests. This feeds into the concept of short iterations

      • In production engineering, tests were once performed to filter the good builds from the bad. This was far too expensive, and largely explains the success of Japanese industry over British and American in the 70s and 80s.

        No, what explains the success of the Japanese was process improvement - the Deming management method made their stuff into works of art and reliable to the point where testing was redundant.

        Reworking production because of errors costs a huge amount of effort by skilled personnel

        Which is rather different from software development; reworking prod can mean recoding some stuff and compiling, depending on when it's found.

        We like to think that a good software engineer designs and implements correctly and gets it 'right first time' and then the software can be duplicated indefinitely. In exceptional cases, this is true. In many cases, especially where a team is involved or the unit has to work in the context of other software, it isn't : the situation is much more like that of an assembly line.

        A good software engineer does get it right the first time, provided that the specs are clear (they usually aren't). Software isn't really an assembly line; it's more of a conversation, and getti

  • by wandazulu ( 265281 ) on Monday September 29, 2008 @11:54AM (#25195165)

    I have some legacy code that straight-up doesn't work; it makes references to non-existent proprietary libraries, uses classes that aren't defined anywhere, and just to make things more interesting, a lot of methods with a lot of code, and variables carefully instantiated, that are never used.

    This is what is checked into source control; there is a binary that does, in fact, work, based on this code (or some better flavor of it).

    What to do then? There are some pretty involved financial algorithms in there that were designed by a mathematician, and both the original developer and the mathematician have long since left the building. Yet here I am, with a bug report that one of the models is wrong, and absolutely no way to fix it.

    An earlier comment suggested that the "real" way was to decry the original author's skills, parentage, etc., and just rewrite. Frankly, that seems to be my only option at this point.

    • by Surt ( 22457 )

      Carefully instantiated variables that are never used are often changing global state. Having the variable makes debugging easier in some contexts. Not to say this is the explanation for your situation, but it is a reason you'll see it sometimes.

    • by j-pimp ( 177072 )

      I have some legacy code that straight-up doesn't work; it makes references to non-existent proprietary libraries, uses classes that aren't defined anywhere, and just to make things more interesting, a lot of methods with a lot of code, and variables carefully instantiated, that are never used.

      This is what is checked into source control; there is a binary that does, in fact, work, based on this code (or some better flavor of it).

      Look into decompiling. Depending on the source language, this might be feasible. If not, disassembling and linking the code in might work in a pinch, but eventually you will need to get the code to compile or rewrite it.

      Suing the original developers might be an option; this is flat-out negligence. By threatening to do so, you might make those developers willing to "find" the missing source code.

        > Suing the original developers might be an option.

        Not if they were employees of the company now trying to use the code. In the US, an employer's only recourse against an incompetent employee is to fire him.

        > This is flat-out negligence.

        Yes, the company was negligent in allowing this to happen. Perhaps some managers need to be fired.

        > By threatening to do so, you might make those developers willing to "find" the missing
        > source code.

        I saw nothing in the article about deliberate malicious sabotage. That's

    • I have some legacy code that straight-up doesn't work; it makes references to non-existent proprietary libraries, uses classes that aren't defined anywhere, and just to make things more interesting, a lot of methods with a lot of code, and variables carefully instantiated, that are never used.
      This is what is checked into source control; there is a binary that does, in fact, work, based on this code (or some better flavor of it).

      Looks like you need to get some code that works to start with - the decompiler might be a good approach.

      There is some pretty involved financial algorithms in there that were designed by a mathematician and both the original developer and the mathematician have long since left the building. Yet, here I am, with a bug report that one of the models is wrong, and have absolutely no way to fix it.

      Tell your boss about the problem - it's going to take longer than it should for a bugfix and you want him on board to set expectations.

      Yet here I am, with a bug report that one of the models is wrong, and absolutely no way to fix it.

      Have you looked in the code to see if you can find where this math is? If so, you might want to verify that there is, in fact, a bug in the code, if you haven't done so already. I often treat new bugs found in old code with a grain of salt, since the code has been around for a while, which means its behavior is more of a known quantity. If this model is not a rarely used feature, and the software has been around

    >> then code can be changed at will and the tests will automatically tell you if you broke anything

    *Old legacy dev's pessimistic evil smile*
    All right, young man, now please refer me to a book where they have a way to write tests that automatically correct my errors while I drink coffee!
    That'd be something. Return when you've found one.

  • by dwheeler ( 321049 ) on Monday September 29, 2008 @11:55AM (#25195171) Homepage Journal
    I have this bumper sticker posted on my office wall: "Building the Legacy Systems of Tomorrow [blogs.com]". I'm not sure who created that phrasing, or the bumper sticker, but I like it.

    In short: if it runs, it's a legacy system.
  • Feathers' definition is 'code without tests.'

    Funny, my definition of legacy code is "code without documentation". If I have documentation for what the code is supposed to do, I can write tests myself. If I don't have documentation, tests won't save me.

    • Re: (Score:3, Insightful)

      by Surt ( 22457 )

      The really nice thing to have is where the tests and the documentation are unified, making it impossible for them to diverge. That's what we've built our process around, and it works amazingly well.
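
      (One well-known way to get that property, for what it's worth, is along the lines of Python's doctest, where the examples in the documentation are executed as the tests. The parent doesn't say this is what their process uses, so this is just an illustration of the idea:)

          def parse_version(s):
              """Split a dotted version string into integer parts.

              >>> parse_version("1.2.3")
              (1, 2, 3)
              >>> parse_version("10.0")
              (10, 0)
              """
              return tuple(int(part) for part in s.split("."))

          if __name__ == "__main__":
              import doctest
              doctest.testmod()  # fails if the docs and the code diverge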

  • encapsulation (Score:5, Informative)

    by Dan667 ( 564390 ) on Monday September 29, 2008 @12:07PM (#25195279)
    The most successful strategy I have had for legacy code that I have inherited is encapsulation of the old code into a new framework. I first attempt to build a black-box wrapper with an API for whatever the legacy code did (wrap 5000-line loops, etc.). Then, as I can or need to change it, I take the black box and break it into proper libraries or readable functions (or start over). I have been able to do this for some really large code bases and keep a working system while I refactored the mess a little at a time.
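
    In rough outline, the wrapping idea looks like this (a sketch with invented names; the little loop stands in for one of those 5000-line monsters):

        # Untouched legacy code: opaque, but known to work.
        def legacy_payroll_loop(raw_records):
            out = []
            for rec in raw_records:
                out.append({"id": rec[0], "pay": rec[1] * rec[2]})
            return out

        class PayrollEngine:
            """Black-box wrapper: new code talks only to this API."""

            def compute_pay(self, records):
                # For now, delegate straight to the legacy routine. As
                # pieces are understood and tested, they migrate out of
                # the legacy loop into proper methods here, one at a time.
                return legacy_payroll_loop(records)

        print(PayrollEngine().compute_pay([("emp1", 40, 15.0)]))
        # -> [{'id': 'emp1', 'pay': 600.0}]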
  • "Say, Ivan -- does your code have tests?" "Nope, it's Legacy Code". "Is it debugged?" "Nope -- legacy." "Does it work?" "Look, I already told you: IT'S LEGACY CODE. GET OFF MY BACK."
  • When good unit tests are in place, then code can be changed at will and the tests will automatically tell you if you broke anything.

    Away, vile Panacea!

    Keep thy sticky tentacles off management's soft and pliable brain!

    Ye shall not destroy another project schedule with your false promises and soul-sucking stupidity!

    Begone, wretched creature!

    Live out your days on the decaying pulp of so many piles of wasted trees and the scraps tossed to you by management consultants!

  • by natoochtoniket ( 763630 ) on Monday September 29, 2008 @12:45PM (#25195729)

    Testing cannot detect errors with probability significantly greater than zero, unless the system under test is trivially small. For a system that has N interacting features, the number of test cases that are needed to "cover" all combinations of features is O(2^N). And, that is assuming the simplest possible features that are either used or not used in each case. If any features have complicated (more than one bit) inputs, the base of that exponential complexity function increases.

    While tests are helpful to detect implementation errors, test sets cannot be complete for nontrivial systems. And because testing cannot be complete, it can never provide sufficient verification. That is a basic fallacy of test-driven development, and of a-posteriori testing generally.

    The least-cost way to prevent bugs that will be noticed by users is to avoid making them in the first place. Requirements and designs can be documented, checked, reviewed, communicated, and (most importantly) read and referenced during subsequent phases and iterations of the development process. Test plans and test scripts can be part of that process, but cannot replace the requirements and design phases.

    Cost-driven managers don't like to hear that, though, because they think testing is cheap. Non-automated testing can often be done by cheap and easily-replaced labor. And automated testing is essentially free after the test software itself is developed and verified. (Notice, though, that developing the tests also involves requirements and designs, and increases the total amount of software that must be developed.)

    So, the least cost development process involves some reasonable amount of testing, but also involves requirements and designs, and reviews at every step. The only way to defeat the combinatorial explosion is by applying heavy doses of "thinking" and "understanding". Nothing else works as well.
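
    To put numbers on the combinatorial point (a toy illustration, not from the parent comment):

        # Exhaustively covering N independent on/off features takes 2**N cases.
        for n in (10, 20, 30, 40):
            print(f"{n} features -> {2 ** n:,} combinations")

        # Even burning through a million cases per second, 40 features
        # needs about 12.7 days:
        print(2 ** 40 / 1_000_000 / 86_400, "days at 1,000,000 cases/sec")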

    • Re: (Score:3, Insightful)

      by hondo77 ( 324058 )

      And because testing cannot be complete, it can never provide sufficient verification.

      That's like saying that seat belts can't save your life in every car accident so they're not worth wearing at all. Unit testing is but one tool in a developer's toolbox. It is not an all-encompassing solution to all of a project's ills.

      • That's like saying that seat belts can't save your life in every car accident so they're not worth wearing at all.

        No, it's like saying that because seat belts won't save your life in every car accident, it's pretty dumb to drive around as if having an accident won't matter just because you're wearing a seat belt.

  • by TheGeneration ( 228855 ) on Monday September 29, 2008 @12:49PM (#25195755) Journal

    Not once, EVER, have I worked on code where the unit tests broke because of a bug or a mistake. Instead, the unit tests break because the new code has something the test didn't anticipate. This is especially the case with EasyMock and TestNG.

    Maybe it's different if you're working in a system with complex interdependence patterns, but generally it's a waste of time and money and just a management-level masturbatory exercise foisted on engineering.
    ("I'm a super CTO of Cisco Systems! I'm going to force unit testing across the board, even where it doesn't make sense! I'm going to be super ISO certified and John Chambers is going to lick my balls after his retirement when I become CEO!")

  • When good unit tests are in place, then code can be changed at will and the tests will automatically tell you if you broke anything.

    Unless there is a bug in the unit test code, or some condition the unit test designer didn't anticipate... oh wait, you said "good unit tests" -- have any of these actually been observed in the wild? I have yet to see a unit test simulate what my 7-year-old daughter does best: clicking wildly all over the place until something crashes.
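
    (For what it's worth, her technique does exist in automated form as the "monkey test": throw random event sequences at the code until something blows up, keeping the seed so the crash reproduces. A crude sketch, with an invented widget and an invented bug:)

        import random

        def widget_under_test(events):
            # Invented example bug: dies on three rapid clicks in a row.
            streak = 0
            for e in events:
                streak = streak + 1 if e == "click" else 0
                if streak >= 3:
                    raise RuntimeError("crash in click handler")

        for seed in range(1000):
            rng = random.Random(seed)
            events = [rng.choice(["click", "move", "drag"]) for _ in range(50)]
            try:
                widget_under_test(events)
            except RuntimeError as err:
                print(f"reproducible crash with seed {seed}: {err}")
                break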

  • Legacy Code (Score:4, Informative)

    by Orion Blastar ( 457579 ) <orionblastar@@@gmail...com> on Monday September 29, 2008 @01:04PM (#25195939) Homepage Journal

    I have had good luck with legacy code; here is what I do:

    #1 Figure out what the code does, document it with comments, and write a document on it.

    #2 Identify variables and objects, what they are used for, and any naming convention the code may use.

    #3 Stick to the original style of writing, or rewrite parts of it in your own style if that gives a performance boost or makes the code more stable.

    #4 Try to find programming errors and things that do not make sense, and rewrite them so that they make sense. Do error trapping: check for nulls, for letters entered into number fields, and for all the other sorts of things most legacy programmers overlook. (A small sketch of this kind of check appears after this list.)

    #5 Work to make the code stable, crash-free, and faster before you start adding new features. Users don't want to wait 15 minutes for a report and then have the program crash after the wait.

    #6 Work with the help desk to identify the most serious problems users complain about in the legacy code. Make a "wish list" and fix each complaint as you have time.

    #7 Get direction from your managers: tell them what you are trying to do and any problems you have. You need to work as a team with other developers, the help desk, managers, and users to work out the issues with legacy code. Explain when you need more time and cannot make the schedule they gave you. Make a deal with them to release a stable version that lacks features that might take more time than they thought. Tell the users you had to leave out those features to meet a deadline, or ask them if they want to wait until you figure out how to add them.

    #8 Play Sherlock Holmes: read books or web sites on the language and technology used in the legacy code. Search knowledge bases, blogs, and forums for answers; sometimes someone else has already figured out what you are trying to solve. If not, ask on a forum or blog or web site and see who answers. Many of my answers came from the Internet, but management didn't understand why I spent so much time online. It was because they wouldn't buy me the books I needed, and I had no documentation to work with, just pure code with no comments and serious problems. Sometimes I spent 5 hours a day researching on the Internet and 3 or 4 hours coding, but in doing so I saved months of work; management didn't understand that each web site I visited was work related, and that I studied sample code, even HTML, to get ideas on how to solve the legacy code problems. Sometimes you have to call a vendor's help desk to get answers as well; they docked me for long-distance calls to Canada, where Crystal Reports and Seagate/Business Objects had their headquarters. Fixing Crystal Reports errors could mean 5 hours a day on the Internet just to figure out what caused double lines in a report and why only certain users got them.

    #9 When in doubt, ask for help. Sometimes another pair of eyes can spot errors and mistakes that you cannot see. Diversity is a good thing among team members; form a dream team of programmers of different backgrounds for best results.

    #10 When in danger, when in doubt, don't run in circles and scream and shout. Take a walk, get something to drink, and relax. Take a mental health break instead of getting angry at other people for not helping you or not doing their jobs properly; they might be suffering from stress like you are and you don't know it. Be positive, not negative.
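
    A small illustration of the kind of defensive check #4 describes (the field name is invented):

        # Reject nulls and non-numeric input up front, so the user gets a
        # clear message instead of a crash deep inside the report code.
        def parse_quantity(raw):
            if raw is None:
                raise ValueError("quantity is missing")
            raw = raw.strip()
            if not raw.isdigit():
                raise ValueError(f"quantity must be a whole number, got {raw!r}")
            return int(raw)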

    • Sometimes you have to call a vendor's help desk to get answers as well; they docked me for long-distance calls to Canada, where Crystal Reports and Seagate/Business Objects had their headquarters.

      You continued to work there after they docked you (presumably your pay) for doing your job? If you had any flexibility, you should have quit on the spot and sued them for the money they owed you.

        Yeah, I guess I could have, but it is all in the past. I forgive them for doing that to me; they did it out of ignorance, because they didn't understand what I was doing for them or how valuable it made me.

        They suffered for firing me and could not replace me with someone who could do the job. I am writing these hints, directions, and habits to help other programmers be like I once was.

        I am getting my marbles back and will become a great programmer again. Maybe they will see I didn't sue them and hire m

  • Legacy Code (Score:3, Insightful)

    by devnullkac ( 223246 ) on Monday September 29, 2008 @01:13PM (#25196045) Homepage

    Feathers' definition is 'code without tests.'

    I'll do you one better: Legacy code is anything developed under a different process than you're using now. If all you'll ever do is TDD, then Feathers' definition is fine. But if, like me, you've seen a dozen major development philosophies come and go and be refined over the years, you know that TDD will eventually be supplanted. The only thing that remains constant in the recognition of difficult maintenance is this: "We didn't plan to maintain it the way we're maintaining it now."

  • ...is anything you don't like.

    I've seen the term thrown around by VB programmers trying to make sense of COBOL or Fortran code. Or IT departments that were going 100% Windows using the term in reference to anything other than a Microsoft product.

    To be accurate, it should refer to code (or anything else) developed under some design and maintenance methodology or process other than the one currently used. That doesn't mean it is bad, old, or untested. In fact, it might be better than the crap you write today.

    I used t

  • Great book (Score:4, Insightful)

    by IcyHando'Death ( 239387 ) on Monday September 29, 2008 @01:38PM (#25196303)

    I don't know how many of those leaving their pessimistic comments here have actually read this book, but I have. It's actually been on my to-do list to write a book review for Slashdot myself. Long overdue, I thought, given that the book was published in 2005. Now I'm sorry I didn't get around to it, because I think this reviewer, though positive about the book, considerably undersells it.

    To those of us stuck doing active development on old, ugly code, every day can feel like we are slogging deeper and deeper into a swamp. Each time we hack in a new change, it makes us feel unclean. We are ashamed of the ugliness of the patch work we are adding to. We know programming used to be fun, but only rarely do we feel the echoes of that now. Mostly we feel dejected. And we've lost our motivation because we are not putting out code we are proud of.

    If any of that rings a bell with you then grab Michael Feathers' book the next chance you get. A previous poster said something like "get Martin Fowler's Refactoring book instead", but he's entirely wrong. Not that it isn't a great book, but it won't save you. I've known about refactoring for years without being able to put any of it into practice. The prerequisite to aggressive refactoring is a good set of automated tests, and my projects have not only had no tests, but have seemed down-right untestable.

    WELC is your map out of the swamp. And it's a map drawn by someone who has clearly spent a lot of time guiding others out. Feathers knows how tangled your code base is. He knows it doesn't have useful documentation or comments. He knows you are under time pressure but afraid to break functionality you don't even know about. He has seen it all, and he knows how discouraging and hopeless it looks. But he knows the way out, and he'll patiently and calmly guide you as you break your first dependency, get your first class into a test harness, or write your first test case. And before you know it, you are standing on a little patch of solid ground.

    Take my advice. Get this book, read it, and put it into practice. It can change your (work) life!

  • by goose-incarnated ( 1145029 ) on Monday September 29, 2008 @02:07PM (#25196599) Journal

    "... When good unit tests are in place, then code can be changed at will and the tests will tell automatically you if you broke anything."

    Wrong.

    • Agreed. The tests will tell you if you broke anything they test in a way they test for. In any real-world application, though, you end up with untested and untestable interactions between components. Worse, you end up with errors that aren't bugs in the implementation, they're bugs in the component specification. The tests pass because the component's doing what it's supposed to do, but that causes the entire system to fail when you get to integration testing because what it's supposed to do isn't what it n

      • The tests will tell you if you broke anything they test in a way they test for.

        You're still going too far. The tests will tell you if you broke anything they test in a way they correctly test for.

        You can be confident in the knowledge that the tests do indeed test the changes you're trying to make, and discover (after much hair-pulling) that your changes now tickle an obscure bug in the tests. This may result in the tests claiming your code is broken when it's not (the good case) or that your code works properly when it doesn't (the evil case).

        Of course, we were supposed to assume th

  • by Uzik2 ( 679490 )

    "in the TDD world, tests are what make code easy to maintain. When good unit tests are in place, then code can be changed at will and the tests will tell automatically you if you broke anything."

    Isn't this a rather ambitious claim? I've seen many systems with lots of tests with bugs not caught.
