Practical Software Requirements

Jason Bennett has returned after a long hiatus, bringing with him a review of Benjamin L. Kovitz's Practical Software Requirements. Jason's theme has been software engineering, and this review does not disappoint, focusing on what you actually need in order to get the job done.
Practical Software Requirements
author: Benjamin L. Kovitz
pages: 426
publisher: Manning
rating: 9/10
reviewer: Jason Bennett
ISBN: 1884777597
summary: A different perspective on how to gather requirements

Background

Greetings, all. I apologize for my long review layoff, but between June and now, I've managed to acquire a job where I actually have something better to do than write book reviews! Bonus! :-) Regardless, in the course of designing a new system for my company, I needed to write a good requirements document. I thought I knew how to do this, and set about creating the most anal, unreadable piece of technical gobbledygook you've ever seen. Needless to say, this document didn't fly. In desperation, I fished around Amazon.com for some decent requirements books. I already had the IEEE specification for requirements, but I didn't have a book that explained how to use the specification (this should tell you something). My search finally turned up this work, which had some good reviews posted, and the rest is history....

What's the book about?

Kovitz presents a very different view of requirements engineering from the prevailing one. Most approaches to requirements focus on breaking a problem into parts and then using those parts to fill in predefined sections of a requirements template. Kovitz disagrees with this approach and offers his own. In short, he says that decomposing a problem correctly is hard, if not impossible, without some idea of what the solution is before you begin. Thus, when presented with a problem, you should try to find out what type of problem it is, so that you can relate it to ones already solved. From here, Kovitz describes how best to frame problems, and gives some sample problem frames.

Of course, once a problem has been placed in its proper frame, it still must be broken out into its constituent parts so that the requirements can be enumerated. Kovitz describes a process for doing this: the denizens of the problem domain ("sets") are enumerated, along with their individual attributes and relationships, and together they make up the description of the problem domain. The requirements, then, are the effects the machine is to produce on the previously described problem domain. Kovitz also advocates a separate interface document, which describes how the machine interacts with the problem domain. The book then ends with a series of chapters on style and structure, and a comprehensive example.
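To make the "sets, attributes, and relationships" idea a bit more concrete, here is a minimal sketch of how a fragment of such a problem-domain description might be written down. The names (Bug, Tester, BugLog) are my own illustration, loosely inspired by the bug-log example the book works through in Part IV; this is not Kovitz's actual notation, just one way to picture it:

    # Problem-domain "sets", their attributes, and their relationships
    # (hypothetical names, for illustration only).
    from dataclasses import dataclass, field
    from enum import Enum
    from typing import List

    class Status(Enum):
        OPEN = "open"
        FIXED = "fixed"
        CLOSED = "closed"

    @dataclass
    class Tester:
        name: str

    @dataclass
    class Bug:
        summary: str
        reported_by: Tester            # relationship: each Bug is reported by one Tester
        status: Status = Status.OPEN   # attribute

    # A requirement, in this view, is an effect the machine must produce on the
    # problem domain, e.g. "every bug a tester reports ends up recorded in the log".
    @dataclass
    class BugLog:
        bugs: List[Bug] = field(default_factory=list)

        def record(self, bug: Bug) -> None:
            self.bugs.append(bug)

How the machine actually learns about testers and their bugs (web forms, e-mail, whatever) would belong in the separate interface document, not in the requirements themselves.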

What's Good?

Overall, I found the book to be excellent. Kovitz tries to break down the legalistic view of requirements engineering as a top-down, fill-in-the-blank exercise, and instead advocates a more flexible system whereby the design is kept separate, but the process is attuned to each project's distinct needs. This is a refreshing view, but one that still mandates the good software engineering practices that can get any project off to an excellent start. In addition, the author runs an online discussion forum where readers can ask questions and receive direct help from him. I found this to be an excellent resource, and the author is to be commended for such participation and dedication.

What's Bad?

I have to admit, I had a little trouble applying the ideas in the book. Specifically, I had difficulty deciding what exactly the "sets" in my problem domain were, and what their attributes were. It was simply an issue of translating the concepts to the reality of my project. In the end, I don't think my requirements document quite met the standard set in PSR, but it certainly benefited a great deal from the book.

So What's In It For Me?

I haven't been able to give a good software engineering lecture in a while, so I'll jump back on my high horse. All projects, especially open source ones, need the solid foundation that can only come from good requirements. If you don't know what you are going to build, you have no chance of building it. Open source projects especially need a paper trail for new participants, so that they can quickly come up to speed and understand the direction in which the project is headed. Artifacts such as requirements speed this process and give everyone a common framework to draw from. Kovitz's approach provides for readable, coherent documents that allow people to understand what domain they are working in, and what they are trying to accomplish.

Purchase this book at Amazon.

  • Table of Contents
  • Introduction
  • Acknowledgements
  • Author Online
  • Part I: Groundwork
    1. Problem Solving
    2. Problem Defining
    3. Two Worlds and Three Designs
    4. Problem Framing
    5. Five Problem Frames
    6. Multi-frame Problems
  • Part II: Content
    1. Software Development
    2. Two Documents
    3. Classes and Relations
    4. Sequences and Events
    5. Causation and Control
    6. Special Topics
  • Part III: Style
    1. Documentation
    2. Organization
    3. Small Details
  • Part IV: Examples
    1. Bug Log Requirements
    2. Bug Log User Interface
  • Glossary
  • Bibliography
  • Index
This discussion has been archived. No new comments can be posted.

  • I enjoy reading the reviews on slashdot quite a bit, but from time to time, I wonder why they are all positive. It's certainly not the case that all books that are read by slashdot readers are good.

    Sometimes newspapers and such will never write a negative review mostly because they were sent free books for review and promotion by the publisher and it's somewhat implicit that they'll get a good review. I'm pretty sure that that's not the case on slashdot, but why do all the books get good reviews? Is it the old adage of "If you don't have anything nice to say, don't say anything?"

    I'm not accusing anybody of anything. (Sometimes it's a bit weird how whenever you say anything on slashdot that could possibly be taken the wrong way, you have to qualify it with 1 dozen disclaimers) I'm just wondering about it out loud and wondering if anybody else has any comment on what seems to be the trend.

    I would personally like to see some high profile books that suck be mentioned as such, so I can avoid them.

  • I've written a review about a JavaScript book which, IMO, was substantially less than stellar. Wait'll you see the review I'm doing of this fictional book by Daniel Oran (here's a hint... if you're thinking of buying his book: DON'T!)

    I think the main thing is that there are so many good books out there that people want to highlight the ones worthy of acquisition, rather than waste their time slogging through (and afterwards painstakingly deconstructing) a steaming pile of dead tree barely fit for the recycling bin.

    Anyways, bad reviews are on the way, rest assured. And at the end of each one I write, I'm going to append: "I blame Hemos for this book. If he hadn't sent it to me, I wouldn't have felt obligated to wallow in its suckitude just to write a reasonable review." >B)

    --
    rickf@transpect.SPAM-B-GONE.net (remove the SPAM-B-GONE bit)

  • I can't speak for /. reviews, but I write occasional reviews for another site, and I find (so far) most of my reviews are positive. Why? Simply because I have a certain amount of discretion (I'm _not_ reading the new John Grisham without the threat of smoldering bamboo splinters) over what I review. So I tend to review books that I think I might actually like.

    It bothered me when it first occurred to me, but I finally decided I'd rather recommend good books than waste time slagging bad ones.

  • I wonder if the trend is a matter of a simple sampling issue. Specifically, there are LOTS of crummy books out there, so a good strategy is to list the exceptions (fewer of them), which means the good books. Also, if I were reading a crummy book, I'd bail, and (being ethically inclined, which I am) I'd have a hard time writing a critique, because I'd be writing it based on only a partial read. 'Course, if I'm paid, perhaps reading the whole thing is part of the deal?
  • The way to develop software is to find some open source code that does most of what you need, modify it to do the rest!

    The problem with any approach that focuses on requirements, design, and style is that it doesn't help those software development lifecycle stages where software spends most of its time: integration and maintenance.

    Another academic approach for rocket scientist wannabes! Nothing to do with practical programming.

    Chris
  • I think it's a matter of selection. For instance, I only buy books I think 'look' decent. Look at the chapters, look at the price, make sure it's from O'Reilly ;), etc. Then I buy it. Then I really read it. Some books unfortunately get through this screening process (esp. the fiction I read). Those are the books I really want to be warned about: the books that look good, but are really bad. At the same time, the reviewers here probably aren't getting many advance copies (if they are, then I'll have to volunteer as a reviewer). So they probably are using the same type of criteria as I do and don't buy many bad books.
    Or at least that's my theory.
    -cpd
  • The GNU ICQ-compatible Server [ncsu.edu]

    You can see their documents here [ncsu.edu]

  • Is there a GNU client version of icq too?
  • The problem with any approach that focuses on requirements, design, and style is that it doesn't help those software development lifecycle stages where software spends most of its time: integration and maintenance.

    But the whole point of Software Engineering is to make maintenance & integration easier!! Suppose you are handed a tarball full of Mozilla or Linux kernel source code. If you have never looked at it before, you would be lost. All of the documentation that SE processes generate is designed to make this easier. Requirements allow you to see what the software was meant to do.

    Granted, a lot of OSS projects start off scratching an itch (which is a very informal requirements doc), but a lot of the newer OSS projects (like Gnome) start off with a plan and goal, and the requirements give you something to compare against, to see if you are meeting that goal.

    UML diagrams, design documents, requirements all came around *because* it is so difficult to write huge projects without structure.

    And what if there is no code ready to modify? If you have to start from scratch, you'll be doing the OSS community a huge favour by providing good documents along with your source!

    Dana

  • Hemos gave a fiction book a 4/10 a while ago... I don't remember what the book was, but he apparently didn't like it.

    In my case, there are two things that come into play. First, I'm careful of the books I buy. Second, of those books I review, I try to decide the target audience and judge the book's effectiveness and appropriateness for that audience, even if I'm not normally someone who would read it.

    Snow Crash? A best of genre book. 9.5/10.

    Programming Web Graphics with Perl and GNU Software? A unique book with lots of interesting information, but limited to a particular audience. 8/10. (Most of the O'Reilly books I've read are 8 or better.)

    I'm reading a Java 2 beginners book right now, and it won't be getting a glowing review.

    --
    QDMerge [rmci.net] 0.4!
  • Actually, you're wrong.

    A proper design spec can help you with integration and maintenance. Heck, that's what it is there for. It speeds up your understanding of an application when you have to come back after a year to fix something. It also helps with OS programs, because it makes it easier for others to contribute when they have an overview of *how* the application works, and why something is done.

  • [DISCLAIMER: I also write Slashdot reviews.]

    It's very simple. These reviews are done on a volunteer basis. Therefore, the only books reviewed are the ones the reviewer has taken the time and effort to read, and then the time and effort to review.

    That tends to be books that are, in my case, excellent. I don't have time to read poor, marginal, or even "okay" books, because I have several (allegedly) excellent books on my shelf waiting to be read.

    That said, sometimes I hit a dud. Then I review it as such. I reviewed a game programming book on Slashdot at 5/10 because it was mostly crap.

    --
  • If you think requirements are academic, then you haven't been in industry.

    I've worked at corporations that neglected requirements documents, and those that required them. The difference is night and day.

    How do you know your software is correct without documented requirements you can check against? Perhaps that's not so important with OSS, where almost anyone will find a use for the software even if it isn't what was originally intended. But when your customer, and your future paychecks, depend on meeting a market need better than your competitors do, you can't afford to play fast and loose.

    Perhaps previous OSS projects were mostly little ones. They're not so bad to maintain without proper documentation, due to their simplicity. But many industry software systems are huge, with many features, legacy code, backwards compatibility, etc. You cannot just wade into the code and start hacking. It doesn't work like that.

    As OSS projects mature and get larger and more complex (XFree86, GNOME, etc.), they also require a more academic/industry approach. And if you check out their web sites, you'll find that they do. You'll also find that those that don't, quietly disappear without a ripple.

    --
  • Thanks Jason, for the informative review.

    I also started a new job this year, and had the pleasure of sitting down to write proper requirements documents for new software systems. This was something I did not have the opportunity to learn or practice at my previous job.

    We too found ourselves running out to our neighbourhood book store, to find good books on requirements documents. We ended up with info from McConnell books, and a requirements lexicon by Michael Jackson.

    Really, we knew the theory, but just hadn't had the practice. Now, after going through a round of requirements, revisions, etc., the process is a lot clearer. I'm a lot more comfortable with it.

    Another thing we found helpful is to write testing documents based on the requirements documents, before writing the design documents (see the sketch below).

    --
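    Here is the sketch mentioned above. It is purely my own illustration of "test document before design document": the requirement wording, class, and test names are hypothetical, not something from this thread or from Kovitz's book. The idea is just that writing the acceptance check against a requirement, before any design exists, forces the requirement to be concrete and testable:

        # Hypothetical requirement R3: "Closing a bug records who closed it and when."
        import datetime
        import unittest

        class FakeBugLog:
            """Throwaway stand-in; just enough behaviour to state the check."""
            def __init__(self):
                self.closed = {}

            def close(self, bug_id, closed_by):
                self.closed[bug_id] = (closed_by, datetime.datetime.now())

        class TestRequirementR3(unittest.TestCase):
            def test_close_records_who_and_when(self):
                log = FakeBugLog()
                log.close(42, "jason")
                who, when = log.closed[42]
                self.assertEqual(who, "jason")
                self.assertIsInstance(when, datetime.datetime)

        if __name__ == "__main__":
            unittest.main()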
  • Actually, you're wrong.

    I realize it seems logical that spending lots of time in requirements and design should benefit the final product. And, to cover your legal a** when designing medical, military, or flight systems you better waste lots of time writing requirements and design.

    But, this is software.

    The complexity/chaos makes extra work in requirements and design (especially CASE systems) meaningless.

    That's been proven over and over again for 20 years. If you're designing "hello whirrled", then extra emphasis in requirements and design can help.

    Once you're writing something useful, extra efforts in requirements and design can actually be disastrous.

    The problem is, most programmers don't want to do any real work: like you, they want to be rocket scientists. They want to design; they don't want to integrate or maintain, so they come up with silly excuses like yours.

    It takes years after college to unlearn this crap. When a programmer comes up to me and says "I'm 90% done", I realize he's done the requirements, the design, and the coding, and he's done some debugging.

    He's got the other 90% to go (and that's just the integration; not the other 400% maintenance that he's going to try to squirm out of).

    Chris
  • by Anonymous Coward
    We're waiting for Katz to publish. :-)

  • The other comments forgot an important point. Yes, a spec can help maintainers understand the architecture once it's been implemented, and that's very valuable, but it's more valuable to start with a clean, coherent, consistent, and adequate architecture. This is CS105/Fred Brooks[1] stuff, but nobody ever seems to believe it until they learn it the hard way. God knows I didn't. Some people never learn it at all.

    IMHO most programmers have no business initiating projects. Architecture is not the same as coding. It's a lot harder, and most of us just don't find it "fun" enough to do it well. It also requires a lot of painful experience to know how far to go, when to stop, what to leave vague, what to allow for, etc. "When to stop" is crucial.


    The way to develop software is to find some open source code that does most of what you need, modify it to do the rest!

    Yeah, but most really interesting projects have to start from scratch, and not all projects will benefit from repeating old mistakes ad infinitum. Furthermore, have you ever wondered where that magical free code came from? Somebody wrote it. It's not turtles all the way down. Somewhere, at some point, somebody started with an empty text file. If the code is worth reusing, odds are good that your mysterious benefactor spent some time thinking before s/he started hammering out for( ) loops. The alternative is to spend your days grafting a heterogeneous flock of kludges and patches onto an inadequate foundation.


    Another academic approach for rocket scientist wannabes! Nothing to do with practical programming.

    I'm left wondering if you've ever done any practical programming that was of any worth. If you want a good example of a program that grew by accretion rather than by design, try Windows. "Design" may seem like "eating your vegetables", but you can't blow it off if you're doing something non-trivial. IIRC there's been at least one major shake-up in the Linux kernel tree to correct for early short-sightedness.

    Programs don't exist. They're soap bubbles made out of zeroes and ones. In hardware engineering, the blueprint describes a thing that has an independent existence. With software, the code is the blueprint, and the blueprint is the thing. Software is the Word made Flesh. It exists entirely in the Platonic realm, so there is very little meaningful distinction to be made between "theory" and "practice".


    --------------------------------------
    [1] Read The Mythical Man-Month by Frederick P. Brooks, Jr. It does for software "engineering" and software project management what K'n'R does for coding per se.


    "Once a solution is found, a compatibility problem becomes indescribably boring because it has only... practical importance"
  • >But the whole point of Software Engineering is to make maintenance & integration easier!!

    Supposed to, but doesn't.

    >Suppose you are handed a tarball full of Mozilla or Linux kernel source code. If you have never looked at it before, you would be lost.

    The art of integration and maintenance requires a focus on the problem at hand, not wasting time understanding how the whole thing works.

    >UML diagrams, design documents, requirements all came around *because* it is so difficult to write huge projects without structure.

    Aiding interprogrammer communication within a project is a great goal; I'm skeptical of throwing more software at a software problem: usually you find the "cure" to be buggy enough, or irrelevant enough, to cause more time to be wasted.

    >And what if there is no code ready to modify?

    There's always a starting point available; it inflates your ego to think you're doing something unique, but don't be a prima donna!

    >If you have to start from scratch, you'll be doing to OSS community a huge favour by providing good documents along with your source!

    Comments are often irrelevant: they were placed there during the initial coding, and after rounds of integration hacking they will lead you astray. The code is the relevant information.

    Chris
  • I also found PSR well-written and very helpful. It described formally many practices which I had been trying to understand intuitively. It offers a very clear way of explaining what is expected of a piece of software. It also offers some guidance on eliciting requirements from users. In a similar vein, I can recommend the guidelines of the Extreme Programming group (http://c2.com/cgi/wiki?ExtremeProgrammingRoadmap).
  • >If you think requirements are academic, then you haven't been in industry.

    Programming in industry for 20 years (since college).

    >I've worked at corporations that neglected requirements documents, and those that required them. The difference is night and day.

    I used to live this delusion too. When you're old enough to look back, you'll realize that over emphasis on requirements and design costs more than it saves.

    >Perhaps previous OSS projects were mostly little ones.

    It's the other way around: small projects benefit more from over design. Large projects never get finished because of too much emphasis on requirements and design.

    I've got to stop replying to all of these now; I realize I've struck a nerve in the youthful programmers who need a conversion of consciousness. I'm afraid experience is the only thing that will help you realize the delusion ;)

    Chris

  • The art of integration and maintenance requires a focus on the problem at hand, not wasting time understanding how the whole thing works.

    Adding significant features absolutely requires an "understanding of how the whole thing works". For ordinary bug-fixing, it's a big help too. Nose-to-the-code maintainers who don't give a shit about the whole system are a menace. They end up creating something incoherent, contradictory, and ultimately unmaintainable.

    Your comments might apply to a consultant who shows up for six weeks and moves on, but the number of practicing programmers doing their jobs that way is in direct proportion to the amount of crap software out there.


    Comments are often irrelevant: they were placed there during the initial coding,

    That's true only if the maintainers have been grossly incompetent. On the other hand, most maintainers are exactly that . . .


    . . . and after rounds of integration hacking they will lead you astray. The code is the relevant information.

    In a really complex system, learning the architecture by grovelling over the code is crazy. You can do it, but it takes months and by the time you've got a clear picture, you've generated months worth of lousy code that you'll have to rip out and rewrite.


    There's always a starting point available; it inflates your ego to think you're doing something unique, but don't be a prima donna!

    "Always"? Did you prove this inductively or deductively? For all your rhetorical devotion to "practicality", you're really quite an idealist. Anyhow, free [speech] software is generally free of time and budget constraints, so you may as well do it the fun way and start with a clean slate. Proprietary software can't use GPL'd code anyway -- though unfortunately much of it is afflicted with crap like MFC, which is the worst of both worlds.


    "Once a solution is found, a compatibility problem becomes indescribably boring because it has only... practical importance"
  • What I've typically seen happen is:

    1) Company goes along producing software

    2) Company tries to do too much and winds up with feature-packed garbage that's eons late and getting later in real time.

    3) Company gets religion about engineering standards and practices and tries following the methodology de l'heure.

    4) Next release is just as bad, partly because everyone has been fumbling over methodology.

    5) Everyone forgets the good intentions altogether and goes back to the usual chaos

    Note that this doesn't strictly have to apply only to commercial software, but open source projects typically don't have the money to go out and buy the fancy stuff. Also, there aren't many open source project leaders who could force that kind of nonsense on unwilling engineers.

    As I read the review, though, this book isn't advocating that kind of approach; it sounded more like an iterative kind of thing that in practice is the only thing that can really work.
  • Did you ever notice that it seems that very few of the readers here have made a LIVING writing code? Writing code for free is GREAT, as

    1) A Hobby
    or
    2) When someone else is paying your cost of living

    When you get out in the real world, where you get paid to write code, you'll find out a few things

    1) Code has to meet a defined need
    2) Your boss usually wants to keep that code (that he paid for) to himself. Let's face it, that project probably took 4-5 man-years, and cost him (fully loaded costs) on the order of 1 million dollars. You don't GIVE that to your competition for free, or you go out of business, fast

  • I've worked at corporations that neglected requirements documents, and those that required them. The difference is night and day.

    I used to live this delusion too. When you're old enough to look back, you'll realize that over emphasis on requirements and design costs more than it saves.


    "Over emphasis" on anything is a mistake. Magical silver bullets won't do your job for you, and this industry certainly has suffered through plague after plague of the damn things. So what? Your arguments sound like those of opponents of structured programming (I had a bitter argument with my uncle about goto's last year -- it was like those stories about Japanese troops in caves in the Pacific who didn't know the war was over!) and the more recent opponents of OOP. Well, sure, you can draw a cartoon, reduce something to absurdity, and make it look silly. So? Meanwhile, somebody else takes the time to learn and understand it, and to get a grasp of what it offers and what its limitations are. OOP and structured programming have both turned out to be very valuable tools. They don't do your job for you, no, of course not. They're good damn tools, but like any tool, you have to learn how to use them, when to use them, and when they're irrelevant, inappropriate, or inadequate. A screwdriver is a good tool for driving screws. You can "prove" that it's worthless by trying to use it to cut wood, but you haven't proven anything that's of any interest.


    I realize I've struck a nerve in the youthful programmers who need a conversion of consciousness.

    It sounds to me more like somebody growling about how "these damn college kids think they're so smart". Well, at least you didn't tell us about coding uphill in the snow both ways . . . :)

    I've done large programs your way, and my way. My way has worked a lot better in practice. Basically, all we're saying is that neither of us would hire the other.


    "Once a solution is found, a compatibility problem becomes indescribably boring because it has only... practical importance"
  • Many of them. Search freshmeat for the specifics.
  • You are arguing past each other...


    Overemphasis on any facet of implementation, including (perhaps especially) requirements and design, will lead to a flawed product. (The greatest flaw is never getting done.) Finding overemphasis of the design phase is a very common problem.


    The opposite problem, not enough emphasis, is also prevalent. You can't simply wade in on any but the smallest projects.


    Overall, finding the right balance between these extremes is the key to success. Unfortunately this balance changes with every project. As cworley [slashdot.org] said, on simple projects planning till the cows come home is effective. Then again, so is wading in and hacking away.


    Unfortunately there are "forces" at work making this balancing act particularly hard. Management and most consultants want the design phase to be highly detailed and documented. Management wants all this in its standard form. The consultants have their own different forms. The code jockeys want to start hacking away. These activities are the ones that directly generate income for each of these groups. There are precious few whose responsibilities cross these boundaries. As a result, "failed" projects involve a great deal of finger pointing. The suits say the code jockeys didn't follow "The Plan"; the programmers say they coded to the available spec, but that the spec was incomplete or downright wrong.


    This leads to a vicious circle where more design procedures are put in place, and where programmers are tied more and more to operating in their little areas of the code. In essence the reaction to the failure ensures more of the activity that caused the failure!!


    You must seek balance in thought and source, only then a Jedi will you be.

  • ... is one that IMHO is not worth the time/money. I was frankly amazed that this one made it through the O'Reilly editorial process in its current form. As (I believe) someone on Amazon said, it looks like someone's PowerPoint slides on UML slapped into an O'Reilly cover.
  • > 2) Your boss usually wants to keep that code (that he paid for) to himself. Let's face it, that project probably took 4-5 man-years, and cost him (fully loaded costs) on the order of 1 million dollars. You don't GIVE that to your competition for free, or you go out of business, fast

    So, Open Source has no business models?

    You need to convince yourself and your boss that software is a service, not a product. Even Ballmer said that a few weeks ago (in his "how to eat your golden-egg-laying goose" speech).

    Unless (or, maybe, even if) you're a large company with market share, that code you spent $1M writing will soon be lost forever. You'll have wasted your time and your boss's money.

    Name any large, currently popular, application, not written by a large company, whose code dates back over 5 years. Only Open Source lives and evolves, unless you're M$ and your customers will blindly buy whatever you sell.

    Chris
  • You are correct that there needs to be a balance, and I agree that an iterative approach is the most logical one. Certainly, no one should "freeze" requirements in the middle of the project. Things change all the time. What is important is to have the requirements on paper, and a defined way to change them, so that everyone is on the same page.
  • In order to have a really good process improvement program, you have to start with at least a cursory understanding of:

    • System theory
    • Epistemology
    • Psychology
    • Statistics and Variation

    Most organizations attempting process improvement concentrate on the last item, but the whole bag is required. I'd say the biggest mistake is to believe you know what's wrong and how to fix it, simply because you've identified that you have a problem. Some people don't take this approach, and they say, "I don't know how to fix it, so I'll read this book and we'll do that."

    Books about specific process improvements are not based on any sort of universal truth. They are merely the results of case studies, and because some cases resemble one another, the results sometimes carry over to other projects. What makes some books better than others is an approach that sneaks in practices of personal theorization and evaluation for the reader.

    But even these books don't go far enough. Think about where revolutionary software practices come from. The Cleanroom environment started as a mere postulate, and IBM let Dr. Mills exercise the practice on a medium-size project. Mills dutifully collected data demonstrating the success and failure of his method, and proposed a better version for the next project. It wasn't one big brain fart that gave birth to a complete set of practices; rather, it was an exercise in science.

    Your post tells me that you can see past the all-or-nothing concept perpetuated by methodology sages. But there were a few things that worried me, specifically:

    Note that this doesn't strictly have to apply only to commercial software, but open source projects typically don't have the money to go out and buy the fancy stuff.

    Do you believe that the purchase of a $20,000 CASE tool would make a methodology work even if no one understands the system? I think that the OSS guys, if they formed quality circles and gave each other more process support than "RTFM", could develop immensely good processes compared to the status quo in both open and corporate software. It wouldn't require consultants and tools, and if they found that a tool would be worthwhile, it wouldn't take very long to build it. OSS is, after all, very good at writing programs for programmers. :) I think, though, much could be learned from the Squeak community, where the OSS process is very collaborative and people don't work on non-trivial things without telling other people. In the Linux community, there seems to be a big drive to be considered a wizard (or a Linus). In Squeak, you'll be noticed if you just produce a constant stream of bugfixes and minor enhancements.

  • >[for one client, I] maintain several hundred thousand lines of undocumented code... working without a sound-process base is irritating
    >...
    >For my other client, we did a formal spec and design, and finished the implementation ahead of schedule and under budget.

    Two different projects in two different states of development. I bet there were (or will be) times when both perspectives applied (or will apply) to both projects.

    If the second project is lucky enough to live beyond its initial writers' employment, there will be new programmers moaning about how poorly the project was built... they'll find hacks, they'll find code that's contrary to the documentation as proof. They'll cast aspersions upon your mother.

    Teamwork is important. Was this where the first project failed ("everybody quit or got fired")? If not, I bet when they initially wrote it, there were design & requirement specs and lots of documentation and research; they were probably proud of what they did and how they did it. Have you ever talked to one of the initial programmers about their design? If so, ask them to recall the design phase (not clouded by the day they were fired).

    A small amount of documentation and research is important; but my experience is: programmers want to do R&D; delivery is not important. Analyzing the process of analysis (writing a book on requirements) draws attention away from the reality of programming, which is (or should be) spent in integration and maintenance (which they don't teach; they just say it's a bad thing that can be avoided if you write more requirements and perfect the design on paper).

    I've spent many years writing tools to help programmers. I've written CASE systems. I once spent six months in the requirements phase of a to-be-bloated design-by-large-commitee project (I left that company before the design phase even started).

    I think every graduate knows they can write a better language.

    Can they deliver an application?

    Chris
  • Do you believe that the purchase of a $20,000 CASE tool would make a methodology work even if no one understands the system?

    Of course not, and that's half the problem. My point is that open source projects typically don't even have an opportunity to go down this rathole. It's usually companies that want to change themselves and decide that they should buy a packaged solution from a guru.

    What usually seems to happen is that some aggressive manager or wannabe brings one of these things in, to prove (maybe to themselves, I don't like to believe that they're all intentionally dishonest) that they're up with the latest technology.

    Basically, my take on this (and I'm fighting the same battle where I am, where we don't use anything particularly fancy) is that the best process support programmers can give each other is to simply write everything down in a reasonably well organized fashion. I started my current job only about 3 months ago, and since then I've been making a lot of noise about the lack of process documentation and (more importantly, IMHO) writing a process document (read: web page) myself explaining how to actually build our software from scratch. My manager agrees with me, but I'm trying to work with him to come up with some way to make people understand that

    1. writing things down is every bit as important as coding, and that

    2. the amount of time wasted because knowledge isn't at hand is significant, and is probably greater than the amount of time saved by not writing it all down.

    Quality circles are one of the few recent fads that I actually like, because they're simple, they're based on accumulating knowledge about best practices (or rather, "what works"), and they work off of recognition. Actually, of course, they're not all that new.

    I'd rather not worry about "really good" process improvement to start out with; it's better to simply take a deep breath and understand what's actually going on, and do a few simple things that people can understand that have a recognizable early payoff.

    I've decided over the years that engineering is 90% common sense applied to 10% specialized knowledge. The civil and mechanical and chemical engineers have understood this all along. You don't go build a chemical plant and add all sorts of fancy wheezes and gizmos and then skimp on the operating manual, because you'll have an explosion on your hands very quickly. But software's easy and cheap to change (you can guess what my real thoughts on that matter are, both as to veracity and consequences), and if it fails maybe the machine just reboots.

    We also studied "computer science", so we think that our work can be reduced to formula and applied as though it's the first law of thermodynamics. What we really are is a bunch of overgrown high school hot shots who think that because we can write code that makes computers do really neat things, we're somehow better than everyone else and basic principles of system design don't apply to us (either we can be fast and loose, or we can follow a magic cookbook and everything will turn out right in the end). Well, the old timers can still teach us a few lessons that they learned on the knee of their ancestors.

  • >> The problem is, most programmers don't want to do any real work: like you, they want to be rocket scientists. They want to design; they don't want to integrate or maintain, so they come up with silly excuses like yours.

    Funny, last time I checked most programmers don't want to write documentation, design specs, etc., etc. Hell, I hate it myself.

    Most programmers want to write code, no design, no nothing behind it. Just look at the majority of OS programs out there. That's what programmers do in their free time, without structure.

    I hate doing design, but I have also inherited projects where there were no specs at all, and I had to do the debugging and maintenance. Guess what? Very painful process, and 75% of the time was trying to figure out why something was done.

    And design specs and other docs are not static. They change with the project. They should be up-to-date, but they have to exist.

    And no, there is no point in writing a design spec for a program that has two function calls and all it basically does is "Hello world".

  • Hey, funny coward; Katz is published. See Running to the Mountain [amazon.com], Media Rants [amazon.com] or Virtuous Reality [amazon.com].

    And coming in February, Geeks : How Two Lost Boys Rode the Internet Out of Idaho [amazon.com].

    Apologies to the /. crew that the links above do not include the secret code that would generate a kickback for them.

    Bravery, Kindness, Clarity, Honesty, Compassion, Generosity

  • In short, we can treat integration and maintenance as requirements.


    Indeed. That's probably the best way I've seen this expressed.

    Like anything, it's possible to analyze a particular project and decide that integration with something else, or maintenance, are not important. But if people at least start with the viewpoint that these requirements are just as valid as any other functionality/performance requirements, there's some hope that they'll treat these issues with the respect that they deserve and make rational decisions.

  • Chris,
    This works for commercial software, but most software out there isn't commercial software. MOST software is things like an application to process an insurance claim, and tell the adjuster how likely a jury is to make an award. Or to tell you what is going to be on the evening news 15 seconds from now, and how long that segment will be.

    Yep, it's a service, to ONE company, that has the company's business rules built in. You never sell the thing (it may be charged back)

    When I say that the software goes nowhere, I'm not only talking about the source, but even the EXEs.
  • Let me begin by applauding your efforts in controlling the process at your company. I have worked at a number of places that did not believe that there was any engineering in software, and so I know that it can be hard. Let me give you one important tip: never make anyone feel that you think you are better than them. If you do so, it makes it hard to encourage their own improvement.

    It's important to be careful with mixing means and ends. Process documentation is one small step toward quality. If it is treated as a solution, the end will be a process document. I think you're right on in saying that engineering is only 10% specialized knowledge. But by calling the rest common sense you could encourage yourself to ignore mistakes in your reasoning. Common sense is non-analytical. It is experience-driven. A bad process is usually founded in common sense, and it is common sense that has made people believe that testing means quality, documentation is expensive, etc.

    Edwards Deming said that the problem is caused by bad experiences with improvement ideas that cause the same mistakes to be repeated ad infinitum. With a more scientific study of the workings of process improvements, it is possible to avoid the mechanisms that contribute to the vicious cycle of mediocre processes.

    The other 90% of the knowledge that an engineer needs is theoretical, and the engineer has to create that knowledge for himself or herself. At any point in a process, the engineer should be able to make a theory about improving that point. The priority of testing these theories is based on critical-path studies like Gantt charts, cause-and-effect diagrams, and Pareto rankings.

    In software, the SEI has designated configuration management as one of the first things that a company will have to work on. This view is aggregate, not company-specific. It may turn out that, for your company, requirements management is the most important concern. It is the responsibility of the company to make this ranking.

    Documentation is a good place to start, but you should encourage your manager to dedicate space in the budget for it. After a while, the money spent on improvements early in the process can be recovered from cuts later on in the process. Those cuts, of course, should be reflected as more improvement. If you are working feverishly on documentation in extra hours, this would be a signal to a good boss that your time spent elsewhere should be redistributed. Unfortunately, to a bad manager it seems like a good reason to call documentation a failure.

"I'm a mean green mother from outer space" -- Audrey II, The Little Shop of Horrors

Working...