News

A Unified Theory of Software Evolution 232

jso888 writes "Salon has a nice article today on Meir Lehman's work on how software evolves and is developed. Lehman's investigation of the IBM OS/360 development process became the foundation for Brooks' Law: "Adding manpower to a late software project makes it later." He is hopeful that his work will make software development less of an art and more of an engineering science."
This discussion has been archived. No new comments can be posted.

A Unified Theory of Software Evolution

Comments Filter:
  • by Mr. Neutron ( 3115 ) on Monday April 08, 2002 @09:43AM (#3302647) Homepage Journal
    Software doesn't evolve by chance, folks, it is DESIGNED by its CREATORS.

    Please check your crackpot theories and pseudo-science at the door. /. is a site for SERIOUS INTELLECTUAL DISCUSSION.

    Thank you.
    • by Anonymous Coward
      from www.dictionary.com

      evolution
      n.
      A gradual process in which something changes into a different and usually more complex or better form. See Synonyms at development.

      a. The process of developing.

      b. Gradual development.

      Seems like quite an appropriate term for 90% of the software projects I have ever worked on.
    • /. is a site for SERIOUS INTELLECTUAL DISCUSSION

      I don't know whether to laugh or cry.
    • Oh I don't know - when I worked in software testing, it certainly seemed like the developers were just making random variations in the code base, and we testers would weed out the broken ones. Every once in a great while some random variation would be an actual improvement, so we'd go with that, then start making random variations on that code base.

      • it certainly seemed like the developers were just making random variations

        This is called error seeding and is used to evaluate the performance of testing systems (including the people doing it).

        On the other hand, I dimly remember something about not doing it with production code....
    • Two thoughts:

      1) Software doesn't evolve by chance, folks, it is DESIGNED by its CREATORS. Please check your crackpot theories and psuedo-science at the door. /. is a site for SERIOUS INTELLECTUAL DISCUSSION.

      So I take it you are in favor of creationism?

      Sorry, but the parallels were so obvious ... ;)

      2) For extra ammo, Lehman also has expanded the graphs and data from his original studies in the 1970s. Taken together, they show most large software programs growing at an inverse square rate -- think of your typical Moore's Law growth curve rotated 180 degrees -- before succumbing to over-complexity.

      I am not holding my breath waiting for Microsoft to keel over into a monstrous pile of cyberwreckage any time soon.
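
      The "inverse square" growth the parent quotes is often written as a simple recurrence in Lehman's later papers (sometimes credited jointly to Turski): each release adds an increment inversely proportional to the square of the current size. Below is a minimal sketch of that model; the starting size and effort constant are made up purely for illustration, not taken from Lehman's data.

      ```cpp
      // Toy sketch of the "inverse square" growth idea (not Lehman's data or code):
      // each release adds size inversely proportional to the square of the current
      // size, S(i+1) = S(i) + E / S(i)^2, so growth starts fast and flattens out --
      // the "Moore's Law curve rotated 180 degrees" shape described in the article.
      #include <cstdio>

      int main() {
          double size = 50.0;           // starting size in some unit, e.g. modules (made up)
          const double effort = 2.0e5;  // effective effort constant E (made up)

          for (int release = 1; release <= 20; ++release) {
              size += effort / (size * size);   // each release adds less than the last
              std::printf("release %2d: size ~ %6.0f\n", release, size);
          }
          return 0;
      }
      ```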

    • Software doesn't evolve by chance, folks, it is DESIGNED by its CREATORS

      I'm not sure whether this was supposed to be funny or whether other readers are interpreting it as funny. There have also been a few stabs at the parallels with life (evolution vs creation, etc). Hrm...whatever. From the article: "The gap between biological evolution and artificial systems evolution is just too enormous to expect to link the two,"

      In all seriousness, I've seen so many project managers use evolution (not the theory explained in the article, however) as some sort of methodology for their projects, and I have not seen any of those projects truly succeed. The idea that you throw something, anything, out there, find out what's good/bad with it, then re-iterate the design and development based on findings is such a random and expensive process. I've seen so many programmers put in half-assed functionality, especially on front-end code, just so they can let testers and "usability" experts find the problems and fix them in the next release. This is like throwing a chunk of randomly chipped wood out and hoping that others can tell you how to sand it down to something usable.

      Cooper makes this analogy in "The Inmates are Running the Asylum" (link here [amazon.com]) and bashes project teams that take on this sort of process of evolution. He poses a process of almost completely up-front design by building to a theoretical user persona and culling out complexity by ditching features that will never be used by this persona.

      Now Cooper's views don't necessarily contradict Lehman's (at least from what I've seen in the article). In fact at a glance they seem to blend in nicely.

      From the article, again: Figure out how to control the various feedback loops -- i.e. market demand, internal debugging and individual developer whim -- and you can stave off crippling over-complexity for longer periods of time

      It's clear that he means that we, as programmers, should be willing to throw away a shitload of code. I agree with this. I think there's a huge belief in re-use (I tend toward it myself) among programmers for both practical and personal (pride... having spent weeks on certain code) reasons. But there are so many cases where the re-use of a small feature among others in bloated code can really complicate and bog down the overall code-base, or where the functionality of certain re-used code doesn't really fit, but so much investment has been made that it might as well be re-used.

      Developers really do need to listen to the feedback provided by the marketplace and other forces. I'm not certain if the unified theory is so unified, but it's a valid perspective and blends in with other published sentiments on software development methodology.

      'nuff rambling...

  • Brooks' Law (Score:2, Interesting)

    by TheToon ( 210229 )
    Brooks' Law: "Adding manpower to a late software project makes it later."

    This can be simplified: "Adding manpower to a software project makes it later."


    There are rarely that many programmers needed for a given task anyway. You need a project leader and lots of monkeys to test it... very few projects should have more than 10 programmers (if any).
    • by flynt ( 248848 ) on Monday April 08, 2002 @09:49AM (#3302665)
      very few projects should have more than 10 programmers (if any).

      You realize you just suggested that very few software projects should have any programmers. How is the project going to get completed without anyone working on it?
      • by alanwj ( 242317 )
        You realize you just suggested that very few software projects should have any programmers. How is the project going to get completed without anyone working on it?

        My boss seems to think that having a lot of meetings about it will do the trick.

        • And the meetings will continue until he discovers why no work is getting done!

          There's a guy I work with, and that should be his .sig. He rarely shows up for a meeting without invoking this humorous tagline. :-P
        • My boss seems to think that having a lot of meetings about it will do the trick.

          At some point you will have meetings to discuss the meetings.

          At a later point you will do nothing else than having meetings, but your project will still be further and further delayed....
      • Uhm.... yes... and I even did a preview.

        Maybe there is a deeper truth in it. Maybe many software projects would be better off without programmers? :)
      • Hey, I can think of lots of software projects that shouldn't have any programmers...

        But seriously, one interesting implication is that forks/parallel projects can be a Good Thing! If you have two 10-person projects, and throw one away, you're better off than with a 20-person project.

        (Linux filesystems seem to have taken this to heart... ;)
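
        The arithmetic behind that claim comes straight from The Mythical Man-Month: n people need n*(n-1)/2 pairwise communication channels, so splitting one 20-person project into two 10-person projects roughly halves the channels. A quick sketch of the comparison:

        ```cpp
        // Sketch of the pairwise-communication arithmetic from The Mythical Man-Month:
        // n people need n*(n-1)/2 communication channels, so two separate 10-person
        // projects carry far fewer channels than one 20-person project.
        #include <cstdio>

        int channels(int n) { return n * (n - 1) / 2; }

        int main() {
            std::printf("one 20-person team : %d channels\n", channels(20));      // 190
            std::printf("two 10-person teams: %d channels\n", 2 * channels(10));  // 2 * 45 = 90
            return 0;
        }
        ```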

      • >How is the project going to get completed without anyone working on it?

        Read my lips: E-VO-LU-TION

        Example:
        Start with "printf("Hello World\n");" and leave it in a warm, wet place for a few months, feed it with some .po, and .in files, and see what you get : GNU/Hello !

        I have a strong belief that's what they did with Mozilla :)
    • Re:Brooks' Law (Score:2, Insightful)

      by jeffy124 ( 453342 )
      your simplification is somewhat faulty. If you have 3 people developing a large complex system, at some point it becomes obvious they'll need more than three in order to make the deadline, even if that deadline is years away, and especially if all three lack serious knowledge in a particular topic, such as databases or UI design. Think of what slashcode might be like if cmdrtaco were still the only guy working on it.
    • Re:Brooks' Law (Score:3, Insightful)

      by ACK!! ( 10229 )
      Someone else called this oversimplification. It all depends on the project.

      The number of programmers needed on a project depends upon the number of software modules in said project. Each programmer works on their own module and coordinates with the other programmers and project managers so that the modules integrate and communicate with each other. I am not a project manager, so I do not have the magic formula, but there needs to be some serious research in the IT industry into how many programmers are needed per project for the number of independent sections or modules of software being created.

      Then and only then will you get better-utilized output from a large group of programming talent.

      10 may be too few programmers for some huge program and too many for plenty of other projects.

      • Right. And along the way it'd be nice if they figure out how to know how many "modules" a project will require, since currently that's as impossible to accurately predict as is the manpower requirement.

        As long as there's a product management group who can drop "Oh by the way" new requirements on the product a week after code freeze, there'll always be a problem.
        • Very good point. As long as you have new modules being popped in without consideration to the resources needed then you are going to have problems.

          The funny thing is that I have seen two other problems arise from Brooks' Law.

          1) Not wanting to justify more resources for a project, managers simply roll features into existing modules that should really be separated out. The same number of programmers end up getting hit twice as hard.

          2) Managers have actually quoted this law to others as an excuse for not assigning more resources to a project. In smaller software houses the problem is often getting programmer resources for a project at all. Only the larger houses run into the problem of just throwing people at a problem to make their deadlines and not having that work the way they wanted it to.

    • very few projects should have more than 10 programmers (if any)

      I have to disagree there. A project without any programmers will certainly face some difficulties in the end ;-)

      But 10 is a good upper bound. That is still few enough that they can all know each other and talk to each other when questions come up.
  • by blirp ( 147278 ) on Monday April 08, 2002 @09:50AM (#3302666)
    While "Brook's law" might be a law, it's only useful in retrospect. Most software projects have no idea how far behind they really are. So basically, you can always add manpower, you're really only half way through anyways...
    • by JamesOfTheDesert ( 188356 ) on Monday April 08, 2002 @12:55PM (#3303727) Journal
      So basically, you can always add manpower, you're really only half way through anyways...

      True. I used to file status reports using Zeno's Work Estimation. On each report I just halved the percentage of remaining work.

      • While "Brooks' Law" might be a law, it's only useful in retrospect. Most software projects have no idea how far behind they really are. So basically, you can always add manpower; you're really only halfway through anyways...

      I don't understand your point. If Brooks' Law is valid, then it's useful not only in retrospect, but at any time, I think.

      While it may be true that most software projects have no idea how far behind they really are, that has little to do with Brooks' Law. Brooks' Law doesn't say "adding manpower to a project that might be late will only make it late", it says "adding manpower to a project that's already late will only make it later". Certainly, most software projects know if they are already late (they may not know by how much), but if Brooks' Law holds (in retrospect or not), do they really want to be even later?

      Now if you are referring to the fact that projects sometimes miss milestones and are predicted to be late, when in fact they won't be, what's the point of adding manpower in this situation?

      It seems to me that Brooks' Law holds at all phases of a project's lifetime. It even holds at the project start. If the project is already late before it's started, it will certainly get later if you staff it. :-) Better to start a different project with a more realistic deadline, no?

      Having said this, I understand that some people have done some excellent work on how you can avoid Brooks' Law to some extent. This work spells out exactly how to add manpower to a late project in ways that actually help. I don't have the reference handy, though.

  • Mythical Man Month???
  • by Anonymous Coward
    Where's the guy with the .sig "it takes nine months to bear a child, no matter how many women you assign to the task" when you need him?!?!?!?!?

  • First it's : "You'll have to forgive me," apologizes Lehman at one point, sifting through a pile of research papers on a nearby shelf. "Since I lost my secretary, I can't seem to find anything." . And in the last sentence of the article it says: ...a man in search of both fulfillment and a little revenge"
    Poor, poor man; he'll never find it I'm afraid!
  • the site is slashdotted already... does anyone have a mirror?
    • i found it! :)) (Score:1, Redundant)

      by fabiolrs ( 536338 )
      I refreshed it and it appeared. If anyone has trouble loading it, this is the content:

      April 8, 2002 | The office of Meir "Manny" Lehman is a cozy one. Located on the outer edge of the Imperial College of Technology campus in South Kensington, London, it offers room for all the basic amenities: a desk, two chairs, a Macintosh G4 and a telephone. Still, for a computer scientist nearing the end of a circuitous 50-year career, the coziness can be a bit confining.

      "You'll have to forgive me," apologizes Lehman at one point, sifting through a pile of research papers on a nearby shelf. "Since I lost my secretary, I can't seem to find anything."

      The pile, a collection of recently published papers investigating the topic of software evolution, a topic Lehman helped inaugurate back in the 1970s, is something of a taunting tribute. Written by professional colleagues at other universities, each paper cites Lehman's original 1969 IBM report documenting the evolutionary characteristics of the mainframe operating system, OS/360, or his later 1985 book "Program Evolution: Processes of Software Change," which expands the study to other programs. While the pile's growing size offers proof that Lehman and his ideas are finally catching on, it also documents the growing number of researchers with whom Lehman, a man with dwindling office space and even less in the way of support, must now compete.

      "And to think," says Lehman, letting out a dry laugh. "When I first wrote about this topic, nobody took a blind bit of notice."

      Software evolution, i.e. the process by which programs change shape, adapt to the marketplace and inherit characteristics from preexisting programs, has become a subject of serious academic study in recent years. Partial thanks for this goes to Lehman and other pioneering researchers. Major thanks, however, goes to the increasing strategic value of software itself. As large-scale programs such as Windows and Solaris expand well into the range of 30 to 50 million lines of code, successful project managers have learned to devote as much time to combing the tangles out of legacy code as to adding new code. Simply put, in a decade that saw the average PC microchip performance increase a hundredfold, software's inability to scale at even linear rates has gone from dirty little secret to industry-wide embarrassment.

      "Software has not followed a curve like Moore's Law," says University of Michigan computer scientist John Holland, noting the struggles of most large-scale software programs during a 2000 conference on the future of technology. "In order to make progress here it is not simply a matter of brute force. It is a matter of getting some kind of relevant theory that tells us where to look."

      For Lehman, the place to look is within the software development process itself, a system Lehman views as feedback-driven and biased toward increasing complexity. Figure out how to control the various feedback loops -- i.e. market demand, internal debugging and individual developer whim -- and you can stave off crippling over-complexity for longer periods of time. What's more, you might even get a sense of the underlying dynamics driving the system.

      Lehman dates his first research on the topic of software evolution back to 1968. That was the year Lehman, then working as a researcher at IBM's Yorktown Heights facility, received an assignment to investigate IBM's internal software development process. Managers at rival Bell Labs had been crowing about per-developer productivity, and IBM managers, feeling competitive, wanted proof that IBM developers were generating just as many lines of code per man-year as their AT&T counterparts.

      Lehman looked at the development of OS/360, IBM's flagship operating system at the time. Although the performance audit showed that IBM researchers were churning out code at a steady rate, Lehman found the level of debugging activity per individual software module to be decreasing at an equal rate; in other words, programmers were spending less and less time fixing problems in the code. Unless IBM programmers had suddenly figured out a way to write error-free code -- an unlikely assumption -- Lehman made a dire prediction: OS/360 was heading over a cliff. IBM, in stressing growth over source-code maintenance, would soon be in need of a successor operating system.

      Although IBM executives largely ignored the report, Lehman's prediction was soon borne out. By 1971, developers had encountered complexity problems while attempting to install virtual memory into the operating system, problems which eventually forced the company to split the OS/360 code base into two, more easily manageable offshoots. The linear growth curve that seemed so steady in the 1960s suddenly looked like the trail of a test missile spiraling earthward.

      Lehman's report would eventually earn a small measure of fame when University of North Carolina professor and former OS/360 project manager Frederick P. Brooks excoriated the IBM approach to software management in his 1975 book "The Mythical Man Month." Using Lehman's observations as a foundation for his own "Brooks Law" tenet -- "adding manpower to a late software project makes it later" -- Brooks argued that all software programs are ultimately doomed to succumb to their own internal inertia.

      "Less and less effort is spent on fixing original design flaws; more and more is spent on fixing flaws introduced by earlier fixes," wrote Brooks. "As time passes, the system becomes less and less well-ordered. Sooner or later the fixing ceases to gain any ground. Each forward step is matched by a backward one. Although in principle usable forever, the system has worn out as a base for progress."

      By 1975, Lehman, with the help of fellow researcher Laszlo Belady, was well on the way to formulating his own set of laws. A quarter century after their creation, the laws read like a mixture of old developer wisdom and common textbook physics. Take, for example, Lehman's "Second Law" of software evolution, a software reworking of the Second Law of Thermodynamics.

      "The entropy of a system increases with time unless specific work is executed to maintain or reduce it."

      Such statements put Lehman, who would leave IBM to take a professorship at Imperial College, into uncharted waters as a computer scientist. Halfway between the formalists, old-line academics who saw all programs as mathematical proofs in disguise, and the realists, professional programmers who saw software as a form of intellectual duct tape, Lehman would spend the '70s and '80s arguing for a hybrid point of view: Software development can be predictable if researchers were willing to approach it at a systems level.

      "As I like to say, software evolution is the fruit fly of artificial systems evolution," Lehman says. "The things we learn here we can reapply to other studies: weapon systems evolution, growth of cities, that sort of thing."

      That Lehman conspicuously leaves out biological systems is just one reason why his profile has slipped over the last decade. At a time when lay authors and fellow researchers feel comfortable invoking the name of Charles Darwin when discussing software technology, Lehman holds back. "The gap between biological evolution and artificial systems evolution is just too enormous to expect to link the two," he says.

      Nevertheless, Lehman aspires to the same level of intellectual impact. While he was in retirement during the early 1990s, his early ideas jelled into one big idea: What if somebody were to formulate a central theory of software evolution akin to Darwin's theory of natural selection? In 1993, Lehman took an emeritus position at Imperial College and began work on the FEAST Hypothesis. Short for Feedback, Evolution and Software Technology, FEAST fine-tunes the definition of evolvable software programs, differentiating between "S-type" and "E-type": S-type or specification-based programs and algorithms being built to handle an immutable task, and "E-type" programs being built to handle evolving tasks. Focusing his theory on the larger realm of E-type programs, Lehman has since expanded his original three software laws to eight.

      Included within the new set of laws are the Law of Continuing Growth ("The functional capability of E-type systems must be continually increased to maintain user satisfaction over the system lifetime") and the Law of Declining Quality ("The quality of E-type systems will appear to be declining unless they are rigorously adapted, as required, to take into account changes in the operational environment"). For added measure, Lehman has also thrown in the Principle of Software Uncertainty, which states, "The real world outcome of any E-type software execution is inherently uncertain with the precise area of uncertainty also unknowable."

      While the new statements still read like glossed-over truisms, Lehman says the goal is to get the universal ideas on paper in the hopes that they might lead researchers to a deeper truth. After all, saying "objects fall down instead of up" was a truism until Sir Isaac Newton explained why.

      "Whenever I talk, people start off with blank faces," Lehman admits. "They say, 'But you haven't told us anything we didn't already know.' To that I say, there's nothing to be ashamed of in coming up with the obvious, especially when nobody else is coming up with it."

      For extra ammo, Lehman also has expanded the graphs and data from his original studies in the 1970s. Taken together, they show most large software programs growing at an inverse square rate -- think of your typical Moore's Law growth curve rotated 180 degrees -- before succumbing to over-complexity.

      Whether the curves serve as anything more than a conversation-starter is still up for debate. Chris Landauer, a computer scientist at the Aerospace Corporation and a fellow guest speaker with Lehman at a February conference on software evolution at the University of Hertfordshire, was impressed by the Lehman pitch.

      "He has real data from real projects, and they show real phenomena," Landauer says. "I've seen other sets of numbers, but these guys have something that might actually work."

      At the same time, however, Landauer wonders if the explanation for similar growth trajectories across different systems isn't "sociological." In other words, do programmers, by nature, prefer to add new code rather than substitute or repair existing code? Landauer also worries about whether the use of any statistic in an environment as creative as software development leads to automatic red herrings. "I mean, how long does it take a person to come up with a good idea?" Landauer asks. "The answer is we just don't know."

      Michael Godfrey, a University of Waterloo scientist, is equally hesitant but still finds the Lehman approach useful. In 2000, Godfrey and a fellow Waterloo researcher, Qiang Tu, released a study showing that several open-source software programs, including the Linux kernel and fetchmail, were growing at geometric rates, breaking the inverse squared barrier constraining most traditionally built programs. Although the discovery validated arguments within the software development community that large system development is best handled in an open-source manner, Godfrey says he is currently looking for ways to refine the quantitative approach to make it more meaningful.

      "It's as if you're trying to talk about the architecture of a building by talking about the number of screws and two-by-fours used to build it," he says. "We don't have any idea of what measurement means in terms of software."

      Godfrey cites the work of another Waterloo colleague, Rick Holt, as promising. Holt has come up with a browser tool for studying the degree of variation and relationship between separate offshoots of the original body of source code. Dubbed Beagle, the tool is named after the ship upon which Charles Darwin served as a naturalist from 1831 to 1836.

      Like Landauer, Godfrey expresses concern that a full theory of software evolution might be too "fuzzy" for most engineering-minded programmers. Still, he credits Lehman for opening the software field to newer, more intriguing lines of inquiry. "It's the gestalt 'Aha' of his work that I find more interesting than the numbers," Godfrey says.

      For Lehman, the lack of a scientific foundation to the software-engineering field is all the more reason to keep digging. Fellow researchers can quibble over the value of judging software in terms of total lines of code, but until they come up with better metrics or better theories to explain the data, software engineering will always be one down in the funding and credibility department. A former department head, Lehman recalls the budgetary battles and still chafes over the slights incurred. Now, as he sits in a cramped office, trying to recruit new corporate benefactors and a new research staff, he must deal once again with those who label software development a modern day form of alchemy -- i.e. all experiment but no predictable result.

      "In software engineering there is no theory," says Lehman, echoing Holland. "It's all arm flapping and intuition. I believe that a theory of software evolution could eventually translate into a theory of software engineering. Either that or it will come very close. It will lay the foundation for a wider theory of software evolution."

      When that day comes, Lehman says, software engineers will finally be able to muscle aside their civil, mechanical and electrical engineering counterparts and take a place at the grown-ups' table. As for getting bigger offices, well, he sees that as a function of showing the large-scale corporations that fund university research how to better control software feedback cycles so their programs stay healthier longer. Until then, the search for a theory has rendered Lehman less of a Darwin and more of an Ahab -- a man in search of both fulfillment and a little revenge.
  • by software_non_olet ( 567318 ) <software@non.olet.de> on Monday April 08, 2002 @09:58AM (#3302705)

    "When I first wrote about this topic, nobody took a blind bit of notice."

    No, sir, I did, and so did many colleagues who were also interested in good, timely work. We lent your books to each other with the notion "that's something you should read".

    Great to hear that you are still alive and still enjoying giving programmers and their managers something to look at and something worth reading and thinking about.

    Youngsters, better pay respect to this old software camel with the hole in the sole of his shoe (and probably also in his all-too British pullover), or I DDOS your toilet!

  • by gelfling ( 6534 ) on Monday April 08, 2002 @09:58AM (#3302706) Homepage Journal
    "Unless IBM programmers had suddenly figured out a way to write error-free code -- an unlikely assumption -- Lehman made a dire prediction: OS/360 was heading over a cliff. IBM, in stressing growth over source-code maintenance, would soon be in need of a successor operating system."

    Which means that commercial systems don't so much evolve as stub their growth paths out and switch direction or spawn new generations, because embedded complexity has killed off the feasibility of maintaining them. In other words, all new releases are the cause of, and ultimately an attempt to escape from, the chimera that is overly complex code. In commercial terms this should be astounding. We're paying to gronk up our own code because we erroneously believe the NEXT version will be something radically new and elegant, which of course it can't be.

    New Version "x+1.y" is simply an ejection seat.
    • I think you just explained what happened to Win3.1, Win 95 and win ME. It was bound to happen considering the tragic code building the foundation.
    • by Anonymous Coward
      I worked on a system running OS/360, up to Release 23 or so, when IBM 'retired' it.

      We installed new Releases about once every 6 months. IBM also had 'patches' available for about 19,000 known bugs.

      These patches were not incorporated into the latest release because each of them, if installed, broke some other aspect of the OS.

      We, and every other site, only installed those patches needed to work around problems that the particular site encountered. And you always hoped that today's patch would not break something else that your users needed.

      • These patches were not incorporated into the latest release because each of them, if installed, broke some other aspect of the OS. We, and every other site, only installed those patches needed to work around problems that the particular site encountered. And you always hoped that today's patch would not break something else that your users needed.

        hmm, that sounds suspiciously like the Linux 2.4 "stable" kernel..
    • Take, for example, Lehman's "Second Law" of software evolution, a software reworking of the Second Law of Thermodynamics.

      "The entropy of a system increases with time unless specific work is executed to maintain or reduce it."

      As evidenced by the back of my Subaru.
  • Blame it on C++ (Score:3, Insightful)

    by Caractacus Potts ( 74726 ) on Monday April 08, 2002 @10:01AM (#3302719)

    I'm not attempting to flamebait here, just submitting an observation. It seems to me that many of the complexity issues can be overcome by designing better languages. I've never stopped scratching my head over the perseverance of old languages like C++ and FORTRAN. Sure, they are extremely useful in the hands of experienced folks, but they need to die. They were good solutions to problems decades ago, but so much has been learned since then, and the constraints of scarce computer resources and CPU speed have shifted a lot.

    • Re:Blame it on C++ (Score:3, Insightful)

      by Anonymous Coward
      New programming languages won't fix this particular problem. Even a higher-level language has some very disturbing side effects, since it consists of smaller sub-parts (assembler or whatever). A high-level language can introduce very annoying bugs that are much harder to track, since their origin is hidden from the coder. And besides, even a high-level language will eventually be bloated and full of old, not-yet-removed code.
    • C++ isn't that old. C, yes, but C++ is one of the newer languages.
      • Depends on what you'd call old... C++ has been around (in various incarnations) since 1980 or 1983, depending on what you want to look at. I'm looking at Stroustrup now:


        Earlier versions of the language, collectively known as "C with Classes" have been in use since 1980. ... The first use of C++ outside a research organization started in July 1983.

        The name C++ was coined by Rick Mascitti in the summer of 1983.

        Sometime during 1987, it became clear that formal standardization of C++ was inevitable and that we needed to start preparing the ground for a standardization effort. ... An initial draft standard for public review was produced in April 1995. A formally approved international C++ standard is expected in 1998.


        So it is accurate to say that C++ has only been standardized recently. But unless you're comparing C++ to Fortran/Simula/Algol, it is just wrong to call it "new".
    • I've seen hideous code written in C++ and equally hideous code written in Java. And doing system level stuff in Java feels awkward and fiddly. And that's not even getting into the fact that there are still (a lot of) asm programmers out there and those guys can produce tremendous speed improvements by hand optimizing the slow bits of the program in asm. Compare the speed of gogo versus lame, as one example.

      There's a lot of piss poor code out there because there are a lot of piss poor programmers out there -- people who should not be in this industry, people who took a couple of classes in VB and think that qualifies them for the title of "Programmer." And they can still bullshit their way past hiring managers with their shiny buzzwords.

    • Re:Blame it on C++ (Score:3, Interesting)

      by renehollan ( 138013 )
      C++ certainly gives one many ways to shoot one's self in the foot. But, with power comes responsibility. Some are up to the task, and others aren't. Attempts to simplify the language just shift the problem elsewhere: Java, lacking a proper pointer type in an attempt to ease memory management burdens, foists automatic garbage collection on one.

      Now, this wouldn't be bad, if the skilled programmer had, at his disposal, the means to tweak the garbage collector implementation to suit a particular application -- presuming that there is one and only one universally "best" garbage collector is arrogant and short-sighted. The trouble is, even though it may be possible to replace the Java garbage collector, one can't do it with a Java implementation: the language is not closed with regard to its run-time requirements -- garbage collectors need to manage raw memory via, ta da, pointers! This lack of closure, preventing a language's run-time library from being expressed in the language itself, is most inelegant.

      Of course, the C and C++ aficionados will point to this closure as the very beauty of their preferred language. Let's call such languages "complete". Alas, the linguistic power necessary to make a language complete has now been put into the hands of the neophyte programmer (was that delete or delete[], and when does it matter?).

      It doesn't take much inspiration to see that subsets of a complete language, while not complete themselves, may still be powerful enough to write useful programs. With abstractions, disciplined programmers try to fake this: the C++ "smart pointer" exercise is classic. Unfortunately, for all the effort put into smart pointers and per-class address-of operator definitions, you can still get a real pointer to an object which does not implement such a monadic operator. What you really want is for the compiler to say, "Bad programmer: using a real pointer!" either as a warning or as a fatal error (well, maybe not so harshly, but you get the idea).
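
      For what it's worth, here is a minimal sketch of the "smart pointer" discipline described above, written in present-day C++ rather than the C++ of this discussion (deleted functions and std::addressof came later, so treat them as an assumption of the sketch): a class can delete its unary operator& so that naively taking a raw pointer fails to compile, yet the loophole complained about remains, since std::addressof (or just holding a reference) still yields a real address.

      ```cpp
      // Sketch of the "smart pointer" discipline: forbid naive address-of on a class,
      // hand out access only through a smart handle, and note the remaining loophole.
      #include <memory>
      #include <cstdio>

      class Guarded {
      public:
          explicit Guarded(int v) : value_(v) {}
          int value() const { return value_; }

          // Raw address-of is forbidden for this class: unary & is deleted.
          Guarded* operator&() = delete;

      private:
          int value_;
      };

      int main() {
          auto p = std::make_shared<Guarded>(42);   // the sanctioned "smart" handle
          std::printf("via smart pointer: %d\n", p->value());

          // Guarded* raw = &*p;                    // would not compile: operator& is deleted
          Guarded* raw = std::addressof(*p);        // ...but a real pointer escapes anyway
          std::printf("via raw pointer:   %d\n", raw->value());
          return 0;
      }
      ```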

      • You are correct that there are certain operations that are outside (deliberately!) of the Java program model. This is a good thing.


        access to machine registers and memory

        architecture specific machine instructions

        transfer of execution to an arbitrary address

        coerce object refs to addresses and back

        invoke OS services


        This doesn't mean that you can't write GC in Java! IBM implemented a JVM and GC system entirely in Java, called Jalapeno. To do this, they created a Java class called "Magic" that had empty methods for these services which any Java compiler could build. Then, the internal Jalapeno VM compiler would recognize calls to the Magic class, verify that what they are compiling is a valid part of the JVM and inline appropriate machine code where these calls occur.


        Now, all GC systems can be written in reference to this Magic class and porting the VM is simply a matter of generating appropriate machine code for these half-dozen methods. And you get all the security of Java's automatic memory management model!


        Check the ACM's OOPSLA Conference Proceedings, 1999, Implementing Jalapeno in Java or www.research.ibm.com/jalapeno [ibm.com] for the paper.

        • Then, the internal Jalapeno VM compiler would recognize calls to the Magic class, verify that what they are compiling is a valid part of the JVM and inline appropriate machine code where these calls occur.

          At first glance, this looks like cheating: the Java GC requires VM support. Is the VM written in Java? If not, the language is not complete, as I've defined complete.

          • Ah! Now I see, they made Java complete by redefining the language (the Magic class, as you describe it, is now "special"), and building an appropriate compiler.

            This still strikes me as cheating: they changed the language to make it complete. Furthermore, the "complete" language is available only when compiling the JVM, and not when a general "I know what I'm doing" flag is set (though that's probably trivial to change). Finally, effecting language completeness via reserved words instead of symbols which are syntactically "more sugary" strikes me as clumsy, though I've not looked at their GC implementations using this technique.

            There are two problems here: language completeness, and restriction of complete languages to particular subsets. IBM appears to have clumsy solutions to both issues w.r.t. Java, with the latter easier to clean up. I doubt that there would be an elegant solution to the original problem of language completeness vis a vis Java that they face, so I can't be too critical of their "Magic" class hack.

            • Well, Java isn't SUPPOSED to be complete by your definition. The point to not having an "I know what I'm doing" flag for unsafe operation is that a) it's not really necessary, except for implementing a tiny tiny bit of the way-down-low-guts of the runtime and b) it makes security a lot simpler.

              Hostile code should not have the option of saying that "it's OK, I know what I'm doing." You could use multi-layer zones like MSFT did with .NET, but then you're undermining the appeal of the system - that everything is safe and you don't really have to trust anybody to run their code. It also prevents trojans from riding along in "trustworthy" code, or just stupid things like unintentional bad pointer arithmetic or array-bounds errors in non-hostile code. (I know, I know - JNI, but "Pure" Java programs are safe)

              • Well, Java isn't SUPPOSED to be complete by your definition.

                Right. And that makes it inelegant as a general-purpose programming language. As an "easy to use" language geared toward virtual machine interpretation in various "safe" (in the sandbox sense) environments, it's fine, but its lack of completeness means that VM implementations (or the compilers that compile them) have to be written in a different language.

                The point to not having an "I know what I'm doing" flag for unsafe operation is that a) it's not really necessary, except for implementing a tiny tiny bit of the way-down-low-guts of the runtime and b) it makes security a lot simpler.

                Perhaps, but security and programmers' safety nets should not be provided by making a language less complete, IMHO, but rather by controlling the use of unsafe language features, and building an appropriate run-time sandbox (which recent Java incarnations do surprisingly well, if in a complex way: signed code is a nice concept). These are separate issues: a VM can trap ill-behaved programs, so overly restricting programmers from writing them shouldn't be necessary (if you're willing to put up with the equivalent of a run-time segfault, for example)

                So, it is possible to permit poorly-written code (by the programmer who should have been content with the safety nets that automatic GC provides, for example, but wasn't), and still retain security.

                What I want is to be able to write the low-level, tricky, blow-up-in-your-face stuff in the same language as the higher-level stuff, and be able to tell the compiler, "Don't let me do this -- it's easy to make a mistake, and the power is not needed."


    • It's simply true; there is no other language that even comes close to filling all the roles of C++. Most of the languages people advocate for taking a certain niche from C++ are implemented in C++.


      It's a very difficult language to learn, and hard to use properly. It has lots of syntax, and many idiosyncrasies. Yet it yields you control of the machine in the manner of C, adding in a lot of the niceties of high-level languages for those who know how to use them.


      You might argue that it's less error-prone for certain programmers to use a more specialized and high-level language for certain tasks. You might make a good case that C++ should not be someone's first language (I say learn assembly, then C, then C++, then some high-level lang).


      What you cannot say is that C++ should be ditched. It is filling a vast role in real-world programming, where nothing else can compete.

      • I think it's a terrific point that C++ is an inappropriate development tool for many projects.

        Of course you don't want to (or couldn't) build an application that needs low-level control over a computer. Advanced databases, compilers, messaging software (a la TIBCO), or operating systems would be appropriate uses for C++ (at least IMHO). If a project is too large to use C or ASM, C++ still offers the lower level control and the advantage of OOP.

        But if you're programming a business application with extremely complex business logic, Java lets you spend more time worrying about the logic than memory management. If you're writing an accounting app for the 3 ladies in HR, then VB/Access will let you whip it up in a day. (Anyone can write a somewhat neat GUI app in VB in 30 minutes. I've not found that to be the case with C++ on any platform). If you're parsing 40 GB of web logs for a particular IP, then Perl might be what you're looking for....

        And all this is moot if you are a guru in a single language. If you know C++ inside and out, then why bother learning Java to write the business app (compatibility, maintenance aside)?

        I suppose all I'm saying is that, all else being equal, there are circumstances when c++ is far too much to do a given task, and other language choices are faster to develop and easier to debug. And if you happen to be an expert in one language, it is difficult to make a totally objective assessment of what language choice is best.
    • Re:Blame it on C++ (Score:2, Interesting)

      by Latent Heat ( 558884 )
      Having taken Brinch Hansen's data-structures course at Caltech more years ago than I care to admit and having been indoctrinated by Brinch Hansen in the Pascal religion, I always thought that a properly restrictive language could help a lot with reliability.

      Ada was supposed to be that silver bullet (although Pascal diehards have their issues with Ada), but the darned trouble is that Ada is not showing a large-enough (maybe 5 percent) quantifiable improvement over C++.

      Java is another of these silver bullets, and the claim is that people churn out a lot more stuff, but I have not heard about reliability.

      Maybe how you structure the design and the code implementation is more important than the hand-holding (or hand-slapping) of a particular language.

      • Thank you! I'm _not_ the only one, then!

        My simple gripe with C-derived languages is that their complexity means I have to dedicate more of my brain to the language and less to the program. Simply, there's more to get wrong than with Pascal-family languages. Oh, and I'd much rather arrays were bounds-checked so that writing out of bounds crashed rather than corrupting memory, so much easier for debugging...

        C's powerful - Pascal is pretty safe. Most of the time, I don't need the power so I'll take the safety.
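
        A tiny illustration of that bounds-checking point, with std::vector::at standing in for Pascal-style range checks (just a sketch, not a claim about any particular compiler's behaviour): the unchecked write compiles happily and would corrupt adjacent memory at run time, while the checked write fails loudly at the exact line of the mistake.

        ```cpp
        // Sketch: unchecked vs checked out-of-bounds writes. The raw write is left
        // commented out because it is undefined behaviour -- it typically scribbles
        // over a neighbouring variable instead of crashing, which is exactly the
        // hard-to-debug failure mode being complained about. std::vector::at stands
        // in for Pascal-style range checks and fails loudly instead.
        #include <cstdio>
        #include <stdexcept>
        #include <vector>

        int main() {
            int raw[4] = {0, 0, 0, 0};
            // raw[4] = 99;   // compiles fine; silently corrupts adjacent memory at run time
            (void)raw;

            std::vector<int> checked(4, 0);
            try {
                checked.at(4) = 99;   // the same off-by-one, but it throws immediately
            } catch (const std::out_of_range& e) {
                std::printf("out-of-range write caught at the fault: %s\n", e.what());
            }
            return 0;
        }
        ```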
  • I have always seen programming as something magical, but that's probably more because of lack of knowledge than because it really is magic. Maybe it's time to straighten out the myth behind big software projects and get a bit more grip on how it works and why. The intent is noble, and given the current state of software capabilities and performance in comparison to hardware, it's about time. Imagine having an OS that had evolved like your hardware? ...... Think *SCREEEEAAAAMMMMM* whooshing numbers like nothing you have EVER seen.
  • A standard printed book of time estimates for projects. The auto repair industry has standard estimates for certain repairs, why doesn't the software repair industry? I know they're worlds apart, but it sure would help out a little to be able to pull out a little book and say, well, you need a GUI interface consisting of 15 screens to maintain 20MB of data, it's going to be 10,000 hours for developing, testing and documenting. If you want to cut the documentation, we can do that, but you're really slitting your throat there.
    • Re:i want to see (Score:3, Insightful)

      by markmoss ( 301064 )
      If you've got the requirements well enough defined in terms of previous work that you can estimate accurately, then most likely all you've got to do is cut and paste the old code anyhow...
    • Re:i want to see (Score:3, Insightful)

      by wiredog ( 43288 )
      Well, read up on the PSP [cmu.edu] and TSP, some light reading [cmu.edu].

      I've been trained in that stuff. It's wonderful in theory. In practice? All the metrics only work if you are doing the same stuff you've done before. If you are doing something new, then they don't work. Which is why few people actually use them.

      Looks good on a resume, though.

  • Open source (Score:5, Interesting)

    by bunyip ( 17018 ) on Monday April 08, 2002 @10:04AM (#3302729)
    From the article:

    Michael Godfrey, a University of Waterloo scientist, is equally hesitant but still finds the Lehman approach useful. In 2000, Godfrey and a fellow Waterloo researcher, Qiang Tu, released a study showing that several open-source software programs, including the Linux kernel and fetchmail, were growing at geometric rates, breaking the inverse squared barrier constraining most traditionally built programs. Although the discovery validated arguments within the software development community that large system development is best handled in an open-source manner, Godfrey says he is currently looking for ways to refine the quantitative approach to make it more meaningful.

    It would have been interesting had they delved deeper into this finding. Yeah, I know, the true believers in open source all feel superior (we are, aren't we?), but exploring the reasons why it works would be interesting.

    Is it the large-scale peer-review process? Is it that we occasionally rewrite parts (filesystems, VMM, etc)? Something else?
    • The study by Mr. Godfrey and Mr. Tu can be found at http://plg.uwaterloo.ca/~migod/papers/iwpse01.pdf [uwaterloo.ca]. (4 pages in a PDF file).
    • Is it that we occasionally rewrite parts (filesystems, VMM, etc)?

      Bruce's Law: Every software module needs to be re-written every year.

      (Or perhaps it has a different name.)
    • Conway's Law states that the organisation of a software project will be congruent with the organisation of the people who built it. Commercial software is generally built by putting everyone together in a single location and treating all the developers as roughly equivalent and able to work on any part of the system. The result is a monolithic heap of code.

      Open source software OTOH is built by widely separated people with narrow bandwidth links between each other and only a shared vision of the Right Thing to guide them. The result, as predicted by Conway's law, tends to be highly modular architectures focussed around a few core protocols or APIs that capture the vision.

      Modular systems are inherently more flexible and reusable than monolithic systems because they exhibit low coupling between the modules. In contrast the monolithic software is more likely to have high coupling between modules, even though they are supposedly independent.

      (There is also a related concept of "cohesion", which is the extent to which the features of each module hang together as conceptual wholes. I suspect that OSS will show higher cohesion than closed source software)

      It would be interesting to get some statistics to test this theory. Does anyone know of any good software for measuring coupling in C code? I'd like to run some commercial and OSS software through it and see what it says.

      Paul.
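
      I don't know of a specific tool to recommend, but as a very crude sketch of the kind of measurement being asked for, one could treat each source file as a module and count its quoted #include references as a fan-out (coupling) proxy; the program below does only that and nothing more. A real coupling metric would need call-graph or symbol-level analysis, so take this strictly as a starting point.

      ```cpp
      // Very crude coupling proxy (a sketch, not a recommendation of any tool):
      // treat each source file named on the command line as a module and count its
      // quoted #include "..." lines as fan-out to other project files. System
      // headers in <...> are ignored as "external".
      #include <fstream>
      #include <iostream>
      #include <regex>
      #include <string>

      int main(int argc, char** argv) {
          const std::regex include_re(R"(^\s*#\s*include\s*"[^"]+")");

          for (int i = 1; i < argc; ++i) {
              std::ifstream in(argv[i]);
              std::string line;
              int fan_out = 0;
              while (std::getline(in, line)) {
                  if (std::regex_search(line, include_re)) ++fan_out;
              }
              std::cout << argv[i] << ": fan-out " << fan_out << "\n";
          }
          return 0;
      }
      ```

      Run it over a source tree with something like ./fanout src/*.c (hypothetical file names); a higher average fan-out is only a rough hint of tighter coupling.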

  • Good article; I think the description of the sociological basis of the "laws" is correct. My experience suggests that the slowest development paths are those that cross other people's areas.
    (And yes, I know about XP's "All code is shared.")

    As for the maintenance, it's my normal experience, but the projects I've been involved in may be atypical. (*cough*Canadian*cough*telecommunications*cough*giant*)
    We spend a *lot* of time reworking old code to (a) fix obscure bugs, many of which are slow leaks shown up by weeks serving live traffic, (b) adapt the code to support new releases of the underlying hardware product and (c) add new features to satisfy users.
  • by 3-State Bit ( 225583 ) on Monday April 08, 2002 @10:09AM (#3302758)
    Although the performance audit showed that IBM researchers were churning out code at a steady rate, Lehman found the level of debugging activity per individual software module to be decreasing at an equal rate; in other words, programmers were spending less and less time fixing problems in the code. Unless IBM programmers had suddenly figured out a way to write error-free code -- an unlikely assumption -- Lehman made a dire prediction: OS/360 was heading over a cliff. IBM, in stressing growth over source-code maintenance, would soon be in need of a successor operating system.
    Except that the "[dire] need of a successor operating system" isn't so dire at all: the world's richest man didn't get where he got by writing code that didn't need to be replaced by a successor operating system, did he? The whole premise is to produce something that works now, and when it stops working later, you sell a later version. Heck, just a couple of months ago, Billy announced that 92.3% of the calendar year would focus on new code, leaving the rest [slashdot.org] for the old.
    What's smarter: coding the Microsoft way, or coding a server that's been up since before Windows NT was released, without a patch in 7 years, handling half a megabit of data both upstream and down, every second of every day, forever? Where's the revenue?

    ~r~

    Note: the 92.3% figure might only be for the year 2002, with later years being still closer to 100%.
    • OS/360 was actually heading over a cliff. The various pieces of software did not work when they were put together. The OS was delivered years late and massively over budget. Many IBM 360's (costing six figures back when $1 was worth something) were delivered and then spent years simply running emulators for the old machines they replaced, because the native software wasn't ready.

      Yes, there were lots of things they could have done -- like define a subset of the original committee-designed bloated specification, get that working, then start adding features. But the manager (Fred Brooks) didn't know that, yet, and didn't even know the project was in trouble until it was impossible to deliver anything at all on deadline. Afterwards, he wrote a book, The Mythical Man-Month, which has become a standard text for large-project management. But he learned how by doing it wrong, more massively than anyone ever had before...
    • by alispguru ( 72689 ) <bob@bane.me@com> on Monday April 08, 2002 @11:02AM (#3303040) Journal
      IBM missing premise:
      Some of our applications must have 99.999% uptime. Therefore, the whole system must be designed with this in mind.
      Microsoft missing premise:
      If our applications crash, it's no big deal. Feature lists generate revenue, so that's what we do.
      Just making explicit what you said implicitly...
  • From the article:
    "In software engineering there is no theory,"

    I don't buy that... at least not completely. I would say something more like, "In software engineering, theory is extremely underutilized."

    I believe there are many instances of engineered software, but not necessarily high-profile stuff. A lot of DoD-conscripted code may never see the civilian light of day, but there are procedures and documentation requirements that, flawed or not, enforce certain practices. Can we call that "theory"? Anyhow, defense suppliers can afford the extra development time, 'cause the government is forking over big bucks for the code to be right.

    For the mainstream (read desktop) apps, where all the money is, the time to market and feature pressures will continue to suppress even the best "unified theory" of software development.
    • Yeah, that line bothered me too. But then, the phrase "software engineering" bothers me, not because it's not meaningful, but because the way it's often used implies that engineering is all there is to writing good programs. In the strictest sense, engineering (of any kind) isn't about theory -- but science is, which is one reason why I like the phrase "computer science" a lot more than some people seem to. A true computer scientist should be a good engineer, but also more -- and when you want a system that really works, and will continue to work over the course of years, that's what you need.
  • Manny Lehman is credited with coining the expression "Software Engineering". About 1968, I think. See also the website of the company he founded Imperial Software Technology [ist.co.uk].
  • We are only Human. (Score:2, Interesting)

    by gnalre ( 323830 )
    I was interested in the fact that some researchers have only recently come to the conclusion that software is written by people.

    It is questionable how useful purely statistical methods are in these situations.

    One thing I would be interested in knowing is how staff turnover affects development. For maintainable software to be possible, a consistent approach must be maintained when adding new functionality; this usually requires a deep understanding of a large code base, and if your programmers keep changing, the newbies may not follow the rules.
    • One thing I would be interested in knowing is how staff turnover affects development.

      Look at Software Project Dynamics: An Integrated Approach by Tarek Abdel-Hamid (ISBN 0-13-822040-9). In it he builds a model of the software development process and shows many remarkable results. Things like a high turnover rate can completely destroy productivity. He also shows that Brooks' law is a bit simplistic (You CAN add people to a late project, but it has to be done very carefully).

      Even though it's a dozen years old, it's still a very good book. It's a shame more people don't know about this. The research, as far as I can tell, still holds up well.
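
      To make the "you CAN add people, but carefully" point concrete, here is a toy simulation -- emphatically not Abdel-Hamid's model, and every constant in it is invented -- in which newcomers produce little at first and consume veterans' time for training, and each extra person adds pairwise communication overhead. With these made-up numbers, adding four people early shortens the project, while adding them very late finishes slightly later than not adding them at all, which is the Brooks' Law shape.

      ```cpp
      // Toy illustration only -- this is NOT Abdel-Hamid's model, and every constant
      // below is invented. Newcomers produce little at first and consume veterans'
      // time for training, and every team member adds pairwise communication
      // overhead, so when and how people are added matters more than raw headcount.
      #include <cstdio>

      // Weeks needed to finish `work` units with `veterans`, adding `extra` people at week `when`.
      int weeks_to_finish(double work, int veterans, int extra, int when) {
          double remaining = work;
          int week = 0;
          while (remaining > 0 && week < 1000) {
              ++week;
              int team = veterans + ((extra > 0 && week >= when) ? extra : 0);
              double output = 0.0;
              for (int p = 0; p < team; ++p) {
                  bool newcomer = (p >= veterans) && (week - when < 8);  // 8-week ramp-up
                  output += newcomer ? 0.2 : 1.0;   // units of work per person-week
                  if (newcomer) output -= 0.5;      // veterans' time lost to training them
              }
              output -= 0.02 * team * (team - 1) / 2.0;   // pairwise communication overhead
              if (output < 0) output = 0;
              remaining -= output;
          }
          return week;
      }

      int main() {
          const double work = 400.0;  // arbitrary "units" of work
          std::printf("6 devs, no additions:        %d weeks\n", weeks_to_finish(work, 6, 0, 0));
          std::printf("6 devs, +4 added in week 10: %d weeks\n", weeks_to_finish(work, 6, 4, 10));
          std::printf("6 devs, +4 added in week 65: %d weeks\n", weeks_to_finish(work, 6, 4, 65));
          return 0;
      }
      ```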

  • Included within the new set of laws are the Law of Continuing Growth ("The functional capability of E-type systems must be continually increased to maintain user satisfaction over the system lifetime")

    - The functional capability of the OS too, since new hardware keeps coming out

    and the Law of Declining Quality ("The quality of E-type systems will appear to be declining unless they are rigorously adapted, as required, to take into account changes in the operational environment").

    Isn't that exactly what is happening to Windows? And it's why Linux is so successful: open-source projects like fetchmail et al. are more linear in their development, and all users get a stab at getting the environment right.

    But users who aren't prepared to do any work to improve the environment on their own PC are always going to lose. Then again, it's the same as people who keep their desks tidy and optimised for work versus those who don't. The difference on your virtual desktop is that you can't easily hope someone else will tidy it for you...:)

  • it just goes to show that 99% of the work in creating software is in the design.
    you have to try to map out not only what you will need but what you might need in the future.
    yes, it's a near-impossible task but it's the only way to avoid automatically committing yourself to an endless cycle of patches and hacks.
    the good part is, if you can plan the project well enough then the actual coding becomes nearly trivial.
    the problem arises when the boss says 'i don't care about scalability or flexibility, i just want code now' and i have to try to explain that i'm trying to save his ass 8 months down the line when clients (not to mention the boss himself) bombard us with feature requests, etc.
    • by elflord ( 9269 ) on Monday April 08, 2002 @10:40AM (#3302897) Homepage
      it just goes to show that 99% of the work in creating software is in the design. you have to try to map out not only what you will need but what you might need in the future.

      Not only is this not true, it's impossible to do this in practice. If you do this, you'll find that you still blow a lot of time on design, development takes longer, because your design is unnecessarily abstract, and your design proves inadequate for something that you need to implement further down the road. Requirements change, and this has consequences for the design. The best one can hope for is that the basic architecture is robust enough that it doesn't require a complete upheaval.

      What is necessary is a method for changing design gracefully. "Refactoring" is the best source I've seen that addresses this. Basically, you change methodically, and you test.
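
      A minimal sketch of that discipline, with invented names and numbers: pin the observable behaviour down with a unit test, then restructure the internals in small steps while the test stays green.

        # Invented example of refactoring under test: the behaviour is pinned by the
        # test below, so the internals can be reshaped without changing what callers
        # see (here the discount rule has been extracted into its own function).
        import unittest

        def total_price(items):
            return sum(apply_discount(price, qty) for price, qty in items)

        def apply_discount(price, qty):
            # Hypothetical rule: 10% off when buying 10 or more of an item.
            subtotal = price * qty
            return subtotal * 0.9 if qty >= 10 else subtotal

        class TotalPriceTest(unittest.TestCase):
            def test_bulk_discount(self):
                self.assertAlmostEqual(total_price([(2.0, 10), (5.0, 1)]), 23.0)

        if __name__ == "__main__":
            unittest.main()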

      • to me, that makes no sence. 99% of the work in making a good building or a good machine is in the design. design can take a year or more on large buildings, but the actual putting up of the structure takes 6 months or so.

        design is everything. that is where you try to predict all the problems that might occur.

        the off-the-cuff stuff is just laziness. if you can work out all the issues, you can then step it into production by mapping out how you solve the problem, then re-examine any issues that might come up. once all that is done, you should have a descriptive enough approach that you could hand it to anyone with the ability to write the code you need and have them implement it. if you have a well-planned and descriptive design, the coders do not necessarily need to have anything to do with planning.
        • The basic issue is changing requirements. A contractor building high-rise apartments does not have to worry about the customer coming around when it's half built to look at it and say, "You know, I think I want a hospital instead." Programmers quite often have to deal with customers that are just about that confused -- they can't begin figuring out what they really want until they see what they asked for on the screen.

          More than that, software vendors routinely write a program, release it, then add features so they can sell it again. It's as if the builder has finished the apartment building, and now they want a factory tacked onto the north side and a Wendy's onto the east. Next year, add a hospital wing to the west. Repeat once a year for 10 years and you get one hell of a mess, but how else would M$ keep a continuing revenue stream from the same OS and Office programs?
        • to me, that makes no sence.

          I think you've invented the literary onomatopoeia.

        • design is everything. that is where you try to predict all the problems that might occur.

          Predicting problems that might occur is one thing, predicting changing requirements is another. You can reasonably anticipate problems based on prior experience. But it's difficult to guess at changing requirements, especially when the requirements come from an external source.

      • What is necessary is a method for changing design gracefully. "Refactoring" is the best source I've seen that addresses this. Basically, you change methodically, and you test.

        I was talking about this with a friend the other day. Wouldn't it be nice if a senior software 'architect' could maintain a unit-level view AND current code at the same time? That way his busy programmers could refactor all they wanted, as long as they didn't overstep their unit bounds but at the same time improve the product. The architect could look at the project at different levels of abstraction (units, subunits) to make sure the programmers aren't getting off track.

        Probably the hardest thing about using the iterative or refactoring methodology is knowing what your architecture looks like at any given time. You design a great, flexible architecture for the first iteration, but after several rewrites you may not know where you are in terms of the big picture. Surely a tool that spits out UML-like diagrams of the current code would be very useful to spot architecture flaws introduced during the refactoring process. Effective use of design patterns may also help. Is it impossible?

    I've seen some work done by Rational in terms of code generation with .NET, but wouldn't it be nice if you could get up-to-date architecture diagram synchronization with code directly from a source versioning tool (e.g. CVS, SourceSafe)?
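
    Something along those lines can be faked with very little machinery. Here is a rough sketch (assuming a plain Python codebase and Graphviz for rendering, both of which are my own choices, not anything Rational or a versioning tool provides) that walks a source tree and emits a module-level dependency graph you could regenerate and diff after each iteration.

      # Rough sketch: print a Graphviz "dot" graph of module-level imports for a
      # Python source tree, so the high-level structure can be eyeballed after each
      # refactoring pass. Render with e.g. `python deps.py src | dot -Tpng > deps.png`.
      import ast
      import os
      import sys

      def module_imports(path):
          with open(path, encoding="utf-8") as f:
              tree = ast.parse(f.read(), filename=path)
          for node in ast.walk(tree):
              if isinstance(node, ast.Import):
                  for alias in node.names:
                      yield alias.name.split(".")[0]
              elif isinstance(node, ast.ImportFrom) and node.module:
                  yield node.module.split(".")[0]

      def main(root):
          print("digraph deps {")
          for dirpath, _, filenames in os.walk(root):
              for name in filenames:
                  if name.endswith(".py"):
                      mod = os.path.splitext(name)[0]
                      for dep in sorted(set(module_imports(os.path.join(dirpath, name)))):
                          print(f'  "{mod}" -> "{dep}";')
          print("}")

      if __name__ == "__main__":
          main(sys.argv[1] if len(sys.argv) > 1 else ".")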
  • Creating common APIs allows separate development projects to proceed at their own pace. You don't need OO for this, but it helps (a small sketch follows below).

    I think one of the reasons that Linux has been so successful is because Linus decided long ago to take a modular approach to designing his monolithic kernel.

    -josh
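
    (A made-up sketch of the common-API point, for the curious: once callers depend only on an agreed interface, each implementation behind it can be developed and swapped at its own pace. The names here are invented for illustration.)

      # Illustration only: callers program against BlockDevice, so the two
      # implementations below can evolve independently of each other.
      from abc import ABC, abstractmethod

      class BlockDevice(ABC):
          @abstractmethod
          def read_block(self, n: int) -> bytes: ...

      class RamDisk(BlockDevice):
          def __init__(self, blocks: int, block_size: int = 512):
              self._data = [bytes(block_size) for _ in range(blocks)]
          def read_block(self, n: int) -> bytes:
              return self._data[n]

      class LoggingDisk(BlockDevice):
          def __init__(self, inner: BlockDevice):
              self._inner = inner
          def read_block(self, n: int) -> bytes:
              print(f"read_block({n})")   # this wrapper could come from another team
              return self._inner.read_block(n)

      disk = LoggingDisk(RamDisk(blocks=8))
      assert len(disk.read_block(0)) == 512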
  • by Anonymous Coward on Monday April 08, 2002 @11:03AM (#3303051)
    Back in the early 1980s I headed up a small team that developed 'industrial strength' applications.
    Our firm licenced this software to major manufacturing firms with a Money Back Guarantee. As in, "If you are not satisfied, for any reason, we will either fix the problem or give you back your money. Your choice." We were never asked for a refund.

    It was semi-open source. You could have the source any time you wanted, but asking for the source voided your warranty, since problems in your data might have been caused by your own temporary code changes.

    Funny thing. I've had that on my resume for many years, but no prospective employer has ever asked how I did it.

    No one has hired me specifically to help them produce similar quality code. Much of the time their reaction to my resume is, 'but you don't know c++' (or their other favorite). I know enough about c++ to know that I want to stay away from that second generation language for all but the most specialized situations.

    I have also been told, on numerous occasions, that I'm not qualified to lead a particular project because I lack experience managing the large team that will be needed. I've never gained that experience because I've never needed a large team to accomplish anything.

    As an MBA, as well as being an application designer & a coder, I know that large teams do have a place -- mostly where you have a blank cheque and are earning a percentage of the total billing. (:-)
    • You're absolutely right about this. I'm another semi-old-timer. In the early 1980's, I was on the team (six people, all with developer background) to write a bisynchronous communications package (HASP station emulator). We had a standing offer--anybody who could find a bug would get a free dinner at any restaurant. We only had to pay off once.

      Nobody seems to care about doing this anymore, or maybe they never did in the first place, and we were all just naive.

  • by crouchingpenguin ( 322034 ) on Monday April 08, 2002 @11:11AM (#3303083)
    A quick warning... I consider myself a relative newborn in the world of software development. I present these opinions under the consideration that my opinions can change at any moment. =]

    A lot of the dire predictions of software atrophy and such are a result of applying the wrong methodology to a project. Yes, there are uses for software engineering, but I think this approach is overkill even for large-scale projects. Check out Software Craftsmanship: The New Imperative [barnesandnoble.com] for a different perspective, one I think is in need of serious consideration. The gist is returning to the days of master craftsmen and apprenticeships. This focuses a bit more on the learning aspect than actual development methodologies, but you can always go to The Pragmatic Programmer [barnesandnoble.com] to fill in that gap.

    "As time passes, the system becomes less and less well-ordered. Sooner or later the fixing ceases to gain any ground. Each forward step is matched by a backward one. Although in principle usable forever, the system has worn out as a base for progress."

    This is where "refactoring" (see Fowler's Refactoring [barnesandnoble.com]) really shines. I find it difficult to believe that refining the software base is not progress. Start with an initial revision where the code fulfils its contract (if you're into design by contract), then refactor the body of the function/method for speed or elegance. Then you can run your unit tests on the function/method to check that the refactoring session did not break any of the design contracts (whew).
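
    A small sketch of what that loop can look like, with an invented contract and invented numbers: state the pre- and postconditions as explicit checks and keep a unit test alongside them, so a refactoring pass has to keep both happy.

      # Invented example of "function by contract" plus a unit test guarding a
      # refactoring pass: the assertions and the test stay fixed while the body
      # is rewritten for speed or elegance.

      def moving_average(samples, window):
          # Precondition (part of the contract)
          assert window > 0 and len(samples) >= window, "need at least one full window"
          result = [sum(samples[i:i + window]) / window
                    for i in range(len(samples) - window + 1)]
          # Postcondition: one value per full window position
          assert len(result) == len(samples) - window + 1
          return result

      def test_moving_average():
          assert moving_average([1, 2, 3, 4], 2) == [1.5, 2.5, 3.5]

      test_moving_average()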

    I think they may be trying to restate the broken window theory (see Pragmatic Programmer), where a broken window (or bug) in a building (or system) leads to dilapidation elsewhere in the building (or system).

    And then there are the agile methods [agilemanifesto.org], including XP [extremeprogramming.org]. I think these answer a lot of the limitations and issues with Software Engineering practices. Interacting with clients (having a client there during each iteration) gives you the benefit of almost real-time feedback so that you can update your user stories on the fly, etc.

    Without rambling on any further, my point is not to spend too much time looking for a specific unified theory. Read up on all the ideas, methods, and theories. Take the best parts from each, then crank the knob all the way up (if I may borrow that from XP =] ). Don't let anyone tell you there is a science to software development that is easy to reproduce, and that you are just a link in the overall chain. You practice and perform a craft. Enjoy it!
    • As time passes, the system becomes less and less well-ordered. Sooner or later the fixing ceases to gain any ground. Each forward step is matched by a backward one. Although in principle usable forever, the system has worn out as a base for progress.

      We had a case where a system no longer proved amenable to feature addition or continual improvement to match the changing operational and customer requirements. In the end, refactoring the codebase to match the changing production requirements would have cost more than rewriting the system using more modern libraries, methodologies and frameworks. It got rewritten and the old system phased out.

      It wasn't a case of "fixing" inherently broken software; it worked perfectly well. It was just that the operational flow it supported changed due to new customers and more efficient management procedures.

      Incidentally, we have found that with each major rewrite of that system (there have been two) there has been an immediate growth spurt in customers. I am not sure if it is because it looks like something new, or because the software better matches the operational requirements, or because of increasing feature addition. Either way, the last two rewrites have paid for themselves almost immediately through the additional customers the new software brought in.

      mocom--

      • Your example shows that there are sometimes benefits to a rewrite, rather than just refactoring or updating the existing codebase.

        Your requirements (features and design) outgrow the current application and warrant the need for a new application that encompasses the old application's functionality as well as its name. So really you have two separate applications that share functionality and a name.

        But isn't that in itself refactoring? Rewriting code, keeping the functionality of the original while improving the internals?
  • by catfood ( 40112 ) on Monday April 08, 2002 @01:59PM (#3304101) Homepage

    From the article:

    Michael Godfrey, a University of Waterloo scientist, is equally hesitant but still finds the Lehman approach useful. In 2000, Godfrey and a fellow Waterloo researcher, Qiang Tu, released a study showing that several open-source software programs,
    including the Linux kernel and fetchmail, were growing at geometric rates, breaking the inverse squared barrier constraining most traditionally built programs.

    Is fetchmail [tuxedo.org] complex enough that it needs to be growing geometrically? I mean yeah, fetchmail does a lot, and I do know what "geometric" means. Still, I doubt the world of email is changing fast enough that you'd want to choose that as your example of out-of-control software maintenance.

    [Insert obligatory ESR goading.]
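
    For what it's worth, the two growth shapes being contrasted are easy to put side by side. The sketch below is only my reading of the article's description (each release's increment shrinking roughly as 1/size^2, versus multiplying by a constant factor); the constants are arbitrary.

      # Rough comparison of the two growth shapes mentioned in the article (my own
      # interpretation, arbitrary constants): "inverse square" growth, where each
      # release adds an increment proportional to 1/size^2, versus geometric growth.
      E = 500_000.0                      # arbitrary effort constant
      size_inv, size_geo = 100.0, 100.0

      print("release  inverse-square  geometric")
      for release in range(1, 11):
          size_inv += E / (size_inv ** 2)    # increments shrink as the system grows
          size_geo *= 1.25                   # fixed 25% growth per release
          print(f"{release:7d}  {size_inv:14.0f}  {size_geo:9.0f}")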

  • Where I work, it has been a commonly held belief that all software evolves until such time as it can send and receive email. If it doesn't do this, it isn't complete. :)

    Jason Pollock

"If I do not want others to quote me, I do not speak." -- Phil Wayne

Working...