A Unified Theory of Software Evolution
jso888 writes "Salon has a nice article today on Meir Lehman's work on how software evolves and is developed. Lehman's investigation of the IBM OS/360 development process became the foundation for Brooks' Law: "Adding manpower to a late software project makes it later." He is hopeful that his work will make software development less of an art and more of an engineering science."
Evolution is a MYTH!!! (Score:3, Funny)
Please check your crackpot theories and pseudo-science at the door.
Thank you.
Read the bloody dictionary! (Score:1, Informative)
evolution
n.
A gradual process in which something changes into a different and usually more complex or better form. See Synonyms at development.
a. The process of developing.
b. Gradual development.
Seems like quite an appropriate term for 90% of the software projects I have ever worked on.
Re:Evolution is a MYTH!!! (Score:1)
I don't know whether to laugh or cry.
Re:Evolution is a MYTH!!! (Score:2, Insightful)
Re:Evolution is a MYTH!!! (Score:3, Informative)
This is called error seeding and is used to evaluate the performance of testing systems (including the people doing it).
On the other hand, I dimly remember something about not doing it with production code....
Re:Evolution is a MYTH!!! (Score:2)
With all the idiotic things being written today, nobody understands a satiric comment anymore.
And the "+1 Informative" really concerns me. I expected a "+1 Funny", if anything at all.
Re:Evolution is a MYTH!!! (Score:2)
1) Software doesn't evolve by chance, folks, it is DESIGNED by its CREATORS. Please check your crackpot theories and pseudo-science at the door. /. is a site for SERIOUS INTELLECTUAL DISCUSSION.
So I take it you are in favor of creationism?
Sorry, but the parallels were so obvious ... ;)
2) For extra ammo, Lehman also has expanded the graphs and data from his original studies in the 1970s. Taken together, they show most large software programs growing at an inverse square rate -- think of your typical Moore's Law growth curve rotated 180 degrees -- before succumbing to over-complexity.
I am not holding my breath waiting for Microsoft to keel over into a monstrous pile of cyberwreckage any time soon.
Re:Evolution is a MYTH!!! (Score:2)
Since you ask so politely ....
well let's see:
This is a similarity that amused me slightly. Now, of course, it relates to the fact that creationism is often characterized as a crackpot theory, and so the cognitive dissonance of the two juxtaposed with each other seemed funny. Plus the fact that the author obviously would not intend such a similarity, as it would be destructive to his/her/its very viewpoint.
As both probably are taking the situation just a tad too seriously.
And these things are what made it funny to me.
I make no claims to my education or my intelligence. A man has got to know his limitations.
Re:Evolution is a MYTH!!! (Score:2)
Re:Evolution is a MYTH!!! (Score:3, Informative)
I'm not sure whether this was supposed to be funny or whether other readers are interpreting it as funny. There have also been a few stabs at the parallels with life (evolution vs creation, etc). Hrm...whatever. From the article: "The gap between biological evolution and artificial systems evolution is just too enormous to expect to link the two,"
In all seriousness, I've seen so many project managers use evolution (not the theory explained in the article, however) as some sort of methodology for their projects, and I have not seen any of those projects truly succeed. The idea that you throw something, anything, out there, find out what's good/bad with it, then re-iterate the design and development based on findings is such a random and expensive process. I've seen so many programmers put in half-assed functionality, especially on front-end code, just so that testers and "usability" experts will find the problems and they can fix them in the next release. This is like throwing a chunk of randomly chipped wood out and hoping that others can tell you how to sand it down to something usable.
Cooper makes this analogy in "The Inmates are Running the Asylum" (link here [amazon.com]) and bashes project teams that take on this sort of process of evolution. He proposes a process of almost completely up-front design, building to a theoretical user persona and culling out complexity by ditching features that will never be used by this persona.
Now Cooper's views don't necessarily contradict Lehman's (at least from what I've seen in the article). In fact at a glance they seem to blend in nicely.
From the article, again: Figure out how to control the various feedback loops -- i.e. market demand, internal debugging and individual developer whim -- and you can stave off crippling over-complexity for longer periods of time
It's clear that he means that we, as programmers, should be willing to throw away a shitload of code. I agree with this. I think there's a huge belief in re-use (I tend toward it myself) among programmers, for both practical and personal (pride... having spent weeks on certain code) reasons. But there are so many cases where the re-use of a small feature among others in bloated code can really complicate and bog down the overall code-base, or where the functionality of certain re-used code doesn't really fit, but so much investment has been made that it might as well be re-used.
Developers really do need to listen to the feedback provided by the marketplace and other forces. I'm not certain if the unified theory is so unified, but it's a valid perspective and blends in with other published sentiments on software development methodology.
'nuff rambling...
Brooks' Law (Score:2, Interesting)
This can be simplified: "Adding manpower to a software project makes it later."
There's rarely that many programmers needed for a given task anyway. You need a project leader and lots of monkeys to test it... very few projects should have more than 10 programmers (if any).
Re:Brooks' Law (Score:4, Funny)
You realize you just suggested that very few software projects should have any programmers. How is the project going to get completed without anyone working on it?
Re:Brooks' Law (Score:3, Funny)
My boss seems to think that having a lot of meetings about it will do the trick.
Re:Brooks' Law (Score:2)
There's a guy I work with, and that should be his
Re:Brooks' Law (Score:2)
At some point you will have meetings to discuss the meetings.
At a later point you will do nothing but have meetings, but your project will still be delayed further and further....
Re:Brooks' Law (Score:1)
Maybe there is a deeper truth in it. Maybe many software projects would be better off without programmers?
Re:Brooks' Law (Score:2)
But seriously, one interesting implication is that forks/parallel projects can be a Good Thing! If you have two 10-person projects, and throw one away, you're better off than with a 20-person project.
(Linux filesystems seem to have taken this to heart...
Re:Brooks' Law (Score:3, Funny)
Read my lips: E-VO-LU-TION
Example:
Start with "printf("Hello World\n");" and leave it in a warm, wet place for a few months, feed it with some
I have a strong belief that's what they did with Mozilla
Re:Brooks' Law (Score:2, Insightful)
Re:Brooks' Law (Score:3, Insightful)
The number of programmers needed on a project depends upon the number of software modules in said project. Each programmer works on their own module and coordinates with the other programmers and project managers so that the modules integrate and talk to each other properly. I am not a project manager, so I do not have the magic formula, but there needs to be some serious research in the IT industry into how many programmers are needed per project for the number of independent sections or modules of software being created.
Then and only then will you have a situation of better utilized output over a large group of programming talent.
10 may be too few programmers for some huge program and too many for plenty of other projects.
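Brooks' own back-of-the-envelope argument is worth keeping in mind here: coordination cost grows not with headcount but with the number of pairs of people who may need to talk. A minimal sketch of that arithmetic (the team sizes below are purely illustrative, not a sizing formula):

    #include <iostream>

    // Brooks' intercommunication argument in miniature: if every pair of
    // programmers may need to coordinate, the number of possible
    // communication channels grows roughly quadratically with team size.
    int channels(int programmers) {
        return programmers * (programmers - 1) / 2;
    }

    int main() {
        for (int n : {2, 5, 10, 20, 50}) {
            std::cout << n << " programmers -> " << channels(n)
                      << " possible communication channels\n";
        }
    }

Going from 10 to 20 programmers roughly quadruples the number of channels (45 to 190), which is one way of reading the "more than 10" rule of thumb above.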
Re:Brooks' Law (Score:2)
As long as there's a product management group who can drop "Oh by the way" new requirements on the product a week after code freeze, there'll always be a problem.
Re:Brooks' Law (Score:2)
The funny thing is that I have seen two other problems arise from Brooks' law.
1) Not wanting to justify more resources for a project, the managers simply roll new features into existing modules that should really be separated out. The same number of programmers end up getting hit twice as hard.
2) Managers have actually quoted this law to others as an excuse for not assigning more resources to a project. Many times in smaller software houses the problem you have is getting the programmer resources for a project at all. Only the larger houses run into the problem of just throwing people at a problem to make their deadlines and not having that work the way they wanted it to.
Re:Brooks' Law (Score:2)
I have to disagree there. A project without any programmers will certainly face some difficulties in the end.
But 10 is a good upper bound. That is still small enough that they can all know each other and talk to each other when questions come up.
Re:Brooks' Law (Score:2)
Brooks' law can't be used (Score:4, Insightful)
Re:Brooks' law can't be used (Score:4, Funny)
True. I used to file status reports using Zeno's Work Estimation. On each report I just halved the percentage of remaining work.
Re:Brooks' law can't be used (Score:2)
I don't understand your point. If Brooks' Law is valid, then it's useful not only in retrospect but at any time, I think.
While it may be true that most software projects have no idea how far behind they really are, that has little to do with Brooks' Law. Brooks' Law doesn't say "adding manpower to a project that might be late will only make it late"; it says "adding manpower to a project that's already late will only make it later". Certainly, most software projects know if they are already late; they may not know by how much, but if Brooks' Law holds (in retrospect or not), do they really want to be even later?
Now if you are referring to the fact that projects sometimes miss milestones and are predicted to be late, when in fact they won't be, what's the point of adding manpower in this situation?
It seems to me that Brooks' Law holds at all phases of a project's lifetime. It even holds at the project's start. If the project is already late before it's started, it will certainly get later if you staff it. :-) Better to start a different project with a more realistic deadline, no?
Having said this, I understand that some people have done some excellent work on how you can avoid Brooks' Law to some extent. This work spelled out exactly how to add manpower to a late project so that it actually helps. I don't have the reference handy, though.
Can We Say... (Score:1)
"Adding manpower to a late software project makes (Score:2, Funny)
Seek and thou shalt find??? (Score:1, Funny)
Poor, poor man; he'll never find it I'm afraid!
mirror anyone? (Score:1)
i found it! :)) (Score:1, Redundant)
April 8, 2002 | The office of Meir "Manny" Lehman is a cozy one. Located on the outer edge of the Imperial College of Technology campus in South Kensington, London, it offers room for all the basic amenities: a desk, two chairs, a Macintosh G4 and a telephone. Still, for a computer scientist nearing the end of a circuitous 50-year career, the coziness can be a bit confining.
"You'll have to forgive me," apologizes Lehman at one point, sifting through a pile of research papers on a nearby shelf. "Since I lost my secretary, I can't seem to find anything."
The pile, a collection of recently published papers investigating the topic of software evolution, a topic Lehman helped inaugurate back in the 1970s, is something of a taunting tribute. Written by professional colleagues at other universities, each paper cites Lehman's original 1969 IBM report documenting the evolutionary characteristics of the mainframe operating system, OS/360, or his later 1985 book "Program Evolution: Processes of Software Change," which expands the study to other programs. While the pile's growing size offers proof that Lehman and his ideas are finally catching on, it also documents the growing number of researchers with whom Lehman, a man with dwindling office space and even less in the way of support, must now compete.
"And to think," says Lehman, letting out a dry laugh. "When I first wrote about this topic, nobody took a blind bit of notice."
Software evolution, i.e. the process by which programs change shape, adapt to the marketplace and inherit characteristics from preexisting programs, has become a subject of serious academic study in recent years. Partial thanks for this goes to Lehman and other pioneering researchers. Major thanks, however, goes to the increasing strategic value of software itself. As large-scale programs such as Windows and Solaris expand well into the range of 30 to 50 million lines of code, successful project managers have learned to devote as much time to combing the tangles out of legacy code as to adding new code. Simply put, in a decade that saw the average PC microchip performance increase a hundredfold, software's inability to scale at even linear rates has gone from dirty little secret to industry-wide embarrassment.
"Software has not followed a curve like Moore's Law," says University of Michigan computer scientist John Holland, noting the struggles of most large-scale software programs during a 2000 conference on the future of technology. "In order to make progress here it is not simply a matter of brute force. It is a matter of getting some kind of relevant theory that tells us where to look."
For Lehman, the place to look is within the software development process itself, a system Lehman views as feedback-driven and biased toward increasing complexity. Figure out how to control the various feedback loops -- i.e. market demand, internal debugging and individual developer whim -- and you can stave off crippling over-complexity for longer periods of time. What's more, you might even get a sense of the underlying dynamics driving the system.
Lehman dates his first research on the topic of software evolution back to 1968. That was the year Lehman, then working as a researcher at IBM's Yorktown Heights facility, received an assignment to investigate IBM's internal software development process. Managers at rival Bell Labs had been crowing about per-developer productivity, and IBM managers, feeling competitive, wanted proof that IBM developers were generating just as many lines of code per man-year as their AT&T counterparts.
Lehman looked at the development of OS/360, IBM's flagship operating system at the time. Although the performance audit showed that IBM researchers were churning out code at a steady rate, Lehman found the level of debugging activity per individual software module to be decreasing at an equal rate; in other words, programmers were spending less and less time fixing problems in the code. Unless IBM programmers had suddenly figured out a way to write error-free code -- an unlikely assumption -- Lehman made a dire prediction: OS/360 was heading over a cliff. IBM, in stressing growth over source-code maintenance, would soon be in need of a successor operating system.
Although IBM executives largely ignored the report, Lehman's prediction was soon borne out. By 1971, developers had encountered complexity problems while attempting to install virtual memory into the operating system, problems which eventually forced the company to split the OS/360 code base into two, more easily manageable offshoots. The linear growth curve that seemed so steady in the 1960s suddenly looked like the trail of a test missile spiraling earthward.
Lehman's report would eventually earn a small measure of fame when University of North Carolina professor and former OS/360 project manager Frederick P. Brooks excoriated the IBM approach to software management in his 1975 book "The Mythical Man Month." Using Lehman's observations as a foundation for his own "Brooks Law" tenet -- "adding manpower to a late software project makes it later" -- Brooks argued that all software programs are ultimately doomed to succumb to their own internal inertia.
"Less and less effort is spent on fixing original design flaws; more and more is spent on fixing flaws introduced by earlier fixes," wrote Brooks. "As time passes, the system becomes less and less well-ordered. Sooner or later the fixing ceases to gain any ground. Each forward step is matched by a backward one. Although in principle usable forever, the system has worn out as a base for progress."
By 1975, Lehman, with the help of fellow researcher Laszlo Belady, was well on the way to formulating his own set of laws. A quarter century after their creation, the laws read like a mixture of old developer wisdom and common textbook physics. Take, for example, Lehman's "Second Law" of software evolution, a software reworking of the Second Law of Thermodynamics.
"The entropy of a system increases with time unless specific work is executed to maintain or reduce it."
Such statements put Lehman, who would leave IBM to take a professorship at Imperial College, into uncharted waters as a computer scientist. Halfway between the formalists, old-line academics who saw all programs as mathematical proofs in disguise, and the realists, professional programmers who saw software as a form of intellectual duct tape, Lehman would spend the '70s and '80s arguing for a hybrid point of view: software development could be predictable if researchers were willing to approach it at a systems level.
"As I like to say, software evolution is the fruit fly of artificial systems evolution," Lehman says. "The things we learn here we can reapply to other studies: weapon systems evolution, growth of cities, that sort of thing."
That Lehman conspicuously leaves out biological systems is just one reason why his profile has slipped over the last decade. At a time when lay authors and fellow researchers feel comfortable invoking the name of Charles Darwin when discussing software technology, Lehman holds back. "The gap between biological evolution and artificial systems evolution is just too enormous to expect to link the two," he says.
Nevertheless, Lehman aspires to the same level of intellectual impact. While he was in retirement during the early 1990s, his early ideas jelled into one big idea: What if somebody were to formulate a central theory of software evolution akin to Darwin's theory of natural selection? In 1993, Lehman took an emeritus position at Imperial College and began work on the FEAST Hypothesis. Short for Feedback, Evolution and Software Technology, FEAST fine-tunes the definition of evolvable software programs, differentiating between "S-type" and "E-type": S-type or specification-based programs and algorithms being built to handle an immutable task, and "E-type" programs being built to handle evolving tasks. Focusing his theory on the larger realm of E-type programs, Lehman has since expanded his original three software laws to eight.
Included within the new set of laws are the Law of Continuing Growth ("The functional capability of E-type systems must be continually increased to maintain user satisfaction over the system lifetime") and the Law of Declining Quality ("The quality of E-type systems will appear to be declining unless they are rigorously adapted, as required, to take into account changes in the operational environment"). For added measure, Lehman has also thrown in the Principle of Software Uncertainty, which states, "The real world outcome of any E-type software execution is inherently uncertain with the precise area of uncertainty also unknowable."
While the new statements still read like glossed-over truisms, Lehman says the goal is to get the universal ideas on paper in the hopes that they might lead researchers to a deeper truth. After all, saying "objects fall down instead of up" was a truism until Sir Isaac Newton explained why.
"Whenever I talk, people start off with blank faces," Lehman admits. "They say, 'But you haven't told us anything we didn't already know.' To that I say, there's nothing to be ashamed of in coming up with the obvious, especially when nobody else is coming up with it."
For extra ammo, Lehman also has expanded the graphs and data from his original studies in the 1970s. Taken together, they show most large software programs growing at an inverse square rate -- think of your typical Moore's Law growth curve rotated 180 degrees -- before succumbing to over-complexity.
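One concrete reading of "growing at an inverse square rate": the size added per release shrinks roughly as the square of the current size. A toy sketch of that curve; the starting size and the constant E below are invented for illustration, not fitted to anyone's data:

    #include <iostream>

    // Sketch of the inverse-square growth reading of Lehman's data: the
    // increment from release i to release i+1 is roughly E / S_i^2, so
    // absolute growth slows as the system gets larger.
    int main() {
        double size = 1000.0;  // hypothetical size of release 1, in modules
        double E = 5.0e7;      // hypothetical per-system constant
        for (int release = 1; release <= 15; ++release) {
            std::cout << "release " << release << ": ~"
                      << static_cast<long>(size) << " modules\n";
            size += E / (size * size);  // S_{i+1} = S_i + E / S_i^2
        }
    }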
Whether the curves serve as anything more than a conversation-starter is still up for debate. Chris Landauer, a computer scientist at the Aerospace Corporation and a fellow guest speaker with Lehman at a February conference on software evolution at the University of Hertfordshire, was impressed by the Lehman pitch.
"He has real data from real projects, and they show real phenomena," Landauer says. "I've seen other sets of numbers, but these guys have something that might actually work."
At the same time, however, Landauer wonders if the explanation for similar growth trajectories across different systems isn't "sociological." In other words, do programmers, by nature, prefer to add new code rather than substitute or repair existing code? Landauer also worries about whether the use of any statistic in an environment as creative as software development leads to automatic red herrings. "I mean, how long does it take a person to come up with a good idea?" Landauer asks. "The answer is we just don't know."
Michael Godfrey, a University of Waterloo scientist, is equally hesitant but still finds the Lehman approach useful. In 2000, Godfrey and a fellow Waterloo researcher, Qiang Tu, released a study showing that several open-source software programs, including the Linux kernel and fetchmail, were growing at geometric rates, breaking the inverse squared barrier constraining most traditionally built programs. Although the discovery validated arguments within the software development community that large system development is best handled in an open-source manner, Godfrey says he is currently looking for ways to refine the quantitative approach to make it more meaningful.
"It's as if you're trying to talk about the architecture of a building by talking about the number of screws and two-by-fours used to build it," he says. "We don't have any idea of what measurement means in terms of software."
Godfrey cites the work of another Waterloo colleague, Rick Holt, as promising. Holt has come up with a browser tool for studying the degree of variation and relationship between separate offshoots of the original body of source code. Dubbed Beagle, the tool is named after the ship upon which Charles Darwin served as a naturalist from 1831 to 1836.
Like Landauer, Godfrey expresses concern that a full theory of software evolution might be too "fuzzy" for most engineering-minded programmers. Still, he credits Lehman for opening the software field to newer, more intriguing lines of inquiry. "It's the gestalt 'Aha' of his work that I find more interesting than the numbers," Godfrey says.
For Lehman, the lack of a scientific foundation to the software-engineering field is all the more reason to keep digging. Fellow researchers can quibble over the value of judging software in terms of total lines of code, but until they come up with better metrics or better theories to explain the data, software engineering will always be one down in the funding and credibility department. A former department head, Lehman recalls the budgetary battles and still chafes over the slights incurred. Now, as he sits in a cramped office, trying to recruit new corporate benefactors and a new research staff, he must deal once again with those who label software development a modern day form of alchemy -- i.e. all experiment but no predictable result.
"In software engineering there is no theory," says Lehman, echoing Holland. "It's all arm flapping and intuition. I believe that a theory of software evolution could eventually translate into a theory of software engineering. Either that or it will come very close. It will lay the foundation for a wider theory of software evolution."
When that day comes, Lehman says, software engineers will finally be able to muscle aside their civil, mechanical and electrical engineering counterparts and take a place at the grown-ups' table. As for getting bigger offices, well, he sees that as a function of showing the large-scale corporations that fund university research how to better control software feedback cycles so their programs stay healthier longer. Until then, the search for a theory has rendered Lehman less of a Darwin and more of an Ahab -- a man in search of both fulfillment and a little revenge.
Wow! He's still living? (Score:3, Interesting)
"When I first wrote about this topic, nobody took a blind bit of notice."
No, sir, I did, and so did many colleagues who were also interested in good, timely work. We lent your books to each other with the remark "that's something you should read".
Great to hear that you are still alive and still enjoy giving programmers and their managers something to look at and something worth reading and thinking about.
Youngsters, better pay respect to this old software camel with the hole in the sole of his shoe (and probably also in his all-too British pullover), or I DDOS your toilet!
The key point is paragraph 9 (Score:5, Insightful)
Which means that commercial systems don't so much evolve as stub out their growth paths and switch direction or spawn new generations, because embedded complexity has killed off the feasibility of maintaining them. In other words, all new releases are the cause of, and ultimately an attempt to escape from, the chimera that is overly complex code. In commercial terms this should be astounding. We're paying to gronk up our own code because we erroneously believe the NEXT version will be something radically new and elegant, which of course it can't be.
New Version "x+1.y" is simply an ejection seat.
Re:The key point is paragraph 9 (Score:2, Insightful)
19,000 Known Bugs in OS/360 (Score:3, Interesting)
We installed new Releases about once every 6 months. IBM also had 'patches' available for about 19,000 known bugs.
These patches were not incorporated into the latest release because each of them, if installed, broke some other aspect of the OS.
We, and every other site, only installed those patches needed to work around problems that the particular site encountered. And you always hoped that today's patch would not break something else that your users needed.
Re:19,000 Known Bugs in OS/360 (Score:2)
These patches were not incorporated into the latest release because each of them, if installed, broke some other aspect of the OS. We, and every other site, only installed those patches needed to work around problems that the particular site encountered. And you always hoped that today's patch would not break something else that your users needed.
hmm, that sounds suspiciously like the Linux 2.4 "stable" kernel..
Actually the key point is Lehman's "Second Law" (Score:2, Funny)
"The entropy of a system increases with time unless specific work is executed to maintain or reduce it."
As evidenced by the back of my Subaru.
Blame it on C++ (Score:3, Insightful)
I'm not attempting to flamebait here, just submitting an observation. It seems to me that many of the complexity issues can be overcome by designing better languages. I've never stopped scratching my head over the persistence of old languages like C++ and FORTRAN. Sure, they are extremely useful in the hands of experienced folks, but they need to die. They were good solutions to problems decades ago, but so much has been learned since then, and the constraints of scarce computer resources and CPU speed have moved a lot.
Re:Blame it on C++ (Score:3, Insightful)
C++ ? (Score:2)
Re:C++ ? (Score:2)
So it is accurate to say that C++ has only been standardized recently. But unless you're comparing C++ to Fortran/Simula/Algol, it is just wrong to call it "new".
Blame it on the programmers (And Hiring Managers) (Score:2)
There's a lot of piss poor code out there because there are a lot of piss poor programmers out there -- people who should not be in this industry, people who took a couple of classes in VB and think that qualifies them for the title of "Programmer." And they can still bullshit their way past hiring managers with their shiny buzzwords.
Re:Blame it on C++ (Score:3, Interesting)
Now, this wouldn't be bad, if the skilled programmer had, at his disposal, the means to tweak the garbage collector implementation to suit a particular application -- presuming that there is one and only one universally "best" garbage collector is arrogant and short-sighted. The trouble is, even though it may be possible to replace the Java garbage collector, one can't do it with a Java implementation: the language is not closed with regard to its run-time requirements -- garbage collectors need to manage raw memory via, ta da, pointers! This lack of closure, preventing a language's run-time library from being expressed in the language itself, is most inelegant.
Of course, the C and C++ aficionados will point to this closure as the very beauty of their preferred language. Let's call such languages "complete". Alas, the linguistic power necessary to make a language complete has now been put into the hands of the neophyte programmer (was that delete or delete[], and when does it matter?).
It doesn't take much inspiration to see that subsets of a complete language, while not complete themselves, may still be powerful enough to write useful programs. With abstractions, disciplined programmers try to fake this: the C++ "smart pointer" exercise is classic. Unfortunately, for all the effort put into smart pointers and per-class address-of operator definitions, you can still get a real pointer to an object that does not implement such a monadic operator. What you really want is the compiler to say, "Bad programmer: using a real pointer!" either as a warning or as a fatal error (well, maybe not so harshly, but you get the idea).
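A minimal sketch of that escape hatch (the class and names are invented for illustration, not anyone's real smart-pointer library): even with a per-class address-of operator in place, a real pointer is one standard-library call away, and the compiler never objects.

    #include <iostream>
    #include <memory>

    // Illustrative only: a class that tries to enforce a "no raw pointers"
    // discipline by overloading unary operator& to hand back a wrapped,
    // non-owning handle instead of a real pointer.
    struct Tracked {
        int value = 42;
        std::shared_ptr<Tracked> operator&() {
            return std::shared_ptr<Tracked>(this, [](Tracked*) { /* non-owning */ });
        }
    };

    int main() {
        Tracked t;
        auto wrapped = &t;                 // goes through operator&: a smart handle
        Tracked* raw = std::addressof(t);  // ...but the raw pointer is still there,
                                           // and no warning or error is emitted
        std::cout << wrapped->value << " " << raw->value << "\n";
    }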
IBM Jalapeno - JVM in Java (Score:2)
There are a handful of low-level services a JVM and its garbage collector need that the Java language itself won't let you express:
- access to machine registers and memory
- architecture-specific machine instructions
- transfer of execution to an arbitrary address
- coercion of object refs to addresses and back
- invocation of OS services
This doesn't mean that you can't write GC in Java! IBM implemented a JVM and GC system entirely in Java, called Jalapeno. To do this, they created a Java class called "Magic" that had empty methods for these services which any Java compiler could build. Then, the internal Jalapeno VM compiler would recognize calls to the Magic class, verify that what they are compiling is a valid part of the JVM and inline appropriate machine code where these calls occur.
Now, all GC systems can be written in reference to this Magic class and porting the VM is simply a matter of generating appropriate machine code for these half-dozen methods. And you get all the security of Java's automatic memory management model!
Check the ACM's OOPSLA Conference Proceedings, 1999, Implementing Jalapeno in Java or www.research.ibm.com/jalapeno [ibm.com] for the paper.
Re:IBM Jalapeno - JVM in Java (Score:2)
At first glance, this looks like cheating: the Java GC requires VM support. Is the VM written in Java? If not, the language is not complete, as I've defined complete.
Re:IBM Jalapeno - JVM in Java (Score:2)
This still strikes me as cheating: they changed the language to make it complete. Furthermore, the "complete" language is available only when compiling the JVM, and not when a general "I know what I'm doing" flag is set (though that's probably trivial to change). Finally, effecting language completeness via reserved words instead of symbols which are syntactically "more sugary" strikes me as clumsy, though I've not looked at their GC implementations using this technique.
There are two problems here: language completeness, and restriction of complete languages to particular subsets. IBM appears to have clumsy solutions to both issues w.r.t. Java, with the latter easier to clean up. I doubt that there would be an elegant solution to the original problem of language completeness vis a vis Java that they face, so I can't be too critical of their "Magic" class hack.
Re:IBM Jalapeno - JVM in Java (Score:2)
Hostile code should not have the option of saying that "it's OK, I know what I'm doing." You could use multi-layer zones like MSFT did with .NET, but then you're undermining the appeal of the system - that everything is safe and you don't really have to trust anybody to run their code. It also prevents trojans from riding along in "trustworthy" code or just stupid things like unintentional bad pointer arithmetic or array bounds checking in non-hostile code. (I know, I know - JNI, but "Pure" Java programs are safe)
Re:IBM Jalapeno - JVM in Java (Score:2)
Right. And that makes it inelegant as a general-purpose programming language. As an "easy to use" language geared toward virtual machine interpretation in various "safe" (in the sandbox sense) environments, it's fine, but its lack of completeness means that VM implementations (or the compilers that compile them) have to be written in a different language.
The point to not having an "I know what I'm doing" flag for unsafe operation is that a) it's not really necessary, except for implementing a tiny tiny bit of the way-down-low-guts of the runtime and b) it makes security a lot simpler.
Perhaps, but security and programmers' safety nets should not be provided by making a language less complete, IMHO, but rather by controlling the use of unsafe language features, and building an appropriate run-time sandbox (which recent Java incarnations do surprisingly well, if in a complex way: signed code is a nice concept). These are separate issues: a VM can trap ill-behaved programs, so overly restricting programmers from writing them shouldn't be necessary (if you're willing to put up with the equivalent of a run-time segfault, for example)
So, it is possible to permit poorly-written code (by the programmer who should have been content with the safety nets that automatic GC provides, for example, but wasn't), and still retain security.
What I want is to be able to write the low-level, tricky, blow-up-in-your-face stuff in the same language as the higher-level stuff, and be able to tell the compiler, "Don't let me do this -- it's easy to make a mistake, and the power is not needed."
Re:IBM Jalapeno - JVM in Java (Score:2)
The problem with languages like C++, which can hide memory management behind things like smart pointers, is that there is no means to force the compiler to prohibit the use of things other than smart pointers in a particular piece of code. Thus, there's no way to tell where particularly gnarly memory leaks and wild pointers are likely to lurk in a large body of source code. Java solves this problem by prohibiting access to raw memory, but this comes at the price of not being able to directly manage memory in Java (compiler and language extension hacks aside). Sometimes you want memory managed for you and sometimes you don't.
My lament is that while this protection is a nice attribute of Java, its implementation, via what I call a non-complete language, well, sucks. The same protection should be available in complete languages (i.e. those that can self-bootstrap) via compiler pragmas. This would offer the benefits of Java to the C++ programmer, without the awkwardness (and Java, with an up-to-date run-time environment, has some nice benefits, not the least of which is signed code).
Re:Blame it on C++ (Score:2)
While a Java interpreter written in Java might make little sense, a Java compiler written in Java would be an interesting thing.
javac, kopi (Score:2)
There is nothing to replace it (Score:2)
It's simply true; there is no other language that even comes close to filling all the roles of C++. Most of the languages people advocate for taking a certain niche from C++ are implemented in C++.
It's a very difficult language to learn, and hard to use properly. It has lots of syntax, and many idiosyncrasies. Yet it yields you control of the machine in the manner of C, adding in a lot of the niceties of high-level languages for those who know how to use them.
You might argue that it's less error-prone for certain programmers to use a more specialized and high-level language for certain tasks. You might make a good case that C++ should not be someone's first language (I say learn assembly, then C, then C++, then some high-level lang).
What you cannot say is that C++ should be ditched. It is filling a vast role in real-world programming, where nothing else can compete.
Re:There is nothing to replace it (Score:2)
Of course you don't want to (or couldn't) build an application that needs low-level control over a computer. Advanced databases, compilers, messaging software (a la TIBCO), or operating systems would be appropriate uses for C++ (at least IMHO). If a project is too large to use C or ASM, C++ still offers the lower level control and the advantage of OOP.
But if you're programming a business application with extremely complex business logic, Java lets you spend more time worrying about the logic than about memory management. If you're writing an accounting app for the 3 ladies in HR, then VB/Access will let you whip it up in a day. (Anyone can write a somewhat neat GUI app in VB in 30 minutes. I've not found that to be the case with C++ on any platform.) If you're parsing 40 GB of web logs for a particular IP, then PERL might be what you're looking for....
And all this is moot if you are a guru in a single language. If you know C++ inside and out, then why bother learning Java to write the business app (compatibility and maintenance aside)?
I suppose all I'm saying is that, all else being equal, there are circumstances where C++ is far too much for a given task, and other language choices are faster to develop in and easier to debug. And if you happen to be an expert in one language, it is difficult to make a totally objective assessment of which language choice is best.
Re:Blame it on C++ (Score:2, Interesting)
Ada was supposed to be that silver bullet (although Pascal diehards have their issues with Ada), but the darned trouble is that Ada is not showing a large-enough (maybe 5 percent) quantifiable improvement over C++.
Java is another of these silver bullets, and the claim is that people churn out a lot more stuff, but I have not heard about reliability.
Maybe how you structure the design and the code implementation is more important than the hand-holding (or hand-slapping) of a particular language.
Re:Blame it on C++ (Score:2)
My simple gripe with C-derived languages is that their complexity means I have to dedicate more of my brain to the language and less to the program. Simply, there's more to get wrong than with Pascal-family languages. Oh, and I'd much rather arrays were bounds-checked so that writing out of bounds crashed rather than corrupting memory; so much easier for debugging...
C's powerful; Pascal is pretty safe. Most of the time, I don't need the power, so I'll take the safety.
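A small C++ illustration of the same trade-off (not Pascal, but the idea is identical): unchecked indexing can silently scribble on other memory, while checked access fails loudly right where the bug is.

    #include <iostream>
    #include <stdexcept>
    #include <vector>

    int main() {
        std::vector<int> v(4, 0);

        // v[10] = 1;        // unchecked: undefined behaviour, may corrupt memory
        try {
            v.at(10) = 1;    // checked: throws immediately at the faulty write
        } catch (const std::out_of_range& e) {
            std::cout << "caught out-of-range write: " << e.what() << "\n";
        }
    }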
That line is old, tired, and wrong (Score:2)
He has an excellent point. (Score:1)
i want to see (Score:2)
Re:i want to see (Score:3, Insightful)
Re:i want to see (Score:3, Insightful)
I've been trained in that stuff. It's wonderful in theory. In practice? All the metrics only work if you are doing the same stuff you've done before. If you are doing something new, then they don't work. Which is why few people actually use them.
Looks good on a resume, though.
Open source (Score:5, Interesting)
Michael Godfrey, a University of Waterloo scientist, is equally hesitant but still finds the Lehman approach useful. In 2000, Godfrey and a fellow Waterloo researcher, Qiang Tu, released a study showing that several open-source software programs, including the Linux kernel and fetchmail, were growing at geometric rates, breaking the inverse squared barrier constraining most traditionally built programs. Although the discovery validated arguments within the software development community that large system development is best handled in an open-source manner, Godfrey says he is currently looking for ways to refine the quantitative approach to make it more meaningful.
It would have been interesting had they delved deeper into this finding. Yeah, I know, the true believers in open source all feel superior (we are, aren't we?), but exploring the reasons why it works would be interesting.
Is it the large-scale peer-review process? Is it that we occasionally rewrite parts (filesystems, VMM, etc)? Something else?
Mr. Godfrey and Mr. Tu's report (Score:2)
Re:Open source (Score:2)
Bruce's Law: Every software module needs to be re-written every year.
(Or perhaps it has a different name.)
Conway's Law (Score:2)
Open source software OTOH is built by widely separated people with narrow bandwidth links between each other and only a shared vision of the Right Thing to guide them. The result, as predicted by Conway's law, tends to be highly modular architectures focussed around a few core protocols or APIs that capture the vision.
Modular systems are inherently more flexible and reusable than monolithic systems because they exhibit low coupling between the modules. In contrast the monolithic software is more likely to have high coupling between modules, even though they are supposedly independent.
(There is also a related concept of "cohesion", which is the extent to which the features of each module hang together as conceptual wholes. I suspect that OSS will show higher cohesion than closed source software)
It would be interesting to get some statistics to test this theory. Does anyone know of any good software for measuring coupling in C code? I'd like to run some commercial and OSS software through it and see what it says.
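Even a crude fan-out count would be a starting point. A rough sketch of the idea (purely illustrative -- counting local #include lines is a very blunt proxy for coupling, and the file handling is minimal):

    #include <fstream>
    #include <iostream>
    #include <map>
    #include <regex>
    #include <string>

    // Deliberately crude: for each C source file named on the command line,
    // count how many local headers (#include "...") it pulls in, and report
    // that as a rough fan-out figure. Not a validated coupling metric.
    int main(int argc, char** argv) {
        std::regex local_include("#include\\s*\"([^\"]+)\"");
        std::map<std::string, int> fan_out;

        for (int i = 1; i < argc; ++i) {
            std::ifstream in(argv[i]);
            std::string line;
            fan_out[argv[i]] = 0;
            while (std::getline(in, line)) {
                if (std::regex_search(line, local_include))
                    ++fan_out[argv[i]];
            }
        }
        for (const auto& entry : fan_out)
            std::cout << entry.first << " includes " << entry.second
                      << " local headers\n";
    }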
Paul.
Re:Open source (Score:2)
Big open source projects attract more than just the best. However, the bad programmers generally make slow progress, so in most cases a better programmer gets it done first (and the bad programmer learns by reading the good programmer's code, and understands it, having tried to do the same himself). For the few cases when a bad programmer submits something, the good programmers reject the patch, and the bad programmer either acts on the suggestions, or a good programmer re-writes it to be good.
Big business can't afford that. If a bad programmer turns in some working, but bad, code we use it, because time is money. Our good programmers are better used to develop some new feature that can sell, not to re-write the bad parts. The only exception is after service costs prove the bad code is costing more money to maintain than a re-write would cost.
Re:Open source (Score:2)
I agree, except that it costs time. Getting a product to market is always important. Sure it will cost more money down the road, but we have to get something out there now, or customers will buy from someone else. It is much easier to get someone to buy your product over a competitor's than to get them to switch from an okay competitor to your product once yours is better. So you make it work okay today, sell it, and then fix it and send out upgrades.
Describes my job perfectly... (Score:2)
(And yes, I know about XP's "All code is shared.")
As for the maintenance, it's my normal experience, but the projects I've been involved in may be atypical. (*cough*Canadian*cough*telecommunications*cough*g
We spend a *lot* of time reworking old code to (a) fix obscure bugs, many of which are slow leaks shown up by weeks of serving live traffic, (b) adapt the code to support new releases of the underlying hardware product and (c) add new features to satisfy users.
Sound premises. Sound reasoning. Wrong conclusion. (Score:3, Interesting)
Except that the "[dire] need of a successor operating system" isn't so dire at all: the world's richest man didn't get where he got by writing code that didn't need to be replaced by a successor operating system, did he? The whole premise is to produce something that works now, and when it stops working later, you sell a later version. Heck, just a couple of months ago, Billy announced that 92.3% of the calendar year would focus on new code, leaving the rest [slashdot.org] for the old.
What's smarter, coding the Microsoft way, or coding a server that's been up since before Windows NT was released, without a patch in 7 years, handling half a megabit of data both upstream and down, every second of every day forever. Where's the revenue?
~r~
Note: the 92.3% figure might only be for the year 2002, with later years being still closer to 100%.
Re:Sound premises. Sound reasoning. Wrong conclusi (Score:2)
Yes, there were lots of things they could have done -- like define a subset of the original committee-designed bloated specification, get that working, then start adding features. But the manager (Fred Brooks) didn't know that, yet, and didn't even know the project was in trouble until it was impossible to deliver anything at all on deadline. Afterwards, he wrote a book, The Mythical Man-Month, which has become a standard text for large-project management. But he learned how by doing it wrong, more massively than anyone ever had before...
You're missing two premises (Score:5, Interesting)
Re:You're missing two premises (Score:2)
Buy from us! Our stuff looks pretty.
No theory in Software Engineering? (Score:2, Insightful)
"In software engineering there is no theory,"
I don't buy that... at least not completely. I would say something more like, "In software engineering, theory is extremely underutilized."
I believe there are many instances of engineered software, but not necessarily high-profile stuff. A lot of DoD-contracted code may never see the civilian light of day, but there are procedures and documentation requirements that, flawed or not, enforce certain practices. Can we call that "theory"? Anyhow, defense suppliers can afford the extra development time, 'cause the government is forking over big bucks for the code to be right.
For the mainstream (read desktop) apps, where all the money is, the time to market and feature pressures will continue to suppress even the best "unified theory" of software development.
Re:No theory in Software Engineering? (Score:2)
Software Engineering (Score:2, Informative)
We are only Human. (Score:2, Interesting)
It is questionable how useful purely statistical methods are in these situations.
One thing I would be interested in knowing is how staff turnover affects development. For maintainable software to be possible, a consistent approach must be maintained when adding new functionality; this usually requires deep understanding of a large code base, and if your programmers keep changing, the newbies may not follow the rules.
Re:We are only Human. (Score:2)
Look at Software Project Dynamics: An Integrated Approach by Tarek Abdel-Hamid (ISBN 0-13-822040-9). In it he builds a model of the software development process and shows many remarkable results. Things like a high turnover rate can completely destroy productivity. He also shows that Brooks' law is a bit simplistic (You CAN add people to a late project, but it has to be done very carefully).
Even though it's a dozen years old, it's still a very good book. It's a shame more people don't know about this. The research, as far as I can tell, still holds up well.
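His models are full system-dynamics models, but even a toy version shows how sensitive the "add people late" decision is to training and coordination costs. This sketch is not Abdel-Hamid's model; every constant in it is invented for illustration:

    #include <algorithm>
    #include <iostream>

    // Toy Brooks'-law simulation. New hires start at half productivity,
    // consume mentoring time from the existing staff, and everyone pays a
    // pairwise coordination cost. All numbers are made up.
    struct Team {
        double seniors;
        double juniors;
    };

    double weekly_progress(const Team& t) {
        double output    = t.seniors + 0.5 * t.juniors;        // juniors at half speed
        double mentoring = 0.25 * t.juniors;                    // senior time lost to training
        double people    = t.seniors + t.juniors;
        double overhead  = 0.01 * people * (people - 1) / 2.0;  // coordination cost
        return std::max(0.0, output - mentoring - overhead);
    }

    int weeks_to_finish(Team t, double remaining, int add_at_week, double extra) {
        for (int week = 1; week <= 1000; ++week) {
            if (week == add_at_week) t.juniors += extra;
            remaining -= weekly_progress(t);
            if (remaining <= 0) return week;
        }
        return -1;  // never finished within the horizon
    }

    int main() {
        double work = 400;  // arbitrary "person-weeks" of remaining work
        std::cout << "8 people, no additions:   "
                  << weeks_to_finish({8, 0}, work, 0, 0) << " weeks\n";
        std::cout << "8 people, add 8 at wk 30: "
                  << weeks_to_finish({8, 0}, work, 30, 8) << " weeks\n";
    }

Depending on how heavy you make the mentoring and coordination terms, the late additions either claw back a little time or push the finish date out further, which is roughly the "you can, but only very carefully" conclusion described above.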
Computing Environment (Score:2)
- The functional capability of the OS too, since new hardware keeps coming out
and the Law of Declining Quality ("The quality of E-type systems will appear to be declining unless they are rigorously adapted, as required, to take into account changes in the operational environment").
Exactly what is happening to Windows, no? And why Linux is so successful -> open source projects like fetchmail et al., being more linear in their development, give all users a stab at getting the environment right.
But users who aren't prepared to do any work to make things better in their environment for their PC are always going to lose. But it's the same as those people who make their desks tidy and optimise them for work, and those that don't. The difference on your virtual desktop is that you can't easily hope someone else will tidy it for you...:)
it's all in the design (Score:2, Interesting)
you have to try to map out not only what you will need but what you might need in the future.
yes, it's a near impossible task but it's the only way to avoid automatically committing yourself to an endless cycle of patches and hacks.
the good part is, if you can plan the project well enough then the actual coding becomes nearly trivial.
the problem arises when the boss says 'i don't care about scalability or flexibility, i just want code now' and i have to try explaining that i'm trying to save his ass 8 months down the line when clients (and not to mention, the boss himself) bombard us with feature requests, etc.
Re:it's all in the design (Score:4, Insightful)
Not only is this not true, it's impossible to do in practice. If you try, you'll find that you still blow a lot of time on design, that development takes longer because your design is unnecessarily abstract, and that your design still proves inadequate for something you need to implement further down the road. Requirements change, and this has consequences for the design. The best one can hope for is that the basic architecture is robust enough that it doesn't require a complete upheaval.
What is necessary is a method for changing design gracefully. "Refactoring" is the best source I've seen that addresses this. Basically, you change methodically, and you test.
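A tiny, made-up example of what "change methodically, and you test" means in practice: a behaviour-preserving rewrite guarded by a check that the observable result stays the same.

    #include <cassert>
    #include <string>
    #include <vector>

    // Before: the separator logic sits inline in the loop.
    std::string join_before(const std::vector<std::string>& parts) {
        std::string out;
        for (std::size_t i = 0; i < parts.size(); ++i) {
            if (i > 0) out += ", ";
            out += parts[i];
        }
        return out;
    }

    // After: the separator decision extracted so it can change in one place.
    std::string separator(std::size_t index) { return index > 0 ? ", " : ""; }

    std::string join_after(const std::vector<std::string>& parts) {
        std::string out;
        for (std::size_t i = 0; i < parts.size(); ++i)
            out += separator(i) + parts[i];
        return out;
    }

    int main() {
        const std::vector<std::string> v{"alpha", "beta", "gamma"};
        // The tests are what make the refactoring safe to do in small steps.
        assert(join_before(v) == join_after(v));
        assert(join_before({}) == join_after({}));
    }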
Re:it's all in the design (Score:2)
design is everything. that is where you try to predict all the problems that might occur.
the off-the-cuff stuff is just laziness. if you can work out all the issues, you can then step it into production by mapping out how you solve the problem, then re-examine any issues that might come up. once all that is done, you should have a descriptive enough approach that you could hand it to anyone with the ability to write the code you need and have them implement it. if you have a well-planned and descriptive design, the coders do not necessarily need to have anything to do with planning.
Re:it's all in the design (Score:3, Insightful)
More than that, software vendors routinely write a program, release it, then add features so they can sell it again. It's as if the builder has finished the apartment building, and now they want a factory tacked onto the north side and a Wendy's onto the east. Next year, add a hospital wing to the west. Repeat once a year for 10 years and you get one hell of a mess, but how else would M$ keep a continuing revenue stream from the same OS and Office programs?
Re:it's all in the design (Score:2)
I think you've invented the literary onomatopoeia.
Re:it's all in the design (Score:2)
Predicting problems that might occur is one thing, predicting changing requirements is another. You can reasonably anticipate problems based on prior experience. But it's difficult to guess at changing requirements, especially when the requirements come from an external source.
Re:it's all in the design (Score:3, Insightful)
I was talking about this with a friend the other day. Wouldn't it be nice if a senior software 'architect' could maintain a unit-level view AND current code at the same time? That way his busy programmers could refactor all they wanted, as long as they didn't overstep their unit bounds but at the same time improve the product. The architect could look at the project at different levels of abstraction (units, subunits) to make sure the programmers aren't getting off track.
Probably the hardest thing about using the iterative or refactoring methodology is knowing what your architecture looks like at any given time. You design a great, flexible architecture for the first iteration, but after several rewrites you may not know where you are in terms of the big picture. Surely a tool that spits out UML-like diagrams of the current code would be very useful to spot architecture flaws introduced during the refactoring process. Effective use of design patterns may also help. Is it impossible?
I've seen some work done by Rational in terms of code generation with
Re:it's all in the design (Score:2)
Like everything, reading UML takes practice. UML diagrams can sometimes be very dense, and it takes experience to extract all of the information and process it. It isn't "automatically intuitive", but at least you get a very large chunk of architecture on one page where you can see it all at once.
Compare this job to scanning a few thousand lines of code and it will probably change the minds of people that don't think diagrams are helpful. Maybe their jobs don't require a big-picture view and they can live in their unit or subunit and not care - the architecture diagrams are mainly for the architect or project lead, to keep the project on track.
Also, the project leads need to sell the idea of seeing the forest through the trees to their developers. If the developers have an idea of the big picture, they can make more educated choices for the project without much intervention from the lead. If the lead is constantly jumping in and telling the developer he's going in the wrong direction, you can just imagine how the developer feels. On the other hand, if the lead explains his decisions and gives answers to "why" questions, the developer will stay on track (and the lead will spend less time correcting the developer). The lead having a firm grip on the architecture at any given time is important for this reason.
As for Martin Fowler's comments; for normal projects, I don't buy it, and it's a moot point anyway. If someone wants to shun UML and spend time sifting through code to see the design, let them. They are just shooting themselves in the foot, time-wise. If the reason they are not using the UML is that it is outdated (which is likely) then that's a totally different reason - I don't like using outdated information either. If the code and the UML are sync'd, there's no difference except the amount of time it *might* take someone to find the design and digest it (not knowing it previously). UML was made to speed up this process. If there are two reliable sources of information, the person is free to make their own choice.
Fowler is also advocating not using UML for the sake of XP, which is valid. XP is a developer-centric methodology. A developer sees a bad part of code and goes in and corrects it. Unfortunately, the "design" changes so much that it's hard to get a snapshot. That, coupled with the fact that there is typically one lead on a project that needs to know what's going on from a Software Engineering perspective. If he has 6 programmers going nuts on his code, he'd like to know where they are making bad turns today instead of in 2 months during a code review. I can buy not using UML during XP development, but that's only because (as I see it) refactoring focuses on such a small, manageable area of the code that you don't need to look at a diagram to figure it out. However, even in XP an architecture view would be handy, if only for the lead.
Whew, a little off topic there. Thanks for bearing with me. Just wanted to start up more discussion, if you are game.
Re:it's all in the design (Score:2)
That, coupled with the fact that there is typically one lead on a project that needs to know what's going on from a Software Engineering perspective. If he has 6 programmers going nuts on his code, he'd like to know where they are making bad turns today instead of in 2 months during a code review.
In the article linked above, Martin Fowler says:
Clearly the emphasis is on training your programmers and then trusting them, instead of "police-ing" them by always looking at an up-to-date UML diagram. This I completely agree with. Not only are you encouraging consistency throughout the team (also enabled by pair programming) but you are increasing the team's worth as well, reducing the all-the-eggs-in-one-basket effect.
I still think that having the large architecture view could help you educate your programmers and spot bad designs. Pairing them with more experienced developers could remedy this a bit, but it may not always be possible. Your junior programmers aren't going to start out knowing all of your wisdom. You have to be able to spot their poor design decisions early and tell them before they propagate more poor design throughout the codebase, right?
Re:it's all in the design (Score:2)
I'm not saying you can't or shouldn't do this. I'm saying that doing this alone will not solve all of your problems, and even if you do up-front design, you still won't successfully anticipate all design needs.
Basically, the more central a component, the more important it is to get it right. A component that is tightly coupled with your entire system (for example, the base class of a broad and deep hierarchy) is almost impossible to change gracefully, so you'd better design it.
On the other hand, implementing a button as a two-level hierarchy with an abstract class is pointless, because you'll probably find the initial design breaks when you add a new button, because the abstraction serves no immediate purpose, and because at present the button class is not tightly coupled with your other code.
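A minimal sketch of that point (the class names are invented for illustration): the speculative two-level version next to the simpler concrete class you would keep until a second button variant actually exists.

    // Speculative: an abstract base introduced before any second variant exists.
    // The indirection buys nothing yet, and the first genuinely new kind of
    // button will probably break whatever contract AbstractButton guessed at.
    class AbstractButton {
    public:
        virtual ~AbstractButton() = default;
        virtual void press() = 0;
    };

    class OkButton : public AbstractButton {
    public:
        void press() override { /* handle the click */ }
    };

    // Simpler: keep the concrete class until real variants appear, then
    // extract the interface those variants actually need.
    class Button {
    public:
        void press() { /* handle the click */ }
    };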
Re:it's all in the design (Score:2)
I agree, but the poster I responded to suggested 99% of the time should be design. IMO, that exceeds "appropriate" and it's what I'd consider a futile attempt to pump more resources into something that offers diminishing returns.
The answer is modularization (Score:2)
I think one of the reasons that Linux has been so successful is because Linus decided long ago to take a modular approach to designing his monolithic kernel.
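To illustrate the general idea (a hypothetical C++ sketch, not the kernel's actual C interfaces): the core depends only on a narrow boundary, and new pieces plug in behind it without the core being touched.

    #include <cstddef>
    #include <string>

    // Narrow boundary that the core code depends on.
    class BlockDevice {
    public:
        virtual ~BlockDevice() = default;
        virtual std::string name() const = 0;
        virtual void read(std::size_t block, char* buffer) = 0;
    };

    // A new "module" only has to implement the boundary; the core stays untouched.
    class RamDisk : public BlockDevice {
    public:
        std::string name() const override { return "ramdisk"; }
        void read(std::size_t, char* buffer) override { buffer[0] = '\0'; }
    };

    // Core code is written against the interface alone.
    void mount(BlockDevice& dev) {
        char first[512];
        dev.read(0, first);   // only ever uses the interface, never RamDisk directly
    }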
-josh
No Interest in 'Doing It Right' (Score:3, Insightful)
Our firm licensed this software to major manufacturing firms with a Money Back Guarantee. As in, "If you are not satisfied, for any reason, we will either fix the problem or give you back your money. Your choice." We were never asked for a refund.
It was semi-open source. You could have the source any time you wanted, but asking for the source voided your warranty, since problems in your data might have been caused by your own temporary code changes.
Funny thing. I've had that on my resume for many years, but no prospective employer has ever asked how I did it.
No one has hired me specifically to help them produce similar quality code. Much of the time their reaction to my resume is, 'but you don't know c++' (or their other favorite). I know enough about c++ to know that I want to stay away from that second generation language for all but the most specialized situations.
I have also been told, on numerous occasions, that I'm not qualified to lead a particular project because I lack experience managing the large team that will be needed. I've never gained that experience because I've never needed a large team to accomplish anything.
As an MBA, as well as being an application designer & a coder, I know that large teams do have a place -- mostly where you have a blank cheque and are earning a percentage of the total billing. (:-)
Right on! (Score:2)
You're absolutely right about this. I'm another semi-old-timer. In the early 1980's, I was on the team (six people, all with developer background) to write a bisynchronous communications package (HASP station emulator). We had a standing offer--anybody who could find a bug would get a free dinner at any restaurant. We only had to pay off once.
Nobody seems to care about doing this anymore, or maybe they never did in the first place, and we were all just naive.
Software craftsmanship (Score:4, Interesting)
A lot of the dire predictions of software atrophy and such are a result of applying the wrong methodology to a project. Yes, there are uses for software engineering, but I think this approach is overkill even for large-scale projects. Check out Software Craftsmanship: The New Imperative [barnesandnoble.com] for a different perspective, one I think is in need of serious consideration. The gist is returning to the days of master craftsmen and apprenticeships. This focuses a bit more on the learning aspect than on actual development methodologies, but you can always go to The Pragmatic Programmer [barnesandnoble.com] to fill in that gap.
"As time passes, the system becomes less and less well-ordered. Sooner or later the fixing ceases to gain any ground. Each forward step is matched by a backward one. Although in principle usable forever, the system has worn out as a base for progress."
This is where "refactoring" (see Fowler's Refactoring [barnesandnoble.com]) really shines. I find it difficult to believe that refining the software base is not progress. An initial revision where the code functions by its contract (if your into designing by contract), then you refactor the body of the function/method for speed / elegance. Then you can run your unit tests on the function / method to test that the refactoring session did not break any of the design contracts (whew).
I think they may be trying to restate the broken window theory (see The Pragmatic Programmer), where a broken window (or bug) in a building (or system) leads to dilapidation elsewhere in the building (or system).
And then there are the agile methods [agilemanifesto.org], including XP [extremeprogramming.org]. I think these answer a lot of the limitations and issues with Software Engineering practices. Interacting with clients (having a client there during each iteration) gives you the benefit of almost real-time feedback so that you can update your user stories on the fly, etc.
Without rambling on any further, my point is not to spend too much time looking for a specific unified theory. Read up on all the ideas, methods, and theories. Take the best parts from each, then crank the knob all the way up (if I may borrow that from XP =] ). Don't let anyone tell you there is a science to software development that is easy to reproduce, and that you are just a link in the overall chain. You practice and perform a craft. Enjoy it!
Refactoring and Rewriting (Score:2, Interesting)
We had a case where a system no longer proved amenable to feature addition or continual improvement to match the changing operational and customer requirements. In the end, refactoring the codebase to match the changing production requirements would have been more costly than rewriting the system using more modern libraries, methodologies and frameworks. It got rewritten and the old system phased out.
It wasn't a case of "fixing" inherently broken software; it worked perfectly well. It's just that the operational flow it supported changed due to new customers and more efficient management procedures.
Incidentally, we have found that with each major rewrite of that system (there have been two) there has been an immediate growth spurt in customers. I am not sure if it is because it looks like something new, because the software better matches the operational requirements, or because of increasing feature addition. Either way, the last two rewrites have paid for themselves almost immediately through the additional customers the new software brought in.
mocom--
Re:Refactoring and Rewriting (Score:2, Interesting)
Your requirements (features and design) outgrow the current application and warrant a new application that encompasses the old application's functionality as well as its name. So really you have two separate applications that share functionality and a name.
But isn't that in itself refactoring? Rewriting code, keeping the functionality of the original while improving the internals?
Re:Refactoring and Rewriting (Score:3, Insightful)
Growing geometrically? (Score:4, Interesting)
From the article:
Is fetchmail [tuxedo.org] complex enough that it needs to be growing geometrically? I mean yeah, fetchmail does a lot, and I do know what "geometric" means. Still, I doubt the world of email is changing fast enough that you'd want to choose that as your example of out-of-control software maintenance.
[Insert obligatory ESR goading.]
Height of evolution. (Score:2, Funny)
Where I work, it has been a commonly held belief that all software evolves until such time as it can send and receive email. If it doesn't do this, it isn't complete. :)
Jason Pollock
Re:Wow a new methodology (Score:1)
Software Development is an art; not everyone can do it. Every successful developer has his own way to evolve the software and give it its powers, and no one can write a book or article on how to evolve software that guarantees it will work better than its competitors.
Re:Wow a new methodology (Score:1)
Maybe - but try telling that to some of my upper-level CS professors around here....