
Feature: Fear of X.0

David Ishee has written a piece on the Fear of X.0 where he talks about (surprise!) release versions of software. It's worth a gander...
The following is a feature written by Slashdot Reader David Ishee

I'd like to offer an observation about the software industry and the often-stated fear of new versions of software.

The idea jumped out at me after reading the Linux Weekly News site and seeing a reference to an InfoWorld article titled "Analysts at GigaWorld say skip NT 5.0".

To quote from the article:

"Analysts here at GigaWorld IT Forum '98 advised attendees to forgo Windows NT 5.0 and wait for a later release.

With 30 million lines of code, 85 percent of which is new, Windows NT 5.0 is likely to be buggy, said Rob Enderle, director of desktop and mobile technology at Giga Information Group, last week.

"It's too complex and too new," Enderle said. "Even inside Microsoft, there's a realization that the product won't ramp to volume until NT 6.0 because of the fear of initial releases."

Enderle advised waiting until service pack 3, or NT 5.5, which could be out a year or so after NT 5.0."

I have seen this type of attitude expressed before, in the press and by people on the net, about various software projects.

One project that stands out in glaring contrast is the GIMP. The GIMP just went 1.0, and if I'm not mistaken, many people were eagerly awaiting the release, confident of its stability and usefulness.

Why is that?

The main difference between the development of Windows NT and the development of the GIMP is the open source philosophy of "release early, release often" as expressed in the Cathedral and the Bazaar paper that has gotten so much attention recently.

I've used GIMP 0.54 and various 0.99.X releases (I even submitted a couple of bug reports), and I could see and follow the development, the improvements, and the increases in stability, as many others probably did as well. When version 1.0 hit the net, there was no fear of the X.0 release. I knew it was going to be great because I had participated in the development by trying it out at the various stages. How many times was a new release posted to Slashdot with the hope that "this was the last version before 1.0"? Why did those last few releases occur? Obviously there were a few things found that had to be ironed out before it could be declared ready for prime time.

Contrast this approach with new releases of Windows, or of any proprietary software. You don't get to participate in the same manner. Sure, beta releases come out (as with the Win95 pre-releases), but they are spaced much further apart than GIMP releases were, and far fewer of them occur. More importantly, you never get to test the last version before X.0, the one released one final time just to make sure it can be declared done. You may see a few betas, but the changes between the last beta and version X.0 are likely to be significant.

There are probably many reasons I'm not aware of why companies like Microsoft push only a few betas out the door (such as pressure from marketing to ship in time for the Christmas shopping season, or whatever).

The effect achieved by Microsoft (and probably others too) is that version X.0 is really just another beta release, one that we have to pay for while hoping the next version (or service pack) fixes the bugs without introducing new ones (I remember keeping up with the service packs for OS/2 before my Linux conversion).

The confidence the user gets from the "release early, release often" method is powerful. It makes me more confident in the upcoming 2.2 kernel to know that the 2.1 development branch is already past version 2.1.100, even though I haven't tried any of the development kernels myself.

The "release early" part of the equation can easily be used by proprietary vendors, but can "release often" as experienced in the open source world be duplicated also? I'm not sure. The common experience seems to support the theory that the large complex software systems being built today like desktop environments, operating systems, and the like are so hard to test thoroughly by a finite number of developers in one company that the additional help from potential users on the net and around the world are needed to test every permutation of the software's functionality and fix the bugs to be able to "release often".

So far, it appears that only the open source world has embraced the "release early, release often" philosophy (or created it?) and implemented it well enough to capitalize on the confidence that version X.0 gains from the active participation of prospective users.

We have all heard the skepticism that companies can't make money (or at least LOTS of money) from open source methods. But user confidence in the quality of your software is a powerful marketing tool, and one tried and true way of earning that confidence is to develop your software as open source. Everything seems to add up to the conclusion that open source is an advantage, not a disadvantage. Then again, maybe I'm just a nutcase and these two examples are not representative. While no methodology is likely to be "one size fits all", maybe open source is at least "this size fits better". You decide.
