Why Software Still Sucks 379

atlantageek writes "The man who brought you (or somebody else) virtual reality speaks to Upside about the sorry state of software." "The guy" is Jaron Lanier, and the article covers a fair amount of ground. This isn't a super-technical article, but it's got a lot of interesting comments on subjects ranging from software and Linux to open source, Unix, and DeCSS.
  • The thing about software is that you, dear reader, can create it. We should _expect_ that fact, the fact that anyone can do it, to cause exactly the chaos and prevalence of mediocrity and plain badness that is being complained about here.

    Consider the huge difference between software "products" and other machine/tool-like things. Suppose you want to make something--say, a can opener. Most of us understand, in principle, how a can opener works. I think most of us can look at any given can opener (they're pretty much open source, after all) and understand its design. But what would it take to create one ourselves? Lots of money, basically. Not-commonly-available tools, parts or the tools to make them, space to put all of that stuff. Who's going to do that in their garage? Maybe some person with a vision of the perfect can opener and dreams of changing the way everyone opens cans? But, of all the people that dream of changing the way everyone opens cans (already a tiny number of people, I would guess), how many of them could even afford it, or could convince someone to fund it?

    It's just not likely to happen. So can openers, by and large, are going to be designed only by companies with access to the materials and tools _and_ a bottom line to worry about--meaning that they're not going to do it half-assed.

    Now switch your brain to the world of software tools, and instead of a can opener, think of a recipe database. Heck, I wrote one one day, and I thought the other day about web-enabling it so my family could share recipes over the web. That doesn't, as has often been observed, mean that I _should_--but I can, I know that I can, and all it's going to take is a little time (though "little" is, of course, _very_ deceptive). The tools are right in front of me--except for the hardware, which you could probably get for almost free, everything is free. All it takes is time. And I have (at least some) time. So I can. So I might. And so, many people do.

    Okay, that explains a lot of the crap that's out there, but what about the commercial stuff that's also crap?

    A similar thing happens at software companies--although the perception is not correct, they _think_ they can. That same "we have the tools, we just need to write the code" mentality is alive and well at software companies. It's relatively easy for an engineer to say "okay, metal gears of this size and spec will cost this much, but it will be this much if you want plastic. if we eliminate this button we save X dollars and get rid of some possible breakage issues which we rate as an X% probability in normal use". Quick, how much does a javascript function to validate input on a form cost? There's no easy way to answer that, no book to look up the numbers in. Because "all" it takes is for someone to sit down and write those lines out, and because, once those lines are written, producing the next copy of them costs nothing, designers are led down a very deceptive path to a hugely complicated structure that quickly becomes hard to understand, hard to coordinate the interactions of the different parts of, hard to test thoroughly, and hard to deliver on time.
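
    To make the "how much does it cost" question concrete, here is a minimal sketch of the sort of "trivial" validation routine in question (in Python rather than JavaScript, with made-up field names, purely as an illustration). Even a toy like this immediately raises edge cases -- what counts as a valid email, how errors get reported, what locale the numbers are in -- that nobody priced in up front:

        import re

        def validate_order_form(fields):
            """Hypothetical form validator: returns a list of error messages."""
            errors = []
            if not fields.get("name", "").strip():
                errors.append("Name is required.")
            # A deliberately naive email check -- already a judgement call nobody costed.
            if not re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+$", fields.get("email", "")):
                errors.append("Email address looks invalid.")
            try:
                if int(fields.get("quantity", "0")) < 1:
                    errors.append("Quantity must be at least 1.")
            except ValueError:
                errors.append("Quantity must be a number.")
            return errors

        print(validate_order_form({"name": "", "email": "nobody", "quantity": "two"}))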

    Once you get into that situation, you have to cut corners, slap stuff together, do only the absolutely necessary stuff, and suddenly you're plagued with a poor user interface (because that takes time, understanding, and developer buy-in to get right), ugly bugs (because you didn't have time to develop, much less run through, an exhaustive set of test cases), etc, etc. And that is when you come up against the wall, and you find out that, while all this time you were thinking you could, because the tools were right there, you actually _can't_, because you _don't_ have enough time.

    Basically, the feature sirens are always calling us to the rocks, and the song is so beautiful ("We could make it compatible with ...") that we find it hard to resist. And all this time it just all seems so _possible_. In a theoretical sense, it is--but the razorlike reefs are right there under the waves. Even when we are going down we often don't see why we failed.


    --
  • Which might be easier: saying "move the mauve sofa 3 feet from the south wall and 2 feet from the east wall; place the couch facing north, at a 30 degree angle from the east wall," or "put that {point} there {point} and aim it towards there {point}"? So the command line is not the most natural way to interact with a computer in all cases.

    Of course, you really want to be saying to a computer "put that file somewhere around my home directory" or "format that hard drive, make it about a third of the drive" or "CAD program, make that sewer pipe about this long" (holds hands apart).

    Computers are exact things; they need to be handled in an exact way. The command line provides that exact way. I can't count the number of times I've been using Corel Draw and wished I could just type "box, 2in x 1in" rather than click and fiddle with the mouse, only to have the box jump 0.1" bigger when I release the button. Angles in circles are even worse.

    Rich

  • by Sloppy ( 14984 ) on Wednesday December 13, 2000 @07:20AM (#562627) Homepage Journal

    You can't just slap any arbitrary user interface onto Unix, because Unix dictates its inner self onto all layers that ride atop it.

    Ever seen OS/2? OS/2 is butt-ugly. Its config.sys makes DOS/Windows look elegant by comparison. But you would never guess this from the UI, which puts all other UIs to shame. How did they do it? They put a powerful layer in between: SOM.

    The same sort of thing can be done with Unix, it's just that no one has. Oh wait, someone has: Apple has done it with Aqua. A very different approach than OS/2, but still, the UI is completely divorced from the inner workings. I haven't used Aqua yet, but I know that when I try it, my reaction is not going to be, "This is Unix. I know this."

    The open source movement hasn't done anything like this yet, because they are too busy trying to compete with Windows. As long as they keep looking to the worst UIs for inspiration (e.g. MS Windows Explorer) and try to infiltrate the corporate world by blending in (e.g. Miguel's mailer that looks just like MS Outlook), that's just how it's going to be. But it doesn't have to be like that. The GNOME/KDE guys have set their sights very low.

    "I think you could have open source movements that, through a peer review system, adhere to the strictest, most anal-retentive quality control program as well as open source projects that function in a perpetual state of redesign. Both are entirely doable."

    Could have? That sounds like present-day OpenBSD and Linux, respectively.

    As for my explanation as to why software sucks, I think it's pretty simple: the economy has judged that software that sucks but is cheap, is preferable to software that doesn't suck but is expensive. Quality takes time, and time is money. I write software-that-sucks for a living, and whenever I give a customer a choice between something that sucks for 8 hours or something that sucks less for 16 hours, they almost always take the cheaper choice.


    ---
  • > Also, I wish that Linux could have a much smaller text editor

    -rwxr-xr-x 1 root root 158740 May 1 1999 /usr/bin/pico
    -rwxr-xr-x 1 root root 180484 May 1 1999 /usr/bin/joe
    -rwxr-xr-x 1 root root 256972 May 1 1999 /usr/bin/jed
    -rwxr-xr-x 1 root root 503832 May 1 1999 /usr/bin/vim

    Any of those small enough? How about

    -rwxr-xr-x 1 root root 47656 Apr 30 1999 /usr/bin/ed

  • The problem here is the fact that a weird OS like Linux is effectively bringing the Unix-world back like a plague that won't be exterminated.

    *giggle* 'snort' hehe... BAAAAAAAAhahahahahahah 'snof' hehhe.. What, um.. makes you think it ever disappeared?

  • This gives me yet another opportunity to hop up on my soapbox and recommend you read The Forum on Risks to the Public in Computers and Related Systems [ncl.ac.uk], also available on Usenet as comp.risks [comp.risks].

    While it's probably the case that software today is too complex to get all the bugs out, the situation could be vastly better than it currently is. There is a lot of discussion of current software problems and what to do about them in Risks.

    It is frequented by many esteemed experts in software architecture, reliability, and security, as well as by just plain folks. The bug anecdotes reported there are often pretty funny, but sometimes frightening and sobering too.

    I think anyone who works with computers in any substantial way should read Risks.


    Michael D. Crawford
    GoingWare Inc

  • Heck, I was born here in the USA, and I've been trying to speak English for ~23 years now (some would say that Americans can never learn proper English, but that ain't right ;-) So, it took me (let's say) 10-12 years to master English, but I can "Learn Perl in 21 Days" and "Learn C in 7 Days"... or, if you want to be picky, a good semester course in Models of Computation could have one fairly fluent in C, Lex, Yacc, and a few others in a fairly short time. Programming / scripting languages are more direct and precise, since computers aren't very adept at guessing what we really mean if we are ambiguous...

    I think I had a point, but I must have forgotten it by now... shoot, it probably just boils down to a "Me Too!" post by now.
    --
  • I once worked on a project run by the biggest fool I've ever worked for. Testing ran to a schedule as follows:

    Meeting at the beginning of the week to determine which new features/bug fixes would get implemented that week.

    Three days of programming, followed by a cut to CD-ROM at the end of the third day. This cut could happen at any time; once, it occurred after midnight.

    On Friday, test everything to make sure it worked, including 100% of the components that had not even been touched in the previous month. Ship to the customer no matter what was found, but report bugs at the following Monday meeting.

    No wonder the software never worked properly. Problems were being introduced due to tired developers. Not enough time was spent squashing some really evil bugs. And one developer was so utterly hopeless he should have been fired.

  • I don't understand that one. I never want to sit in front of my computer and hope it works this time.

    Understandable. Perhaps it's more easily understood another way: how much time, effort, and money are you willing to put into the reliability of this app? Is 99% enough (a quick and dirty hack)? 99.9% (some actual engineering)? 99.99999% (detailed regression analysis of every possible input and verification that the output is correct, multiple redundant everything, years of work, and still not 100%)?
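
    For a rough sense of scale (back-of-the-envelope arithmetic only, not a spec), those reliability levels translate into yearly downtime roughly like this:

        HOURS_PER_YEAR = 365.25 * 24  # roughly 8766 hours

        for availability in (0.99, 0.999, 0.9999999):
            downtime_hours = (1 - availability) * HOURS_PER_YEAR
            print(f"{availability:.5%} uptime allows {downtime_hours:9.4f} hours"
                  f" ({downtime_hours * 3600:.0f} seconds) of downtime per year")

    That works out to roughly 88 hours a year at 99%, under 9 hours at 99.9%, and about 3 seconds at 99.99999%.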

    The answer depends on what the software does. If it maintains your life support, it MIGHT be in the many-nines category, or you might decide it's better to go for 99.9% and provide manual controls (the Mir approach). It might be software to fly a negative-stability jet that cannot be controlled manually at all. In that case, the 99.9% solution won't work.

    If it's just a one-time need to process some text file, 99% is overkill. Just bang it out in awk, sed, or perl and call it good.

    I don't want my browser to ever mis-format anything or crash, but I'm not willing to pay $1000 for that.

  • What I found most interesting about this article (and I have read the original Edge paper, too) is that Jaron Lanier believes that Unix's "command line" somehow pervades the whole operating system, forcing a structure and perhaps an attitude onto all higher layers of the operating system. Needless to say, he doesn't like that structure much (although he probably understands the Unix CLI much better than most people here.)

    But nonetheless, I think he's wrong. I can give two counterexamples that show that the Unix command line structure does NOT force a "Unix-y" structure in a GUI:
    1. OS-X (Macintosh Aqua on BSD).
    2. Quake III (Easy to use UI that looks and works the same on Windows, Linux, and Macintosh.)

    I think the real straitjacket that UI designers on Linux constantly have to fight against is X Windows, which is my favorite candidate for "What's Worst About Unix". I don't want to rant about that right now. I have a different rant.

    I would like to see Linux innovate more, and I agree with Jaron Lanier that some real user interface capabilities do depend on the lower-level properties of the system. For example, Windows and Linux both essentially determine the "type" of a file by looking at the last few letters of the file name: ".exe" or ".tar.gz". At least Linux has a separate attribute for "executable", but still, this is a pretty pathetic way to identify files. A step in the right direction would be to store a little extra information in the file system, like the MIME type of the file and the name of the program that created it. User interfaces could put that information to really good use.

    Given a directory full of pictures, some with the extension ".jpg" and some with ".png", at least the user interface can guess that they are all graphics. But what if you put a new file in the directory and mistype the name, giving it a ".pgn" extension? Well then, it won't show up in dialog boxes that are looking for graphics files, and when you double-click on it in a file manager, who knows what will happen? MIME types in the file system would fix this.

    Another example: Suppose I have one collection of really huge .png files, raw 600-dpi scans that are 100MB each. And then I have another directory of .png files that are all 256 x 256 textures for a 3-D game.

    Right now, if I double-click on these in a file manager, the same program will be used to open both of them. But I might want to use different programs to view and edit these files. Having to open the program first and then go through File, Open, Browse, view another list, select the file, etc., when I was just looking at it in the file manager, is stupid.

    So that's one example of how Linux could innovate. Of course Macintosh fans are snickering right now. I suppose "Resource Forks" can do this sort of thing. I wonder how those are implemented on the OS-X file system?

    So how about we put MIME types and other useful data into one of the new filesystems - ReiserFS, ext3... Journaling is not the only improvement to file systems worth making.
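
    As a rough sketch of the idea (not a claim about how any of those filesystems actually does it): on a Linux filesystem mounted with extended-attribute support, a file manager could already stash and read a MIME type in the user.* xattr namespace. The attribute names and file name below are made up for illustration:

        import os

        path = "scan-0042.png"  # hypothetical file

        # Record the MIME type and the creating program as extended attributes.
        os.setxattr(path, b"user.mime_type", b"image/png")
        os.setxattr(path, b"user.created_by", b"gimp")

        # A file manager could later decide how to open the file
        # without trusting the file name at all.
        print(os.getxattr(path, b"user.mime_type").decode())  # image/png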


    Torrey Hoffman (Azog)
  • A clearly defined and documented interface is an integral part of any software project. Without one, all the diagrams and quality control in the world will not help when it comes to maintenance and new development.

    Ironically, Linux gets an "F" in this regard, or at best a "D", and Windows gets an "A". The API documentation in Linux is effectively absent. I know I'd be a whole lot more productive if I had as good documentation as what Windows programmers take for granted.
    --

  • hardware is designed ... tested, built upon, etc.

    I'm not so sure the hardware realm is as pure as it seems. I'm far from a hardware designer, but I've perused some of the Linux kernel source and have seen many comments about broken hardware designs. Of course, the obvious examples are probably the various Pentium bugs (FDIV, FOOF, etc.) but I was initially thinking of my (t)rusty Intel etherexpress board.

    Maybe it doesn't matter to anyone other than driver writers, since performance is improving.

    In other words, bad design or implementation impacts users of the interface that an object provides. So a crappy piece of hardware gives headaches to driver writers, but a buggy word processor gives headaches to end users (and their IT support staff...)

  • That's really odd. When I installed Red Hat 6.2 on my system at home, it did all that for me. I did have to tell it what make and model monitor I had, but it figured out the rest and came up with X the first time.

  • And if it's running windows, you get an extra "Blue-Screened Monitor Paperweight" at no additional charge!
  • I think that you are right about the design issue. When we are taught computer science, that is stressed very heavily, and you spend most of your time designing software, and working out logic, flow of information, data structures, etc...
    But that is not how the software that _people use_ is built. Many companies have discovered that by having a flashy new feature or a higher version number out on the market _faster_, they will attract users like flies to sh*t. The problem is that designing software takes time, and users are antsy and want it now, so they put up with shoddy software (and even hardware) just so they can get the latest feature. I tend to prefer designed programs that I don't have to upgrade every 5 minutes, but then as a programmer I have a skewed view. If I were a normal consumer sort of person, I'd be going ape about the fact that I'm still running a browser that's a version old, and I'm still using a UNIX variant that's built on "30 year old technology". UNIX was designed; people sat down and thought it out before writing it, and that is the only reason it's held up so well: it still solves the problems it was designed to solve. Most people who are constantly cursing UNIX are just not aware that they are asking it to solve problems that would be better solved by some other OS that fits their needs.
    That being said, software will get better when the average user demands it. The computing community already knows how to make software that works reliably, stays useful and consistent, and solves the problems it was designed to solve. The problem is that users keep buying software that doesn't work. This may be because they don't know or can't express what problems they really want the computer to solve. The author even states that a lot of his frustration with UNIX (as a good example piece of software) comes from trying to implement VR under UNIX. The trick is that the operating system under a VR rig needs to solve a very different set of problems than the operating system used for a time-share application server. VR wants realtime, prioritized execution and lots of small asynchronous tasks, whereas UNIX solves the problems it was designed to solve (sharing resources fairly among users, networking, and being a portable base upon which to write software for computational tasks). This may be a communication issue between users and developers, or it could be an artifact of aggressive marketing and the need to keep users running the upgrade treadmill to keep the bottom line up. In either case, I think that the main problem with software is a human communication issue, from what I've observed.
  • It seems to me, hardware is engineered better (or just simply *more*), because the barrier to entry is high. You don't just create a graphics chip by typing at a keyboard. You need a logical design. Written/drawn down, entered through some circuit construction program. You need real physical materials. You need to undergo an expensive process to make those materials into a physical embodiment of your logic. If you screw up, well, back to the drawing board.

    Software on the other hand has a very low barrier to entry. Got a compiler? Got an editor? Great. You're now a software "engineer"! Tappity tap. Hello world. Tappity tap. Hello enterprise. Tappity tap. Hello critical infrastructure. It is simply too easy to "just" fix this, or change that. What if every civil engineer decided that they would "just" tweak this or that parameter and see what happens? Or maybe they'd "just" fix the one faulty bolt. It doesn't happen (plus, a major advance of the industrial age was standardized building components, which software still has very little of). Software engineers, managers, and the industry need to treat the bytes that make up their software as being as important as an actual constructed, engineered structure: something one shouldn't make whimsical changes or additions to without much consideration and peer review. Because badly designed software literally ends up costing just as much as, or more than, real physical structures.
  • ...this guy is a great example.

    First, the usual false statements ("Ultimately UNIX is a command line system", "UNIX was so primitive in the 70's compared to other systems...")

    "CPUs are so ideal...software is such a lagger" If that's true, go build a corporate web page in one day with nothing but transistors, ass. It's been done with development tools on the market today.

    Ahem, many UNIX library calls have no CLI influence whatsoever...they existed BEFORE the shell!

    Primitive compared to what systems? DOS? DOS didn't exist in the seventies... Digital's RTX? Quite like DOS, really. The call infrastructure is cleaner under UNIX, according to most old-timers I've talked to.

    Windows2000? I've used the win32 api...golly, many of those system calls look like they were lifted right from Unix. Just what OS is so superior to Unix?

    What a bunch of fanboy crap. Here's my official response: "Wannabe dreadlock-hairstyled, artificially-color-enhanced-contact-wearing musicians who hang out in Tribeca coffee shops yapping about how much the software they didn't write sucks... should be shot for being the asses they are."

    Anyone wanna refute that?

  • I have written a proposal for an organized linux QA effort [goingware.com] that will center around a bug database that is accessed via the web.

    I think actually the kernel is of very high quality, compared to competing operating systems like the windows kernel or the Mac OS system, but it does take a long time for the kernel to get the kinks worked out.

    I realized when I got onto the Linux-Kernel mailing list to report a bug in 2.4.0-test a few months ago that people who were less hardcore developers would probably be put off by the process of reporting bugs to the kernel, and suggested this database to the kernel mailing list. It was generally well received.

    In the long run, I'd like all of Free Software to be more rigorously tested for quality. This is just a start - but the database source, and parts of the database itself could be reused for other projects.

    If you'd like to participate, please email me at crawford@goingware.com [mailto]


    Michael D. Crawford
    GoingWare Inc

  • by British ( 51765 )
    So I take it there's more freelance hobbyist programmers out there but zero freelance hobbyist QA people? Darn. Wonder if I could make some money doing freelance testing of other people's open source work.
  • Excuse me?

    Your list of questions applies to *any* engineering field. You need to ask the same questions if you're building a bridge. Does this mean that we should try to keep the field of "engineering" from fragmenting? Sorry; it already has.

    "Software engineering" (somewhat of an oxymoron, IMHO) will fragment, just like any other field, as the skill sets needed for the various aspects of programming drift apart. Would you want a pacemaker designed by a video game programmer? Would you want a video game designed to medical implant standards?

    BTW, your last question is not really a matter of engineering, software or otherwise. How does a pacemaker "decrease the amount of repetitive work that humans have to do"?

    --
  • Ask emulator authors how complex a processor is. While marketing departments talk up the pipelines and cache aspect, it's not very complex at all. It's just a big state machine. Software implementations of processors are very good. But who wants just a processor?

    The point is, hardware is split up into discrete packages. Your CD-ROM drive cannot, no matter how much it wants to, write directly into your processor's cache control bits. You cannot edit the value stored in the 13th pipeline status area. They are protected! In hardware, you cannot break the API. It's pins on a chip, there's no other way in to the silicon!

    But in software, you can access everything about the API, and 'hack' it. Bleah. Software, to be as safe as hardware, needs full Java-like encapsulation, where you cannot poke inside the box at all.

    Furthermore, there are far more varied software components depending on each other than you would ever find in a hardware device. Imagine a desktop computer running a web browser and a word processor. There are thousands of individual widgets in both programs. One requires a TCP/IP stack and a network driver underneath that. Both require user-interface APIs, which in turn require graphics APIs and input APIs, which in turn require drivers. Just imagine a hardware device with 500 Pentium 4s, 120 StrongARMs, 50 Z80s, 20 6502s, 100 MC68000s, 12 different bus protocols, and hundreds of ways to access the hardware devices (forget those ATAPI, IDE and SCSI standards...)

    In short, even the simplest looking computer system is a nightmare under the hood. Even if I simplified all _my_ code, I would still have far more code running from _other_ people that I cannot change at all. That's why it sucks. Hardware guys have it easy.
  • This reminds me of the comments you always see here on any KDE or Gnome story. "They're just reimplementing the same ideas. I want to see a completely new and better GUI."

    Generally I frown on "Oh yeah? Write it yourself!" responses but in this case they're appropriate. You think people are opposed to the idea of a new paradigm that makes life perfect? It's just a lot harder to come up with them than it is to sneer at and berate people for doing things the old way.

    And having Joan Baez endorse Jaron Lanier's views on software engineering doesn't make me take him more seriously. On the contrary.

  • by Hard_Code ( 49548 ) on Wednesday December 13, 2000 @07:37AM (#562700)
    Yes, but when I drive my car, do I *TALK* to it? No, I use the appropriate controlling devices for the situation: a steering wheel and pedals. Yet my bicycle's handlebars and pedals are very different. As are airplane controls. The point is, there is no one *perfect* interface. The CLI is just wrong for some things. As is a particular type of GUI, etc.
  • Both you and your pro-GUI respondents are entirely correct.

    When one wishes to converse/command, one talks or writes.

    When one wishes to wield a tool, one grabs it and manipulates it.

    What is the right way to use a computer? I don't know, but I bet it involves each of them at different times.
  • by UnknownSoldier ( 67820 ) on Wednesday December 13, 2000 @08:23AM (#562702)
    ... is because he is partially correct.

    Most software does suck. (You'll have to forgive me if I tar all software with one brush.) Windows has its problems, Linux has its problems. Anyone who can't critically analyse the deficiencies in their favorite software is either living in a dream world, a zealot, or blind. (No flames intended.)

    The root of the problem is: bloat.

    Every time we get faster hardware, we build bigger programs. We're doing so much more with the hardware nowadays -- who was doing speech recognition 20 years ago? Heck, we can partially do it in real-time now ! The latest 3D games are certainly a good example of this. Lots of nice eye-candy. *cough* Alice *cough* :)

    The software we are building is MANY MANY times more complicated than a 740. And the fact that we don't have off-the-shelf standardized components certainly doesn't help.

    Which leads me to my next point:

    The software has SO many code paths, there is NO way we can verify and spend enough time on quality assurance. And there will ALWAYS be bugs in the code we write. That sucks just as much as the bugs do. :-(

    Lastly, how many projects designate time to develop a good user interface? That's also part of the problem - too many programmers (open source/closed source, doesn't matter) couldn't put together a good GUI layout if their life depended on it. How many have taken a UI design course? How many are asking management to allocate time in the schedule for "polish"? (As a game developer I'm just as guilty. Beta = fix all the important bugs and ignore the small ones, since we don't have time to correct them. Maybe we'll do a patch; if not, on to the next game.)

    And I think the greatest tragedy of all is that people have come to expect computers to be unreliable.

    Why can't we apply and have a "process for engineering high quality software" ? Is code that nebulous? Is it because we keep over-complicating the systems?

    People aren't perfect, which means code isn't either. But it wouldn't hurt if more people took the time to ask "how can I write this code so it's easier to maintain, and something to be proud of, instead of something that is hacked together, works, and would be an embarrassment to show to your peers?"

    What are other people's views?
  • > What's the average age of a gung-ho Open Source development team? 21? ... I'm not talking about old standbys like Perl and Apache and so on

    Are you saying that if you filter out the projects where the average age of the developers is older, you are left with an average age of 21 on the projects that remain?

    --
  • but it would be interesting to see you create an illustration with a CLI

    POV-Ray creates its images based on an interpreted language (yes, it's not a CLI as such, but neither would you create an image with a graphical shell, so I took your allusion to somewhere sane).

    Many times with a technical illustration, I've wished I could create an image with words ("box 0,0,3.5,2;circle 3.5,2,1" would be so much easier than "try to draw a box to the right dimensions with the mouse, try to center a circle somewhere on the top right corner of the box"). Guides are a help, but they start to fall down with anything complex.
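
    It wouldn't take much machinery, either. Here's a rough sketch (syntax invented, loosely following the "box 0,0,3.5,2;circle 3.5,2,1" example above) of a tiny interpreter that turns such commands into SVG shapes:

        def draw(script):
            """Hypothetical 'draw by typing' mini-language -> SVG fragments."""
            shapes = []
            for command in script.split(";"):
                name, args = command.strip().split(" ", 1)
                nums = [float(n) for n in args.split(",")]
                if name == "box":
                    x, y, w, h = nums
                    shapes.append(f'<rect x="{x}" y="{y}" width="{w}" height="{h}"/>')
                elif name == "circle":
                    cx, cy, r = nums
                    shapes.append(f'<circle cx="{cx}" cy="{cy}" r="{r}"/>')
                else:
                    raise ValueError(f"unknown shape: {name}")
            return "\n".join(shapes)

        print(draw("box 0,0,3.5,2;circle 3.5,2,1"))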

    Rich

  • I've seen solid hardware from several different vendors that almost does exactly what it is supposed to... but even that is usually the third time (or more) that the chip has been put in silicon (whether full A-RITs or just more B-RITs, for those who understand that TLA).

    Hardware, especially in a development cycle, is a moving target - granted, the changes don't happen as quickly as a code rebuild, but often the subtleties are a lot harder to notice...

    --
  • So why, if I may ask, do open source projects suffer from the same problems?

    No, this is not flamebait. This is for real -- although there are a few open source projects out there that really shine, you have to admit a lot of open source projects are kinda buggy and seem to suffer from the same problems as proprietary software (although you could argue they suffer less, because problems are easier to fix when anyone can see the source -- but the problems are still there). And this is from people who are coding because they like it, not because they are under pressure.

    I think there's more to software quality than just pressure from upper management, etc.. Like other posters have said, and like the article said, there must be something more fundamental about software that we still don't quite grasp.

  • by swordgeek ( 112599 ) on Wednesday December 13, 2000 @09:35AM (#562717) Journal
    First of all, "All operating systems" doesn't mean Unix and Win*.

    I've considered this point quite a bit over the years. There is no way for a GUI to get entirely away from the command line as long as the base OS is still built on the command line. None!

    The other side of the coin is that the command line falls naturally into place with a hierarchical file system. As long as we have the one, we'll have the other.

    The Macintosh (of all things!) did it right in many ways. The command line is an extra tool--not the base level of the operating system. The Mac GUI _is_ the OS, to a greater degree than any other system out there.

  • Cruising at +1, and on this topic, I'm a little surprised to find that this was the only reference to Ada.

    Ada gets a bad rap for its roots in government, its verbosity, and the rather straitjacket nature of strict typing. But guess what... it wasn't designed for rapid programming, or to make programming easy. Ada was designed for the 20+ year lifetime that code may well have after it's put into use.

    The same things that slow the start of coding help the lifetime and maintenance. It takes the willingness to accept some discipline and take a longer-term view.

    Think of the people who never thought their code would survive to Y2K.
  • OK. But a CD player is called on only to do a small number of tasks. Most cd player apps for computers duplicate a similar interface. I think the issue becomes how you handle more complex stuff.

    Who is to say that a command-line-driven CD player wouldn't be better (or, a better example, a CD jukebox or MP3 player with lots of storage)?

    Pushing icons one after another would imho be far more cumbersome than simply typing (or saying)

    mp3> play all songs by Nirvana in random order followed by the predefined mix entitled "punk-party mix", then around 2:30 AM switch to the predefined mix entitled "bach-piano-concertos"

    The challenge is to devise a command line language that is both intuitive and powerful -- such projects as the latin perl project offer some hope in this regard.
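
    It doesn't even have to mean full natural-language parsing; a small, regular grammar would already beat a pile of arbitrary switches. A minimal sketch (all names and syntax invented here) of what such a jukebox language could look like:

        import shlex

        class Jukebox:
            def play(self, *, artist=None, mix=None, shuffle=False):
                print(f"playing artist={artist} mix={mix} shuffle={shuffle}")

        def handle(line, jukebox):
            """Tiny, regular grammar: play (artist|mix) NAME [random]."""
            words = shlex.split(line)  # respects quoted names
            if len(words) >= 3 and words[0] == "play" and words[1] in ("artist", "mix"):
                target = {words[1]: words[2]}
                jukebox.play(shuffle=(words[-1] == "random"), **target)
            else:
                raise ValueError(f"don't understand: {line}")

        handle('play artist Nirvana random', Jukebox())
        handle('play mix "punk-party mix"', Jukebox())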

    The command line has gotten a very bum rap, primarily because no thought went into devising a coherent language. Instead, CLI languages have consisted of a small collection of arbitrary commands, akin to the grunts and gestures of our cave-dwelling ancestors.

    I find it more than a little ironic that, as computers take on more and more capabilities, the language users use to interact with them becomes more and more dumbed down. I suspect we are going down the path of replacing alphabets with pictograms because pictures are easier for an illiterate to grasp, only to discover that, when that same user later wishes to write a novel, they are forced to use something akin to Egyptian hieroglyphs in order to communicate their ideas: a writing form vastly more complex and difficult to use than the 26-letter alphabet the illiterate in question should have just learned in the first place.
  • I'm betting you'll find just as many closed-source projects that are as bad, if not worse. Open source is not a magical cure for all your coding woes, but it does help with a lot of things and has many high points. As you pointed out, Perl and Apache are well-engineered, as are the Linux and/or BSD kernels (depending on opinion, though) and many other standard tools.

    Yes, there are bad open-source projects started by people who are 20 or 21, but consider how much they'll probably learn in the course of those projects. Especially if some more experienced programmers get interested and start showing them ways to improve their processes and code.


    -RickHunter
  • by toriver ( 11308 ) on Wednesday December 13, 2000 @05:53AM (#562730)
    It appears he hates Unix, not software as such. But he makes the same error as most others who do: he equates newbie-unfriendly with user-unfriendly. Which are not the same at all.
  • > The vast majority of software is a question of implementation, not algorithmic logic. It seems like it's a lot more complicated to prove implementation than algorithms.

    Actually, it's rather straightforward for a mechanical prover to read the source code.

    > There are currently a very small number of people with both the analytical skills and the severely rational temperment required to do these proofs.

    Much of it can be done by machine. Also note that there once was a very small number of people who could write programs, but now millions can. Why not set out on the same path for the methodology of creating formal proofs?

    > And that correctness (or, more likely, bugginess) lives in the code, and in the code alone, so that's where you have to weed it out.

    Per above, you have proposed a solution to what you thought was the problem.

    > It just seems like it'd be no fun.

    No one said QA was "fun". The question is, do you take enough pride in your product to make sure it's right? Also, per above, mechanical provers are the way to go.

    --
  • And if pico isn't powerful enough give nano [nano-editor.org] a try, it's got the same friendly interface with some added features.
  • by Hard_Code ( 49548 ) on Wednesday December 13, 2000 @07:45AM (#562736)
    I think we just need to realize that 100 amateur programmers writing and reviewing code ad hoc is not necessarily better than 10 professional software engineers working closely together, using some semi-formal procedure for specification, requirements, collaboration, and coding.

    A lot of the most successful open source projects were originally created by professional programmers (Unix flavors, including BSD; Apache; Mozilla; GNU utilities in general), and still retain some form of regimentation. Linux would not be where it is if Linus et al. let every single developer add their patches in, or modify code, whenever they wanted. Unfortunately, this is exactly what happens in many new open source projects under the banner of "freedom"... which ends up producing remarkably mediocre code.
  • by mat catastrophe ( 105256 ) on Wednesday December 13, 2000 @05:56AM (#562739) Homepage
    I read this article, really - I did - and I am not certain at all what Lanier said about software "sucking." At least, not in the sense that we all know about (bloat, fixes, patches, rush jobs, M$ in general, etc)

    What I got out of this was that Lanier thinks we are stuck in some tech-limbo, one where neither the software nor the laws governing the use of software are very useful. Well, that, and that UNIX is old and ugly. Big deal; people still use it, and that doesn't automatically mean that it sucks.

    And his comments about "humanity" not knowing how to make software, or even what software is, were just plain over-dramatic, hippified philosobabble. After all, why is he discussing it then? Is he not a part of humanity? Or, is he better than us mere mortals?
    Plenty of people know what software is and how to make it. The focus is to make it more intuitive, faster, and less prone to Bad Design.

  • Look [slashdot.org] before you leap.
  • I think there's a difference between designing a building and a program. There's also a difference in research that goes into the two. Yes, "software" has only been around for 50 years (well, discounting the pioneers, I suppose). But, the rate at which we learn/innovate/redesign is a bit quicker than it was when engineering was being born.

    I suppose I do agree with the fragmentation argument. But again, software design isn't engineering.

    i think i'll stop rambling now. i just ran out of ideas....

  • Think about Star Trek. How does the computer know when a crew member is talking to it or to another person?

    Duh, it's a story, it's *not real* dummy.

    If you want some semi-plausible explanation, try this: as I recall, they actually address the thing directly -- "Computer, do this."

    And back to your point about instructions in a foreign language: you have to remember that the conventions of pictograms are a learned thing as well. There's nothing "natural" about an arrow meaning "this direction" (ask any Floridian), and if you've ever tried to learn something by watching someone do it, if they don't describe what they're doing, it's very easy to get it wrong (e.g. "Insert this rod into this hole, but make sure it's the end with the blue mark." If you weren't told, you may not have noticed the blue mark).

    There's a reason that McDonald's cash registers have pictures of food on them while any serious programmer works in words. And that is because words convey more information in a less ambiguous way.

    Rich

  • Well, I think one of the reasons that software sucks is a disparity of ambition and attention span.

    I often think of software as a branch of practical philosophy -- at its heart, it is simply applied reasoning. The idea that software engineering is the solution to software quality is as flawed as saying that moral or political reasoning just boils down to plain logic. Logic and software engineering are necessary in their spheres, but not sufficient.

    A good piece of software is one which is good in many dimensions -- in its data structures, decomposition, architecture and expression, to be sure, but also excellent in other ways:

    * Understands the user's characteristics and motivations.
    * Understands non-users who are affected by the system.
    * Fills a void in the range of available products.
    * Performs well in terms of storage, speed, or latency.
    * Utilizes sustainable technology (i.e. is reasonably future-proofed with respect to platform and interface issues).
    * Has an excellent user interface.
    * Has excellent graphical design where appropriate.
    * Fulfils any ethical obligations it has (e.g. handling of sensitive private data; protection of human lives).

    As an aside, Asimov's Laws of Robotics kind of encapsulate some of this -- a robot has to follow orders (suitability to task, user interface), a robot may not harm or by inaction allow a person to come to harm (ethical and societal concerns), a robot must protect itself (sustainability and robustness).

    Getting back to the issue at hand, the number of dimensions to be optimized is daunting, and requires time and deep thinking -- the kinds of things that go out the window under a deadline or in hierarchical organizations with pressure from superiors with very limited perspectives.

  • UNIX sucks. It has become the modern equivalent of OS/360 and COBOL. Hot stuff when it was introduced, now it is a dinosaur.

    Perhaps this is the best evidence to back up Lanier's arguments; if software engineering is so damn kewl, why has it failed to produce a credible alternative to these "dinosaur" systems?

    I see a number of problems with the state of the art today, most of them psychological and sociological rather than technological:

    1. The Not Invented Here mentality: there is no algorithm or library so fundamental that some damn weenie won't try to dick with it. Whether this obsessive wheel-reinvention is done for the purposes of vendor lock-in or simply to prove how 31337 the programmer is, is largely immaterial; the point is, programmer-centuries are wasted every year on this. Not to mention the fact that 99.9% of this reinvented code will be inferior to the original due to the programmer's naïveté or bloody-mindedness. The NIH effect also leads to technology holy wars, which if /. is any indicator, probably consumes more potential developer time than anything else. :-)
    2. Overoptimistic estimation: this point was touched upon in The Mythical Man Month [slashdot.org], a book written in the early '70s (coincidentally, by the manager of the OS/360 project). The author makes the point that software is consistently late, yet this does not wake up practitioners to the fact that their estimation methodologies (assuming they have any) are bogus. Software people are incorrigible optimists, it seems.
    3. The Ship 'Em Shit Road to Riches: the very malleable nature of software allows one to fuck up now, patch later. Never mind that scattershot patching increases the entropy in the code base, making maintenance a nightmare and reducing the internal consistency of the product. It's disgraceful that software companies should be allowed get away with this, but it's certainly not hurt Microsoft's or Oracle's bottom line. How can anyone get a sensible debate going on software quality when the largest companies ignore it and their customers still shovel money into their pockets?

    I'm sure people here can think of more. All of these things contribute to the noise in the software engineering community at large.

  • by Anonymous Coward
    [posting anonymously because I don't want my boss to read this]

    Short of coming up with a new definition, the only hope for improvement is to shove developers' noses in the problem until they finally get the point.

    IMHO, neither of these will help.

    I've worked with some really brilliant programmers in my time, and those people collaborated to produce some of the crappiest code you can imagine. All of us knew that what we were doing was going to hurt us in the long run, but we did it anyway -- not because we wanted to, but because we were told to.

    It's a good developer's natural inclination to take the time to properly design code and not release it until it's ready. Unfortunately, that runs counter to every business and management rule there is.

    If you work for pay, your boss is always going to pressure you into taking shortcuts, putting in quick hacks instead of designing real fixes, skimping on documentation, etc. Their main goal is to put out whatever fire is in front of them before their boss gets called in. That situation is repeated at each higher level, and at each level it's less and less likely that the person making the decisions can tell building software from laying bricks. Ultimately, the main direction of your development process is set by people who aren't concerned at all about the quality of the code.

    When the process sucks, the code will suck. When the code sucks, the quality will suck. But as long as you keep closing bug reports fast enough, your boss will think you're doing a great job.

    IMHO, this is one of the key reasons free (in both senses) software will always drift to a higher quality level than commercial software. Dollars cause bugs. Take the money out of the picture, and a lot of the incentive to write crap goes away.

    That sounds communist, but I don't mean it that way. I simply mean that we should be prepared to accept the inevitable consequences of the way most software is developed today. If you want to change the state of software, either change the incentive, or (since money is an awfully good incentive), change the way that incentive is given.

  • IMO, the biggest reason for sucky software (and a reason that I haven't seen mentioned yet) is the *lack* of market pressure for quality software. Let's face it, users are at the point of almost expecting their apps to crash.

    Should this come as a surprise when software comes with an EULA which disclaims any warranty and forbids examination, (unapproved) benchmarking, etc.?
    In the US this position is being bolstered by new laws such as UCITA and DMCA.

    Thanks to the lack of robustness of widespread OSes like Win95, the average user has had to deal with enough crashes, inconsistencies, and general oddities that they don't expect much, so that's what a lot of companies deliver.

    Whilst Windows may or may not be easy to use, it is definitely difficult to administer and fault-find...
  • I'd have modded you up, but hey, welcome to Slashdot...

    Although you take your time getting there, your central point explains most of the reliability problems with real software: there are no boundaries imposed on what you can do, other than your own discipline to make it leaner, meaner, and more elegant.

    In fact, the "Great Shame" that software languishes while hardware improves is no surprise at all. The only thing that better hardware does for software is remove the limits to what we might try to get away with.
  • by Trinition ( 114758 ) on Wednesday December 13, 2000 @05:57AM (#562755) Homepage
    I have a feeling software sucks because it is not engineered properly.

    As the author of the article points out, hardware is getting more complex and better. But hardware is designed. It's designed, tested, built upon, etc.

    And while we're all taught that we should use proper design for software, few of us take it wholeheartedly. Sure, we might do a few object models, flow charts, etc. But how many of us use propositional logic and predicate calculus to prove our software will work as intended? And further, how many of us have unbiased testing of our software?

    I think the larger the corporation, the more of these practices might be enforced. But it is still up to the programmer to truly adhere and take these to heart. I don't think we quite have the discrete tools that hardware designers have. Nor do we have the worry that once the circuit is printed, you can't easily "fix" it.

  • Garbage.
    I had to use a W98 machine the other day, and it froze. I'd never used W98, and wanted to kill the task which had disappeared up its own arse.
    How do I run the task manager? Or is that "TaskMgr", or "TaskMan", or the other "TaskMan" (NT has 3 executables with that kind of name!).
    Right mouse button click on the Task Bar (ooh, "task" bar - maybe there's a task manager on it?) failed to offer me a task manager. Crap. Hmm, is there a keyboard method?
    In NT it's C-A-Del, but in older Windows that rebooted the system. Dare I press C-A-Del?
    Eventually I had to, as there was nothing else I could do. OK, it worked, but that was guesswork.
    I don't know if it would work on W95, and hope to never find out.
    The graphical system - being different in each evolution of Windows, completely failed me.

    On a command line Unix system I only need to know 3 things, which have been constant since time immemorial
    a) ps b) kill c) if all else fails - man.

    Which is easier to learn
    "on 3.11 do XYZ, on W98 do ABC with the mouse or PQR with the keyboard, on NT do JKL with the mouse or NMO with the keyboard"
    or
    "ps then kill"

    You've chosen your method. You can have it. I'll choose my own, thank you.
    (Haha, 32MB video card, and I spend 99% of the time in the text mode consoles, as they are more efficient for me!)

    FatPhil
    -- Real Men Don't Use Porn. -- Morality In Media Billboards
  • by typical geek ( 261980 ) on Wednesday December 13, 2000 @05:57AM (#562760) Homepage
    Lanier pisses me off when he talks about how unnatural UNIX/Linux is, with its focus on the command line. The command line is the most natural way humans communicate.

    When you get home at night, and your MOTAS/roomies ask how your day was, do you
    • A. Draw pictures and icons of how your day went.
    • B. Tell them in words how it went.

    When you have to describe a particular method of doing something, do you

    A. Draw lots of pictures?

    B. Use numbered steps with words?

    I remember a book by an AT&T fellow about data compression and information theory. Two teams had to put together a mechanical device, one team member had the instructions, one had the parts, and they were separated. One team only had an oral link, the other team had video, too. There was no difference in the amount of time needed to build the device, the added video link added nothing.

    How does Lanier plan to interact with his computers, pretty pictures? What a joke.

  • > I think we just need to realize that 100 amateur programmers writing and reviewing code ad hoc is not necessarily better than 10 professional software engineers working closely together, using some semi-formal procedure for specification, requirements, collaboration, and coding.

    Agreed. But how many of the COTS products loaded onto the average workstation were developed with that degree of professionalism?

    --
  • Niggly Software
    TCP/IP specs, BIND, ipfilter

    Jiggly Software
    xv,mpeg_play (for viewing pr0n)

    Stupid Software
    VB, Perl (oh, how the flames roll in...:)

    Sexy Software
    AI^H^H CD-ROM^H^H^H^H^H^H Web Brows^H^H^H^H^H^H^H^H^H B2C^H^H^H B2B^H^H^H P2P^H^H^H uhh... Gnutella?

    Pain in the Ass Software
    office integration suites

    Soul-sucking, Time-stealing, My-project-will-be-three-months-late Software
    Everquest, Quake, Half-Life, Tetris, xboing

    Brain-bangingly Tiresome Software
    Oracle

    Vapor Software
    a real GUI for Unix, a command line for Macs, a crash-resistant Windows

  • In my experience, there are only human impediments to successfully creating healthy software. Technologies are sufficiently developed in most languages that most software can be written fairly efficiently using a wealth of frameworks and libraries. So why does software suck?

    Combine time-pressures, market pressures, upper-management pressures, and a lack of training and professional standards, and you have a whole class of employee (the project manager) who has only incentives to lie and hedge, and no incentive to be honest about schedule, feature set, state-of-the-project, internal project problems, etc.

    Assume a project of 15 people, with a 3 million dollar budget, and a project manager leading four teams. That's pretty complex stuff to manage. Now what if it's business critical, and he's getting letters from board-members and C*O staff imparting the import of the project unto him from on high. Can you say pressure cooker?

    Now consider that three developers are fighting, and every teaser this manager has sent up the chain about problems has resulted in a standard "we expect you to solve this on time and on budget" no-help answer.

    Now imagine that some contractor or 3rd party vendor that the architect and project manager had made noises about to upper management lied about their capabilities.

    Now imagine that his buddy Frank was fired three months ago after a fiasco project in their other business line.

    Now imagine that he's not vested, but will be vested by the last month of the project.

    Now imagine why this person would ever ever ever say there's a slight problem with the project until it's almost over. Or worse, he'll "two week" the project for months over time, over budget, and they'll release buggy, crappy, untested code to the customer in a beta which amounts to an alpha, expecting the customer to catch and report all the bugs.

    It's no life at all, and certainly no way to get quality software, but it's a scenario I have seen repeatedly over the last few years.
  • In terms of deleting multiple folders, GUIs are far simpler.

    Also, there must be a reason why children are taught the Macintosh GUI in schools.

  • Particularly the specialization thing. Software is too complex and complicated today to learn all of it; you may be able to learn many fields, but none of them well. Specialization will result in higher quality products, I think. Another problem is time and deadlines. The industry is moving faster than us developers can keep up with, resulting in shoddy, untested software. At some point, though, it's going to slow down.
  • Actually, most of these questions apply to the design of everything in the world. If I'm building a bridge, a car, a building, a plane, a chair, an adhesive, anything at all, these questions apply. So, by your logic, all design tasks should be subsumed under a single profession: engineering. This whole idea of having "Chemical Engineering" and "Software Engineering" and "Civil Engineering" and "Structural Engineering" just has a poor effect on the base strategies that all engineering tasks share. Right?
  • by fhwang ( 90412 ) on Wednesday December 13, 2000 @06:02AM (#562772) Homepage
    One of Lanier's points is that he thinks software should be fragmented in practice, since it's currently used in so many ways. To quote:

    The software that runs your pacemaker is not even considered for a moment to be the same sort of entity as the software that you use to write music. ... If your interpretation of software is that it's like a bridge [and] people need to know what they're driving on, then yes, a little peer review could help. If you think of software as literature, if you're somebody like Ted Nelson, say, then what you really want is groups of people who are emboldened to try wild things.

    I have a lot of respect for Lanier, and I particularly thought One Half a Manifesto [edge.org] was a useful and badly needed bit of skepticism against what he terms "cybernetic totalism". But I think he's off on this point.

    To be sure, different pieces of software have different needs. If you're designing space shuttle guidance software, go ahead and engineer for five nines (99.999% uptime). A script that pages you when you get an e-mail from your girlfriend is probably a lot less mission-critical.

    But there are certain questions that are useful to the software engineering process, regardless of what code you're writing. To think of a few, off the top of my head:

    • Who are the users? What are their needs?
    • How quickly are the needs likely to change in the future?
    • How long should the software stay out of obsolescence?
    • How reliable should the software be?
    • How can you decrease the amount of repetitive work that humans have to do?

    Every successful software project needs to ask these questions, preferably sooner than later. Fragmenting software engineering would have a very poor effect on the development of these base strategies, so hopefully it'll never happen.

  • And how many of those projects really matter? Do you know how large the software industry is? Do you realize how many commercial products and in-house systems there are that don't follow good development and testing guidelines?

    There are two ways you get quality in software: have developers who feel proud about their work and put as much quality in it as time allows, and have developers who are paid to put quality in their software it by management who has enough sense to think beyond a single quarter's expenses.

    Right now, the former is a superset of the latter and is probably ten times the size, if not more. In open source projects, the lack of money means everything is by the former, so you end up with really high quality stuff, like Linux and Apache, or crufty stuff that pays a lot of attention to testing, like gcc, perl, and XFree86. It makes perfect sense, since you gotta love what you're doing to work on something for free, or at least, for a fraction of what you could make as a hired gun.

    Sure, you also have all the dreck on freshmeat. But since no one was paid to write the dreck, there really is no loss. Most of us are not under the illusion that a weekend project is comparible to production-quality software. Besides, the yung'uns have to start somewhere.

    --
    Bush's assertion: there ought to be limits to freedom
  • Yeah, it'll be great when users are exposed to quality stable Linux projects like Netscape. From an end user's perspective, there's no real difference between the operating system being taken out by an application and the X session being taken out by an application. If you're lucky you have another computer in the room so that you can telnet to the locked box and kill X. Otherwise, you have little recourse besides the reset switch when the mouse and keyboard don't respond anymore. Either way you lose any unsaved work in open applications, just like in a Windows crash. When running Linux instead of windows however you run the risk of doing quite a bit of damage to the file system during a reset.
    _____________
  • >In terms of deleting multiple folders, GUI's are far more simple.

    That depends on your example...
    selecting junk\ from within C:\stuff\ and [pressing delete| right click, select delete]
    isn't any easier or harder than
    rm -r junk/ or...
    deltree junk\

    if you need to delete some, but not all subdirs that don't have a common identifier, selecting multiple ones then hitting delete in a gui can be quicker than typing rm foo for a bunch of differnet foos... If they have a common identifier (say they all start with feb00 (because you keep things organized), then rm -r feb00* is a heck of a lot quicker...

    While I don't think that children are "taught" any GUI in schools (there is very little to learn about GUIs, IMNSHO - changing between Mac/Win/KDE/ Gnome/FVWM/etc usually requires less than a half day to get oriented, and within a day or two, one should be fairly proficient), some reasons that they may have encountered Macs more than often than PCs in school include:
    - Apple gave a big price break to schools (smart move).
    - The early Macs were a heck of a lot better at most things than early PCs (8088/86, 286)
    - Applications for kids actually existed, and were easily obtainable.
    - The teachers had them, since often they could get them at a discount through the school.

    Now, I've always owned x86 PCs since the end of my C64/128 days, but the Apple GS and early Macs were "ahead" of the early PCs in a lot of ways.
    --
  • Give advsh a go.
    It's the Adventure Shell.
    You walk around a maze of your directory heirarchy and can pick up objects into your knapsack (like exporer's "cut") and drop them in other "rooms", like "paste". Except of course, that the backpack can contain any number of objects.

    Just like old adventure games, you can do stuff like the following:
    pick up wizzo.o wizzo.a wizzo
    go out
    enter libs
    drop wizzo.a
    go out
    enter binaries
    drop wizzo
    destroy wizzo.o

    Yeah, sure it's quicker to use a file manager, but having the "adventure game" pedigree, it does try and understand simple English commands.

    FP.
    -- Real Men Don't Use Porn. -- Morality In Media Billboards
  • by Shoeboy ( 16224 ) on Wednesday December 13, 2000 @06:04AM (#562785) Homepage
    When I was 12, I found a box containing a bunch of old issues of Hustler in a lot behing the local 7-11.
    I began to feel sensations I'd never felt before.
    Being a scientifically minded young fellow, I immediately ran home and examined one of the low angle money shots through my microscope.
    That's when I made the horiffying discovery that women are composed of red, yellow and blue dots.
    I've been trying to live with the implications of that discovery for years now, and I haven't been able to care much about code quality.
    --Shoeboy
  • but why is the Linux kernel compressed? Wouldn't that make the loading process longer because it must be uncompressed?

    Rotating storage is slow. Executable compression products such as free UPX [tsx.org] take advantage of the fact that loading a compressed file and unzipping it on a modern CPU-memory system is often faster than loading the same file, uncompressed, from rotating storage media such as hard drives or {C|DV}D-ROM drives.

    Also, I wish that Linux could have a much smaller text editor. Windows Notepad is only 34KB in Windows 95

    You could:
    • Use pico. It's said to be much easier to learn than vim and still much more powerful than Notepad, especially taking into account all the DLLs that Notepad loads.
    • Keep an Emacs session running in the background (C-z). Parts of Emacs will be swapped in as necessary.
    • Use one of the hundreds of text editors available on OSDN Freshmeat [freshmeat.net].

    Tetris on drugs, NES music, and GNOME vs. KDE Bingo [pineight.com].
  • ... and it is Testing! Instead of making deadlines and release dates right before testing, they should be made 15/16th the way through testing. Software is not -nearly- tested well enough. Any QA department will tell you they are always strapped for time. Testing should last longer than the development of the software. Software companies should test like there is no possibility for a patch.

    --
  • by cscalfani ( 222387 ) on Wednesday December 13, 2000 @08:45AM (#562790)
    If you want to change the quality of software, you must first determine why the quality is so bad. Economics is the biggest reason for quality being poor. The reward system in place is one that rewards the company who grabs the biggest market share. Once a product has a stronghold on the marketplace, it is extremely difficult to replace it. Software becomes a de facto standard, e.g. Microsoft WORD.

    So how does a company grab the biggest slice of the marketplace pie? They do so by getting to market first. How do they get to market first? By cutting every possible corner imaginable to meet a ridiculous deadline and releasing a product of poor quality.

    All other reward systems stem from this one. Imagine the case where a company has two programmers. The first hacks a bunch of code together and makes the ridiculously short deadline. The second programmer misses the deadline but produces a much more stable product. Now once the first programmers code starts to fail, the first programmer has to "save" the company by providing short-term quick fixes to the code, reducing the quality of the code further. Management views this individual as a key individual; key because they can't operate without him. The second individual is hardly noticed, except for delivering late, which is considered aberrant behavior.

    So imagine the case where both of these programmers are leaving the company. Which individual do you think the company is going to bend over backwards to keep?

    Economics, being the ultimate reward system for business, dictates that software development will never be able to do what hardware engineers have been doing for years, create micro-components. In hardware, you can create an AND gate and package many of these gates into a chip. This chip can be mass-produced and sold at a reasonable price. Every time a circuit is produced with this part, the manufacture make a little money for each part purchased.

    In software, we can't make a simple component and standardize on it. If we created a component, e.g. a component that compares two dates to see if something has expired, and tried to sell it, what would people pay for it? Maybe a dollar. Most people would look at this component as so simple, they'd rather develop their own. But even if they did buy it, they can make millions of copies of this component with no additional cost. It would be the analogy of instead of buying an AND gate for 50 cents, you bought the AND gate factory for 50 cents. The hardware industry would have gone out of business years ago under such a system.

    Software suffers from the economics constraints more than language problems, management issues, bad programmers, etc. Anyone can buy book after book on proper techniques for managing a team of developers using a myriad of development processes. The reason we see management pay only lip service to process is because the reward system for businesses doesn't reward quality or standards.

    This is the real problem to solve (if there is a solution). Would companies pay royalties for components that were used in their products? Highly unlikely. Will people stop paying for new software because it is too buggy? Maybe, but it is usually better to use buggy software than no software at all.

    After almost 20 years, I have given up on the software industry. As soon as I can get out, I will. The better you get in software design and development, the harder it is to operate in this environment. I don't blame companies for operating the way they do. They are rewarded for doing so.

    So if you want to change behavior then change the reward system. Good luck.

  • ...speaking as someone who supplied some patches to DeCSS, and writes telecoms network management software for money, I see little reason why people cannot be multi-disciplinary. Very few people nowadays are expert in all fields of software, and no one pretends to be, but we can be masters of many areas rather than just one.

    His comments about Linux and Unixes were also off point - he fails to realise that the command line environment is an extremely efficient one for an experienced operator. Point and click environments are great for newbies, but I've yet to meet one that increases productivity for an operator who knows his onions. I do use GUIs and they're great when I'm not too familiar with what I'm doing, but when I know the area inside out its Shell Time.

    I came away from this article feeling he just wanted to be inflammatory; if he'd posted this as a Slashdot comment he would've got so many (-1 Troll/Flamebait) mod points that his karma wouldn't have recovered till the fourth millenium!

  • So why does software suck?

    Because these same pressures...

    Combine time-pressures, market pressures, upper-management pressures, and a lack of training and professional standards, and you have a whole class of employee (the project manager) who has only incentives to lie and hedge, and no incentive to be honest about schedule, feature set, state-of-the-project, internal project problems, etc.

    ... also apply to the people who write those "sufficiently developed" "wealth of frameworks and libraries." I resent people who say that the tools are good but the work we do with them is shoddy. The tools are merely another level of software which, unfortunately, often has all of the same problems as end-user apps.

    Take the apple WebObjects framework, for example. More specifically, the EnterpriseObject backend. Basically this has to do with an OO interface to database transactions; neat stuff. So why was my application crashing every couple of days after eating all the machine's available memory? Because there was an outstanding issue in the framework which failed to deallocate internal database objects. My app crashed and it wasn't my fault, and I'm still trying to find a workaround.

  • "How I hated Unix back in the '70s -- that devilish accumulator of data trash, obscurer of function, enemy of the user,"

    Think he's a little angry? I agree that most software production sucks. always has, and the future isn't looking too bright either. I'ts all because it's writen by people, not machines. People make mistakes. I can't remember the last time I was able to install a program without having to screw around with it in some way. between crashing, unreliability and the anoying habbit of breaking other software on my systems, I usualy spend more time fixing problems with these packages than actualy using them.

  • When user interfaces are designed, which is the better design? The simple pictoral design? (Like thos books for kindergardners), or a powerful and complicated design?

    In evolutionary terms, it's the simple design, because it's easier to learn. Unfortunately, this is a flaw, it will forever hobble us from creating the most powerful design.

    Do not forget, most people in western civilization have to master an incredibly complicated, unnatural, non-intutive task. This task is INCREDIBLY difficult. Most people spend 10 years mastering literacy.

    We force our kids to master a complicated and un-intuitive interface because simpler interface don't offer the power, and flexibility.

    I like powerful tools for powerful and difficult tasks. Pictures can express instructions or handle simple tasks, but only words can express abstract relationships. Without words, there can be no mathematics, no algebra, because pictures cannot express the concept of a 'variable'.

    If you know of UI research that gets around this, I'd love to hear it. Please give me a counterexample. :)

  • You're right. I think he was traumatized by UNIX as a little boy; perhaps some evil uncle with a command line rm'd his favorite teddy bear.

    He offered no useful vision, only a whining lament that accomplishes nothing. A waste of time.

    John

  • Sure, you could make your own engine, using a lathe, scraps of steel you have laying around, etc... but you don't. Why not? It's simple... you want uniform, standard, tested designs.

    It's true that machine language code is non-object oriented, but that's not the issue here. The main objective is to build software components that have known behavior in all circumstances. It's not practical to do that with procedure oriented programming. There is ALWAYS some side-effect in a non-trivial program which leads to bugs.

    Implementing interfaces with known boundaries using the object paradigm is a rugged, well known way to isolate the effects of a software defect to a single section of code. With procedures and functions free to grab at all the available variables, any line of code can take out the whole works.

    It's like using Grade-8 hardware instead of whatever happens to be in the bin at a hardware store. You get a high-strength, durable, part with guaranteed specifications. While it wastes some CPU cycles, it's an acceptable tradeoff for 100% reliablity.

    --Mike--

  • Vim [vim.org] is an _awesome_ vi derivative. Syntax highlighting and a million other cool features blended into a very smooth programming and editing environment. It's pretty small too (~1.8Mb stripped) given all the things it can do. The syntax is pretty easy, in practice (with recent vim versions) all I need is ESC to get to command mode, and i to go into insert (i.e. edit) mode. in command mode :w writes out the file, :q quits, and :wq writes and quits. That's, what, 5 commands? I bet notepad has more menu options than that... :-)

    The first thing I do on a new system is install vim and symlink vi to it. The second is usually transitioning to qmail.


    --

  • KDE2 actually does this, if it can't determine the filetype from the name it runs file(1) on it to determine what the proper MIME type should be. I believe that this information should be stored with the file meta-data, it would seem to be too much of a performance penalty to scan the file every time it is accessed. The MIME type could be determined when the file is created and whenever the file is closed after writing.

    Just my $0.02

  • I read the linked article as well as Lanier's very long essay in Wired.
    Lanier proposes that someone who writes software for a pacemaker be prohibited from writing software for music composition. He compares such mixing to a brain surgeon running a tattoo parlor on the side. I disagree with this idea. I think the mixed backgrounds of programmers provide valuable cross-fertilization.
    The focus of the Wired essay was Lanier's disagreement with 'futurists' who predict techno-utopias or dystopias. He points out correctly that software is far too primitive to power the dreams or nightmares of these writers.
    One thing I found irritating in the Wired article is that Lanier is clearly a Windows user and ascribes many of Microsoft's shortcomings to some deep-rooted problem in software development, when they're actually just caprices of the gnomes in Redmond. If you read the essay, you'll see what I mean.
    Also, he calls Unix 'that devilish accumulator of data trash'. Jaron, maybe you need to take a look at the man pages for crontab, find and rm.
  • That's just ridiculous chauvinism. Unix isn't so great. It has obvious advantages, in some areas, over certain well-known straw horses which Unix devotees like to poke fun at. But it has many flaws too. The problem is that these flaws become accepted as part of the religion: Since they're part of holy Unix, they must be right. But they're not.

    And when it comes to quality, Unix gives you all the tools you could possibly want to make software buggy.
  • Yeah, most of the WMs for X don't allow a lot for keyboard input. I found that KDE does a better job than most (if you want it to), including the most basic of things (though it has gotten less basic in KDE2) - Alt+F2 brings up a line where you can type in a command to run (better be a GUI prog, though - stdout + stderr go elsewhere).

    Neither of the worlds Win/X are even close to ideal, but since both have been grabbing some of the better features from each other, we can hopefully expect the overall usability of each to improve (wow, doesn't that sound like a bunch of Meta-BS)...
    --
  • You should be able to enter 2" by 1" in coreldraw.

    Yes, that is my point. I am not trying to say "CLI only, no GUI", I'm just arguing against the opposite opinion.

    You should be able to say to your computer, "format the hard drive - make it 1/3 of available space."

    Fine. But that's not what I said. 1/3 is an exact quantity, I included the word "about" which has no real meaning to a computer

    The software interfaces do not properly model the way that humans accomplish tasks.

    Humans accomplish tasks according to the interface and tools given them. this happens in real life also. We use improvisation and lateral thinking all the time. We are flexible, may as well take advantage of it. Let computers do what they're good at, and we, us. Don't cripple them just so we can be a bit more lazy.

    I mean, look at the Easel, KDE and Gnome - Brand new user interface layers, and what are they? More of the same old thing - with different pictures and colors.

    Oh look, it's the "Unix is crap cos it's 20 years old" argument with a different subject. While not being a huge fan of GUIs, I have to say that the reason things are duplicated and persist is because they work. We seem to be in an age where we have gone beyond change being good but it's become compulsory.

    Humans are NOT exact things - computer/human interfaces should be designed to interact with humans, not humans having to be trained to become explicit and precise.

    Computers are a tool and tools should be designed with the job at hand. I don't complain when I'm fixing my car that my screwdriver should be able to prevent me having to lie down on the ground under the car and thread my arm through a narrow path between the engine and the firewall, nor do i complain that the whole engine should be dismantleable by undoing a nut on the top of the engine. Humans are by far the most flexible thing in the known universe. Sure computers shouldn't be programmed by dip switches anymore but there is a middle ground and it's not to have computers do all our thinking for us.

    Rich

  • The point is that the play button is an icon. You press it, just like you would press an icon in a GUI. Even icons in GUIs are labeled. The difference is that the CD player isn't just a box with no labels or buttons that you have to remember the commands for

    OK. But a CD player is called on only to do a small number of tasks. Most cd player apps for computers duplicate a similar interface. I think the issue becomes how you handle more complex stuff.

    Rich

  • by Junks Jerzey ( 54586 ) on Wednesday December 13, 2000 @06:12AM (#562831)
    Now, now, now, don't take that as a flame. But what's the average age of a Slashdot reader? 20? What's the average age of a gung-ho Open Source development team? 21? Realistically, you don't have much software engineering experience at that age. Heck, how many Open Source projects have regression test suites? Two?

    I'm not talking about old standbys like Perl and Apache and so on, but most of the other projects that get started by young coders and garner press simply for being "open source" without code quality ever being an issue. I don't want to name names, but examples are plentiful. Having a relatively simple project with 5 megabytes of *compressed* source code should scare anyone off.
  • There's a little triangle pointing to the right, you know that it means play, it doesn't say play

    Yes it does. It doesn't say it in written English but you've learnt that a little triangle pointing to the right means "play". It could reasonably mean "eject tape" or "move player three inches to the right". It says play just as much as the combination of the P,L,A and Y symbols.

    and you don't command the player to "spin up the motor, focus the laser and detector assembly for optimal signal/noise ratio and start reading", you push the play button.

    See, you just proved my point.

    Incidently, it's been shown that accomplished readers recognise whole words rather than actually "read" them so in a certain sense, for these people, words become like pictograms anyway.

    Rich

  • None of the popular OSs out there are truely object oriented, and very few of the development platforms as well. There's no way for the end user of an application to pull back the covers and see all the objects at work, so they could fix it themselves (or at least see the problem).

    Visual C++ is NOT object oriented in a nice way, Delphi is much better, but still not there... Everything is still doing function calls with parameters, dangling pointers all over the place, and the number of layers is going up all the time. It's no wonder that the stuff breaks more often instead of less.

    --Mike--

  • In what way do you think UNIX sucks?

    UNIX was a revolutionary piece of software in the 1970s. According to this chronology [linuxjournal.com] by Ronda Hauben, the UNIX kernel was born in 1969, over thirty years ago. V7 UNIX, which was many people's first exposure to UNIX, was released ten years later, in 1979.

    I don't mean to denigrate the accomplishments of Thompson, Ritchie, Kernighan and the others who contributed to UNIX. The longevity of UNIX is a tribute to their genius.

    If we pick 1979 as the baseline, what has been accomplished in the past twenty-plus years? The Mac and its GUI based OS debuted in 1984. TCP/IP became a standard part of nearly every operating system. Everything got cheaper and faster. And...

    Computer hardware advanced from the 8 MHz 16-bit 8086 and 1200 bps modems to 1+ GHz 64-bit microprocessors with complex and sophisticated architectures, gigabytes of memory and gigabit LAN connections.

    What happened to the progress in operating systems? We have the software equivalent of a Ford Model T powered by an advanced gas turbine engine that produces thousands of horsepower.

  • No matter how refined computers become, no matter how many orders of magnitude of AI we pile on top of each other, no matter how elegant and compact and all-encompassing thing become, there will always be a need for someone who understands the underlying components. Somebody will have to know what is inside of the black box. There will always be McGiver situations where a more elemental understanding of the technology is necessary.

    Obviously, components are the first step, more Lego-ing and less handcrafting of software. But I disagree with anyone who thinks hardware is way out in front of software. If hardware speed were indeed so far out in front, then games would be written in VB or Java or Python or Smalltalk or Modula-2. As it stands today, we're still facing a large performance gradient. Make the gradient go away, i.e., make it so that bulky, overhead-intensive-but-elegant languages fly just as well as tight, super-optimized C or Assembly, relatively speaking, and then you can talk about hardware's arrival.

    I believe GNU/Linux is successful mainly due to politics, but partly due to "technological honesty". Sure, the overall user/developer experience of the Unixae is more primitive than, say, Microsoft Windowses, but what sort of progress has MS really brought? Phony, glitzy bells and whistles is not necessarily progress. Besides COM, MS really hasn't much to offer in terms of elegance.

    Another aspect of the open source/free revolt is that this is the first time true freedom in the computer world has appeared. Back in the late '70's at my university, if you weren't a 3.5 math major, you didn't get into the CS emphasis, and, hence, you never got your hands on a computer; you were just a peasant. Microsoft Windows changed that with Peztold's tome on Windows 3.1 programming, and now, finally, GNU/Linux has broken the field wide open. We really haven't had much time to flex our muscles yet. The world simply hasn't hit its software development stride yet.

    All in all, a few more years of open/free software and true universal hardware speed will turn things around.
  • Re: If you can drag a lasso around the old fruit (sorted by date via the sort bar, one click)

    Take a good look at exactly what "Sorted by date via the sort bar" implies. You idiot, if that was possible it would also be trivial to do that with a CLI-program that deletes by date. In fact the command could be reused with no brain power at all, while the GUI would require you to keep clicking on the correct date (unless the GUI had a field that said "date to delete before", but if you think a usable GUI would provide that and all the billions of other possible reusable commands you are really mistaken).

    An awful lot of attacks on command lines make this same mistake: that the GUI computer magically has information that the CLI computer cannot access. The most common example is "I can click on the document in the GUI and it opens". This requires the computer to know a lot of information, like "what program to run when this file is identified and how to tell that program about it". This information is not magically there just because it has a GUI. If the computer knows it there is no reason a CLI program cannot do it, for instance a shell should be able to let you "type a file name in and it opens it".

    What is true is that the developers of the new Linux GUI are making the same crappy mistakes as Windoze, in that they are not improving the CLI at the same time. Why doesn't KDE provide a "launch" command that takes a list of arguments and does exactly the same thing as the user clicking on those icons?

  • by fhwang ( 90412 ) on Wednesday December 13, 2000 @06:13AM (#562848) Homepage
    I agree with most of what your saying, but allow me to take issue with the idea of logical proofs in software. Not that I've done it much, but there are a few reasons I'd rather not rely on this:
    • The vast majority of software is a question of implementation, not algorithmic logic. It seems like it's a lot more complicated to prove implementation than algorithms.
    • There are currently a very small number of people with both the analytical skills and the severely rational temperment required to do these proofs. So relying on that for QA seems like it would put a really nasty bottleneck on the process.
    • I have this sneaking suspicion that any abstraction of code -- UML, flowcharts, proofs -- is only that: an abstraction. And that correctness (or, more likely, bugginess) lives in the code, and in the code alone, so that's where you have to weed it out.
    • It just seems like it'd be no fun. Don't get me wrong; I try to make my code clean and maintainable and all that. But if I had to run it all through some tortuous logic to prove to myself that it works, I don't think I'd ever want to code at all.

    I personally think Extreme Programming [extremeprogramming.org] holds a high amount of promise. At times it seems like dogma, but in general its principles make a lot of sense to me. Keep those feedback loops tight. Fixed requirements are a myth, so don't ever design for fixed requirements. Well-maintained test cases are as important as coding.

    (Of course, I work in a web shop, and the definition of QA in our industry is "Those colors look right in that Photoshop mockup", so it's not like I really know from experience. Just some random thoughts.)

  • I remember when Six Sigma quality control (3.4 parts per million defects allowed or 99.99966% quality) was the big buzzword around management. Not sure if you could ever apply manufacturing line techniques to software quality, but at least someone started talking about quality. Nothing much ever happened, but I?m sure a bunch of buzzword books were sold.

    Management told the engineers they wanted it, the engineers submitted an estimate of the cost and time to develop (in half the time it takes to do an estimate right because management wanted the figures now). Management got horrendous sticker shock, gave them a fraction of the estimated time/budget reflecting what they could (or wanted to) afford along with external factors such as when the competition would likely have a similar (in purpose) product.

    The developers took that and attempted a features/bells-and-whistles triage but marketing wouldn't budge. They then produced what they could within those constraints: the same old thing.

    I'm not saying that developers can't do better, just that developers are far from solely responsable for the current situation and cannot fix it alone. Management isn't solely responsable either. They have to be responsive to the customers who want it yesterday and will buy from the competition if it is first to market (even if it is crap).

    The tragedy of the commons comes into play as well. There is always a vendor who is willing to produce total crap and use marketing to get it into wide circulation. They compensate for being crap by designing vendor lock-in and anti-interoperability features to keep the other guys from coming in a year later with GOOD software and taking their customer base away.

    Thus, other vendors are strongly driven to produce crapware as well to grab market share (and avoid being killed by network effects). They tell themselves that v1.5 will be what 1.0 should have been. Unfortunatly, they are in a race with crapware, inc. to release v1.5.

  • First off, I wanted to thank you for the Jurassic Park reference, which made my laugh out loud when I heard it for the first time in the theatre.

    I also wanted to agree and expand on the idea that software sucks because the market says it's OK for it to suck. In general, the market now thinks it's OK for a LOT of things to be pretty mediocre - cheap plastic merchandise, McDonalds, TV. Software fits right in that group.

    On the other hand, because it has been OK for software to suck for so long, I totally agree with the point made in the article that humans, as a group, have not really figured out how to make software. We haven't really needed to. I feel all current ideas on managing software projects are pretty much wrong at the core of things, and until we figure out why there's a lot of floundering to be done.
  • by Gefiltefish ( 125066 ) on Wednesday December 13, 2000 @06:15AM (#562854)
    Lanier said,
    "Basically, if you heard your brain surgeon also had a tattoo parlor, you'd probably demur. Right now we think of them as the same thing. We think it's perfectly all right for people to go back and forth. I don't think it is."

    This is a fascinating suggestion and I think, aside from the flamebait in the article for Linux lovers, this may be the most important idea expressed in the interview.

    That idea is that the people writing software should not only be generalists but software specialists(though I'm sure generalists are needed in software engineering just as in medicine). In this statement he both elevates software engineering to the level of medicine (which it certainly deserves given its importance in so many settings), but also calls for a more systematic approach to the social role held by coders.

    Granted, while physicians are not perfect, their professional development model might not be a bad one to follow in the field of software development. Study fundamentals and then actual applications. Work under the tutelage of those more experienced. Increase professional independence. Share new ideas with peers and undergo peer review.

    I think that a shift of this nature would definitely help propel software development leaps and bounds. And though it may be a good idea, what can be done to push things this way?

    food for thought anyway...
  • In the beginning there was nothing, and then all of a sudden there was a big explosion and nothing became time and space. Twelve to twenty billion years later a bunch of creatures that use their two limbs to stand straight and the other two to masturbate and click on the NEXT button of their pr0n site are arguing about quality of something as abstract as software development process.
    Things have most certainly changed.

    Before the 1995 hundreds of software writers breathed, ate and shitted with Assembler and C. The only few clear moments in their lives worth remembering were the times when they found solutions to some obscure computer problems at the lowest level - at the hardware level - these were the junkies who coded bit by bit, only caring about performance; computers were slow.
    Today armies of so called IT professionals are really trying to make guesses, as well educated as they possibly can, about scalability of multitiered web and wireless based systems, that must serve millions of customers, mostly in real time (rarely with back bone batch processes.) These are different breed of people, they don't need to know too much about the Assembler behind their Java and VB virtual machines.

    Obviously it is hard to design software that works like Star Treck's does but it does not mean that software today sucks, it only means that this software is the proto software, it is the evolution that uses simple patterns with countless iterations to produce complex systems that at some point in time will most certainly be integrated together into one huge breathing software monster. There will be no question about AI anymore, the software will live its own life all around us. Be prepared.
  • I think adding attributes to files is a mistake. This is because it does not match the way a user thinks of objects. If you have a car, you can tell it is a car by looking at it, not because it has a tag next to it that says "car". If you give the car to a friend, he can also tell it is a car, even though he does not get the tag.

    The solution is to analyze the contents of the objects. Like the Unix "file" command, which runs rules that examine the starting bytes of a file to determine what it is. The advantage here is that when the file is copied somewhere, it's "type" is preserved, because it is part of the data.

    What is missing is a file system where accessing the first 100 or so bytes of a file is as quick as reading that file's name, so that we could display the type as quickly as the current filename or attribute based hacks. Does ReiserFS or anything do this?

    There should be a command line program to do exactly what "open" does in Explorer, or clicking a file in KDE, and we should require GUI's to exec this program so that all the GUI's agree on this system.

  • From the article: "I mean, everything in Unix is ultimately based on command line interactions. You can try to overcome that, but it's very hard. Unix's whole philosophy on how to do internal management and how to manage timing is based on that set of assumptions,so you have to fight it at a thousand levels."

    He might as well have bitched over the binary nature of computers. Like so:

    I mean, everything in computers is ultimately based on binary interactions. You can try to overcome that, but it's very hard. The whole philosophy on how to do internal management and how to manage timing is based on that set of assumptions, so you have to fight it at a thousand levels.

  • "[But] this breathtaking vista must be contrasted with the Great Shame of computer science, which is that we don't seem to be able to write software much better as computers get much faster."

    I don't get it. How am I supposed to write "much better" software with a faster computer. Did anyone ever believe that "quality of products" and "horsepower" are related?

  • by donglekey ( 124433 ) on Wednesday December 13, 2000 @10:58AM (#562881) Homepage
    I have always thought of creating a program like making a samurai sword. I know it sounds weird, but here's why. A sword is supposed to be an elegant and simple tool or weapon that really has many uses. Software, likewise should be the same way. You wouldn't want a sword that was mostly handle would you? what would be the point of that? In the same respect you wouldn't want a web browser that was all buttons and had a small windows for looking at a web page. And you wouldn't want a heavy sword, that's dull and breaks all the time would you? (think netscape 6). This is how I would break down some of the software that I use, using the sword analogy.

    1. Windows - Large sword, somewhat dull, extra protrusions that make it only fit in their custom sheath. Dullness makes it harder to cut yourself, so it is more for those who don't want to develop enough skill to use a sharper sword, don't think they need one, or are scared or cutting themselves.

    2. Linux - balanced different, awkward at first, but better once you get used to it. Really sharp, really durable, doesn't break, cuts through most problems easily. Not for the uninitiated. Blades everywhere, flexible but complex. Not just one knife, but lots of specialty knives designed for cutting through anything, each one for something different.

    3. Winamp - a sharp, solid dagger, lightwieght, customizable with different blades, different designs for the handle.

    4. IE - seemingly small handle, large blade, elegant but very fragile.

    5. AIM - cuts through most of what it needs to, but is getting kind of heavy (bloated) not as bad as ICQ I think.

    6. Gimp - Solid, sharp small handle, long blade, even comes with tools for sharpening. (think script-fu)
  • by BluedemonX ( 198949 ) on Wednesday December 13, 2000 @10:58AM (#562883)
    For starters, let's say that I've a lot of respect for what Mr. Lanier achieved in the early 90s with VR, and understand that whole pop-semiotics/Mondo 2000/Boing Boing/Modern Primitive/Cyber-everything vibe he's coming from.

    But allow me to reiterate that HARDWARE has a DEFINED FUNCTION. Bits go in, other bits go out. Key gets turned, engine starts. I buy a car and a plane and a boat - I don't expect to buy one vehicle that can fly, swim and roll on dry land. His problems with modern software stem from his paradoxical and contrary wishes for software engineering.

    Lanier seems to have a scattergun theory as to why software sucks. It's either "the engineering isn't there" (definition A) , and/or "what it does for the customer doesn't empower him" (definition B).

    To the first part I say software is considered by its mercurial nature not only something that CAN'T be engineered, but MUSTN'T be engineered. Cause if you make plans and stick to them, then marketing can't decide at the 11th hour to retool everything to include holographic agents and a Star Trek like voice input (after a night of cocaine and "brainstorming"). To whichever jackass points me to "if programmers built buildings a woodpecker could destroy civilisation" I say "make up your goddamn mind whether you want a condo or a three storey house BEFORE we pour the concrete, and we'll talk." No other form of engineering comes up with the ludicrous idea that plans are mere suggestions and subject to change at any time, to produce a product with no real set function (is Word a word processor, a typesetting program, or an HTML editor? MAKE UP YOUR MINDS). However, coming up with small, ruthlessly efficient software entities sounds like UNIX, which he says sucks by definition B of his rant.

    As for his ideas that programming languages and interfaces should be humancentric, let me put it this way - computers process DISCRETE SYMBOLIC pieces of information, humans process PATTERNS and CONVOLUTIONS. I don't expect my shovel to shovel my walk for me, why should I expect my computer to respond to me drawing out what I want with magical icons and lines and all other forms of strange ideas when it comes to PROGRAMMING? (Visual Language?) Any machine that can accomplish such a feat would have so much software bloat by definition it sucks by definition A. A computer is a tool, not a mystical genie that's going to solve my problems for me. If I want a secure Internet connection, it behooves me to either learn how TCP/IP works, or hire someone who does. The answer is not to have a big red button that says "secure my system" cause I can assure you, on a system like that, someone's working on a big GREEN button that says "hack this system."

    Most software sucks because people don't want to take the time to figure out what it should do or who wants it. Marketing seems to think that rather than research what a product should do and who to sell it to, it's just there to listen to every potential customer ("I want a minivan". "I want a sports car." "I want a quarter ton pickup") and spit back every ludicrous request ("We've got to come up with a sports quarter ton minivan!"). Management seems to think that rather than getting specific specs by a specific deadline and then staffing the needs of the project, giving the engineers and testers the tools, and then getting the hell out of the way, it's all about constant "meetings" and Microsoft Project bar charts.
  • by gardenhose ( 85937 ) on Wednesday December 13, 2000 @06:20AM (#562890)

    OK, you get home around 7 and there's a stack of fruit on the floor. Some are kinda old, moldy, some are green, some are red. You want to throw away the oldest ones, a few of them, but not too many. And definitely all the purple ones.

    So you tell your roommate: "Get rid of all the old fruit and all the purple fruit."

    Type that in to your CLI:

    $ get rid of all the old fruit and all the purple fruit

    It won't work. NLP, sadly, just isn't there yet. We need a GUI for these choose-and-do tasks. It makes sense, honestly. If you can drag a lasso around the old fruit (sorted by date via the sort bar, one click) and then eyeball the purple ones, (click-click-click), that is the most natural way of getting things done Communicate? You're not communicating, you are giving orders.

  • by kmcardle ( 24757 ) <ksmcardle AT gmail DOT com> on Wednesday December 13, 2000 @06:23AM (#562924)
    I think that what he was trying to get at is that software design and coding need to become more fragmented. A robotics engineer and a civil engineer are both engineers, but I really don't think the two could switch jobs that easily. They use many of the same methods to design things, but the end results are much different.

    How many centuries did it take for engineering to develop? Modern software design is only about 50 years old.

    Skyscrapers are built in a few years. How many years of planning go into those few years of construction? How many thousands of people are involved? Compare to your typical software project. Large programs are written in 18 months (yeah right). How many years of planning go into those 18 months? How many thousands of people are involoved?

    I would argue that software is not yet a repeatable process. It can be, but Lanier is right. Humanity does not "get" software design. It will eventually "get" it, and we will all be better off.

  • It appears he hates Unix, not software as such.

    Indeed. In fact, he complains:

    You can't just slap any arbitrary user interface onto Unix, because Unix dictates its inner self onto all layers that ride atop it.

    But one of the beauties of Unix is that its inner self is so good to start with, which in turn actively benefits any user interface you slap on top of it. The fact that Unix has survived so long at all is testament to that. Sure, it's not perfect, but Unix has very solid foundations, and they really aren't holding back anything.

  • by Junks Jerzey ( 54586 ) on Wednesday December 13, 2000 @06:36AM (#562950)
    Did you read the original paper at edge.org or did you just read the fluffy overview that this story references? The original paper is several months old and is much more detailed. It's somewhat surprising that the Slashdot guys missed the original paper completely.
  • by jaf ( 121858 ) on Wednesday December 13, 2000 @06:29AM (#562953) Journal
    Your roommate understands english, a highly irregular and ambigous language.
    You computer understands a different language.

    find . -atime +7 -exec rm {} \; rm *purple*

    it takes maybe about 5-10 years to learn how to speak english fluently. Shell scripting or even C/C++ is not that difficult.

    Don't fight the command line. It is your friend.
  • by kfg ( 145172 ) on Wednesday December 13, 2000 @06:29AM (#562955)
    They are annoying, time destroying, silicon paperweights.

    Much, MUCH better.
  • by eXtro ( 258933 ) on Wednesday December 13, 2000 @06:32AM (#562972) Homepage
    The command line is the least natural way to communicate, though that doesn't mean its always the least efficient. When you get home at night, and you MOTAS/roomies ask how your day was, do you
    A. Draw pictures and icons of how your day went. B. Tell them in words how it went.

    You tell them in words, but this is completely different than interacting with a device. Many of the words you use are iconic as well, i.e. figures of speech or metaphors. Consider that Lanier "pisses you off", its a metaphoric way of saying that his point of view is incompatible with yours. We understand what you mean even though "pisses you off" doesn't translate directly to "point of view is incompatible with mine". Body language can be purely non-verbal yet every bit as expressive as verbal communication.

    Consider an interface to a common device, a home entertainment system. It's almost purely iconic in nature. You press the on button on your remote control, which may be labelled as "ON" or with "1/0" or just green. This is an icon that is seperating you from the real physical hardware and masking the complexity from you. You don't think in terms of "I'll insert a conductor into the circuit, thus completing it and allowing the flow of electrons". You think "I'm going to turn on the television". The interface to your VCR or DVD player is the same. There's a little triangle pointing to the right, you know that it means play, it doesn't say play and you don't command the player to "spin up the motor, focus the laser and detector assembly for optimal signal/noise ratio and start reading", you push the play button.

    For most people this is how computers should be. They're tools, incredibly complex tools, but the complexity should be hidden behind simple metaphors. It isn't most peoples business to know about how computers do things, only that they do it. Programmers need to know the how, and they have languages that allow them to control it via a command line environment. Of course PERL, C++ and CSH are all metaphors. In reality you're controlling the flow of electrons through some 10's of millions of transistors. Oh, wait, the transistor is a metaphor too. If you want to be really accurate you're controlling the doping of semiconductors. But wait, there's more, thats a metaphore too, on the atomic level you're... Wash, rinse, repeat. The complexity never ends.

Get hold of portable property. -- Charles Dickens, "Great Expectations"

Working...