Towards The Anti-Mac Interface

Pointwood writes: "Joakim Ziegler (Webmaster for Helix Code) has written an article about where computer interface design may be heading. It takes a paper called "The Anti-Mac Interface," written in 1996 by Don Gentner and Jakob Nielsen, and looks at where we are now. In the Anti-Mac Interface paper, Gentner and Nielsen explore the types of interfaces that could result if they violated each of the Macintosh human interface design principles." Excellent article.
  • In fact, it might seem that the path from UNIX text mode interface to Anti-Mac interface is a lot more natural than from Mac or Windows to Anti-Mac. For instance, the typical X screen, be it using GNOME, KDE, Afterstep, or FVWM2, is a lot less metaphor-laden than a Windows or Mac desktop. Where the Mac and Windows desktops show your computer, containing different drives, documents all over the desktop, etc., this is much less common on an X desktop. The Mac and Windows interfaces are centered a lot more around a metaphor driven (desktop, folder, document) experience than the X desktop, where, even though file managers exist, users are more likely to open a shell window to manipulate their files.

    Also, ever since CDE, all the way up to GNOME, X desktops have typically had a launcher panel, which can contain buttons to start applications or perform other actions, as well as embedded programs that show system information. This interface is a lot more abstract and power-user oriented than the "let's model your office" metaphor of the Mac. Notice that Windows has actually moved away from the Mac interface lately, towards more abstract representations of the user's environment, with such things as the task bar and start menu, "recent files", and so on.

    Now, I agree that there are a lot of things wrong with certain elements of the Mac interface, but the idea that the typical X setup is less metaphor-laden than Mac or Windows is absurd. Most file and window managers for X attempt to be as much like the Mac or Windows interfaces as possible (with varying degrees of success). File managers use the same "folder" and "document" icons as everything else. The bottom layer of the screen is invariably referred to as the "desktop." (etc etc) This doesn't become any less true because most X users prefer to use the command line.

    Besides, I don't see how the launcher used on many X desktops is any more abstract than, say, the Mac's launcher or its control strip. How is the Windows start menu any different from the Apple menu? How is the Windows "recent files" feature any different from the Mac's "recent files" feature? Apple has been "moving away from the Mac interface" as much as anyone else, apparently.

    It may be correct that true UI innovation is more likely to happen on UNIX than on other systems -- it is hard to imagine Apple or MS discarding the interfaces they have invested so much in. We shouldn't kid ourselves into thinking that X is closer to this ideal than other systems, though.

  • by Anonymous Coward
    How about Anti-Linux? Take the basic linux design principles (poorly documented, tacked-on GUI, poor hardware support, hypocritical zealot users, poor choice of high-end applications, penguins, the list goes on) and reverse them.
    Whaddaya get?
  • Thanks for the info. I don't spend a lot of time at video arcades anymore. Maybe the use of the term "automatic transmission" was inappropriate. I guess what I meant to say is that they are clutchless transmissions. I think that is correct and is more important to my comments anyway.

    As a side note, while a true automatic transmission would indeed be worthless for race car driving for the reasons you point out, I have heard that it is the transmission of choice for four wheel driving. It gives the driver more fine grained and even control and acceleration at slow speeds.
  • Windows 2000 attempts to do the 'scaling' thing. Menus (from the app menu, and pop-up context menus) show the 'most common things'. If you don't see what you need, you can hover over a little 'arrow' which opens up the full menu. Select what you want. Next time you use that menu, that item is now part of the abbreviated menu. Menu items you use most are made available in short, easy-to-see/find menus, while everything else is hidden-yet-easy-to-find so it doesn't clutter up and overwhelm users who don't need it.

    It'll be interesting to see if that's necessarily a GOOD thing or not...

    - Spryguy
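
    To make the mechanics concrete, here's a minimal sketch of usage-based menu abbreviation -- hypothetical code of my own, not anything from Microsoft; it just counts selections and shows only items that have actually been used, plus an expander:

        /* Sketch of usage-based menu abbreviation: items the user has
         * picked appear in the short menu; everything else hides behind
         * a "more" expander.  Hypothetical, illustrative code only. */
        #include <stdio.h>
        #include <string.h>

        struct MenuItem {
            const char *label;
            int uses;              /* bumped every time the user picks it */
        };

        static void show_menu(const struct MenuItem *items, int n, int expanded)
        {
            for (int i = 0; i < n; i++)
                if (expanded || items[i].uses > 0)
                    printf("  %s\n", items[i].label);
            if (!expanded)
                printf("  v (hover to expand)\n");
        }

        static void pick(struct MenuItem *items, int n, const char *label)
        {
            for (int i = 0; i < n; i++)
                if (strcmp(items[i].label, label) == 0)
                    items[i].uses++;   /* promoted into the short menu */
        }

        int main(void)
        {
            struct MenuItem file_menu[] = {
                { "Open", 1 }, { "Save", 1 }, { "Versions", 0 }, { "Exit", 1 },
            };
            show_menu(file_menu, 4, 0);     /* short menu: no "Versions" */
            pick(file_menu, 4, "Versions"); /* user digs it out once... */
            show_menu(file_menu, 4, 0);     /* ...now it's in the short menu */
            return 0;
        }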
  • I guess my point is that GUIs simply follow the 80/20 rule, and that is by design. Arguing for the CLI by using the 20/80 case is really an appeal only for super-power-users and system admins.
    --
  • Or more exactly clutch pedaless (pedalless?) transmissions. You can't be too careful around here.
  • I hadn't heard of the anti-mac paper before, so I read the new article, then I went and read the original. Is it just me, or does the newer article come across as a strange skewing of the information in the first one? I get the impression that the author of the newer article is trying to use the anti-mac paper to support his own interface ideas, ideas which don't really seem to fit the anti-mac model at all. They sorta do, but overall there's no new information there worth reading.

    The original is a classic. Definitely read it, when you find the time.
  • I thought Mac happened. No?

    Yes.

  • "...but I can drag an icon from one directory to another faster than I can type "copy readme.txt d:\rogue" or, for that matter, "copy reade.txt to d:\rogue"..."

    But that is exactly what I have not been experiencing, myself.

    Using kfm under kde, or (erk..) Windows Explorer, I spend 'way more time getting the windows sized so I can see both directories, or clicking one directory open and then clicking open a subdirectory, and then what if I want to move a set of files that aren't contiguous? I gotta control-click each one...

    I can type a command-line task that includes a complex rule to select and move a specific set of files 'way faster..

    t_t_b
    --
    I think not; therefore I ain't®

  • gotta agree there...

    remember back when applications had *every* command they supported listed in the menus? including bold and italic?

    i miss those days.

    sure, i'm all for toolbars and context menus, but i want a reference of *everything* a program can do, so i can find and try all the thingies.

    i'm sure most ms word users have no idea that there is a very useful "calculate" command in word. they took it out of the menu in word 95, IIRC.

    z.

  • Okay, my apologies if you weren't actually trolling.

    Your point is taken, but I'd rather see both GUIs AND CLIs fully developed, rather than mastering one and discarding the other.

    john
    Resistance is NOT futile!!!

    Haiku:
    I am not a drone.
    Remove the collective if

  • I always thought the future of the UI would be the snazzy interface everyone in those "next generation" Star Trek series were using. Part conversational, part touch-screen. Easy to mock the movies and television, but a lot of the envisioned user interfaces in movies and shows come from people who don't know how to use computers... which may not be a bad thing.
  • See, everyone wants cars, but some people want Ferraris, and some people want Toyotas.

    Actually, we all want Ferraris, but most of us are forced to drive Toyotas :)


    ----------
    AbiWord [abisource.com]: The BEST opensource word processor

  • Oh, what the heck. Teach the computers Esperanto!
  • I love that my new motherboard doesn't have any old ISA slots, thus forcing me to finally upgrade my AWE64, which is a little obsolete

    well, i just upgraded my machine, got a new mobo, processor and everything, and then stuck my soundblaster pro in it. go figure.

    z.

  • I'm certain the argument was meant to be that a computer's interface isn't set in stone. It can adapt itself to the user in ways unlike wood adapting itself to the saw or fabric adapting itself to the scissors and thread.
  • by flieghund ( 31725 ) on Saturday July 22, 2000 @05:54PM (#913245) Homepage

    Anyone who is even remotely connected to UI development should read this article (especially the folks over at Mozilla). That being said, I have a couple of issues with some of their examples and ideas:

    • Direct Manipulation. Couldn't agree more with their premise, but I think they used a poor example with the program installation problems -- that the user has to copy files all over his or her system to install a program. (I realize that this is still an issue with many Linux programs, but the last time I had to copy a file to a specific location in Windows to install a program was back when I was using Windows 3.1 and the install instructions were "copy game.exe to c:\".) To me, a complicated install procedure is more a reflection of the program's designers than of the UI.
    • See-and-Point. While I agree with the fact that pointing and clicking is only a few steps from pointing and grunting, I find that this is more a limitation of I/O devices (mice, monitors) than the UI itself. I'd like to know how users familiar with alternative I/O devices feel about their functionality.
    • WYSIWYG. A tangential question only: isn't there supposed to be a GLOSSARY tag in HTML (maybe not called that) that allows you to embed information in a particular word/phrase? Or was I smoking crack?
    • Feedback and Dialog. This can be taken too far, for example, if I have to go out of my way to find out what is happening on my computer. (I find the Task Manager in Windows is about as removed as I can handle.)
    • Perceived Stability. The problem with letting my computer do things for me is that my computer is stupid. And that's not just because it runs Windows. 8^) No, as the old saying goes, a computer is only as smart as the person who uses (or programs) it. I imagine that in a few years, when AI is much more advanced than it is now, computers may actually be "smart," but until then, I'd prefer my computer to not mess with anything unless I tell it to. (Does anyone else find Windows 2000's so-called "optimizing" menus to be incredibly annoying? Especially when it reorders them...)
    • Aesthetic Integrity. This is a great point that cannot be overemphasized, but I fear most people will just glaze over it: it is okay to be flexible in the placement/arrangement of most features in a UI, but there are certain critical features (most importantly: HELP) that should always be easily accessible and similarly located. Come to think of it, HELP may be the only critical feature of this kind.
    • Modelessness. I (think) I understand what they are saying here, but didn't they argue against this earlier in the paper when talking about adding layers of unnecessary complexity? (The earlier mention was in regards to metaphors; how does having to get out of "typewriter mode" and into "game mode" differ from having to leave the "office building" and enter the "game arcade"?)
    • The Central Role of Language. For the most part, right on. However, their example of the reference library makes me think of the infamous Office Assistant.
    • Richer Rep. of Objects. OMG yes. I feel that the dot-three extension inherited from the world of DOS is one of the most horrific pieces of legacy technology in use today. Don't get me wrong -- it's nice to know that a .txt is a text file and a .tif is a TIFF file. But what happens when you get handed a disk that has a file labeled "BFD-008jai" and your OS has no idea what created it? Wouldn't it be nice if there were some metadata appended to the file that said "created in the gimp"? (A type-sniffing sketch follows this comment.)
    • Shared Control. From the paper: [Computer-based agents'] capabilities have not yet progressed beyond the most simple-minded tasks. That's for damn sure. Two words for you: Office Assistant. The only reason I allow it to exist on my system is because I have a soft spot for cats. 8^)

    In conclusion, I offer my own disclaimer: I loved the paper. Anything above that may seem to be critical is only so in a constructive way. And finally: I can't wait for the Anti-Mac UI to arrive!
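
    On the "Richer Rep. of Objects" point: metadata is one half of the answer; the other half -- guessing a type from content rather than from a dot-three name -- is what the Unix file(1) utility does by sniffing leading "magic" bytes. A minimal sketch (hypothetical code; a real magic database has thousands of entries, three are shown for illustration):

        /* Extension-free file typing via magic bytes, file(1)-style. */
        #include <stdio.h>
        #include <string.h>

        static const struct { const char *magic; int len; const char *type; }
        sigs[] = {
            { "\x89PNG",   4, "PNG image"    },
            { "%PDF",      4, "PDF document" },
            { "\x1f\x8b",  2, "gzip data"    },
        };

        const char *identify(const char *path)
        {
            unsigned char buf[8];
            FILE *f = fopen(path, "rb");
            if (!f) return "unreadable";
            size_t n = fread(buf, 1, sizeof buf, f);
            fclose(f);
            for (size_t i = 0; i < sizeof sigs / sizeof sigs[0]; i++)
                if (n >= (size_t)sigs[i].len &&
                    memcmp(buf, sigs[i].magic, sigs[i].len) == 0)
                    return sigs[i].type;
            return "unknown -- exactly where appended metadata would help";
        }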

  • Actually, the "potential uptime" argument is irrelevant. It's not that Windows crashes after X days, it's that it has an X% chance of crashing in any given minute that's the problem. It crashes often enough ten minutes after it boots.
    This is less true of NT, but even a professionally-administered NT workstation that's only on during the day will lock up cold once every 3 months or so.
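
    To put illustrative numbers on that (my own arithmetic, not a measurement): if a machine has an independent 0.1% chance of crashing in any given minute, the expected time to a crash is 1/0.001 = 1000 minutes, roughly 17 hours -- yet there is still about a 1% chance (1 - 0.999^10) that it dies within ten minutes of booting. A long potential uptime says nothing about that per-minute risk.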
  • The entire point of a GUI is that *all your applications look and act the same way*.

    Actually, IIRC, this is one of the Mac ideas that the writers of the paper on the Anti-Mac interface discussed breaking. It's true that it's great to have all of your apps work the exact same way when you're learning them, because it makes them less work to learn. Unfortunately, ease of learning, which the Mac interface gets down quite well, does not necessarily translate into ease of use for true experts- the people who often spend the most total time using an application. The user interface should be driven by the logic of the program being interfaced with, not by some rigid set of UI design documents.

    That's not to say that program designers should be free to do things however they damn well please. There should be a well agreed upon default standard of where each menu item goes, etc. But programmers should be allowed to break with the standards when there's a good reason to do so. The goal of standards is to make things work better, so we shouldn't stop programmers from departing from the standard if it enhances the higher goal of greater usability.

  • I just realised that my analogy was slightly off. I meant to say that a saw doesn't have the ability to recognise the user's skill levels/training/habits and adapt itself to the task, and scissors and a sewing machine cannot adapt themselves to their user's capabilities as such. Computers, with better programming, can.
  • "cpu speed and memory are about 10 times what they are today"
    "by then I'll be dead"
    What, you can't stay alive for 6 years?
  • Unix: almost everything is accessed by a file API
    plan9: everything is accessed via a file API
    anti-Mac: everything is accessed by a CORBA interface, inherited and extended from, say, a file-like API plus extensions. How you handle this graphically is an exercise left to the reader. :)
  • by Anonymous Coward
    The point of automobiles isn't automobiles, it's transportation. You still have to put some time and effort into learning how to drive. A computer is a tool but it's a complicated tool.

    If all you want to do is email and write papers, you shouldn't need a whole lot of training. If you want to cut a board in half, you can probably figure out how to use a standard saw. If you want to hem a pair of pants, you can probably figure out how to do that. But you don't walk into a woodshop and expect to be able to craft fine furniture without any learning curve. You don't walk into a seamstress shop and expect to be able to make commercial quality fashions without any training. And you shouldn't expect to sit down at a computer and become proficient at its use without some sweat and effort as well.

  • by gwernol ( 167574 ) on Saturday July 22, 2000 @09:17AM (#913252)

    But my point: GUI is not the future. Real conversation is the thing. Combine a computer smart enough to have a conversation with a WWW filled with advanced standardised meta-data tags, and you have it.

    "Now draw a line from just above the middle of the window to about 78% of the way across to the right"

    "Like that, Dave?"

    "No, down a bit"

    "Like that, Dave?"

    "No make it a little longer on the right"

    "Like that, Dave?"

    "No, I want it at a slightly deeper angle"

    "Like that, Dave?"

    "No, I guess slightly less of an angle"

    "Like that, Dave?"

    "No, it should start higher up"

    "Like that, Dave?"

    "Damn it, I'll just use the mouse"

    Voice commands are not good for many things you want to do with your computer. We'll have GUIs around for a while yet.

  • I hate to be the grinch...

    Yes, all that would be very nifty, but how much memory and CPU time would it take up? I mean, to have to keep all those icons in memory and to have to figure out what files get what icons (combinations of like 5 different categories) every time any folder is opened, well, I don't know... And if they are generated on the fly by just reading the file info and creating the icon right there, that would kill CPU time and just take forever.

  • No way. It's not science fiction if it already exists!!!! The real science fiction is tapping right into the brain. None of the associated problems with "talkies", such as the "bag" problem (get 100 people in cubicles all talking?) and sore throats.

    Talking is fine in a very limited setting (a cell phone number-dialing interface, for example). Otherwise, it's too low bandwidth, lacks any formal notion of demarcation ("oops, made a mistake, please delete back until the last time I said 'foo'", versus just backspacing on your keyboard), it probably lacks feedback as to what you've input so far (unlike a character-oriented interface), and so on.
  • Yeah, I know. I use Office 2000 daily. And those damned menus are one of the first things I turned off. YMMV, of course.
  • This comment is too long and uses too many words. Can't you just give me a picture or an icon that I could click on instead? Maybe a series of pictures representing your ideas so I could just recognize them instead of having to recall all the meanings of these complex words?

    (Hint: hieroglyphs went out of fashion a *long* time ago.)
  • Sorry, but after using a GUI a lot (with all its direct manipulation via mouse and left/right clicking, and with the ability to see lots of stuff at once), whenever I go back to a command line I feel like I'm programming blindfolded with gloves on.

    Typing long command lines with tons of arcane switches and options (which are different for every damn command) and where a simple easy-to-make typo can render an entire command 'invalid' is NOT a good UI.

    You'd have to be an IDIOT to think so. Or at least an arrogant boob who's forgotten how much time and effort it took to get even remotely proficient at doing anything in a CLI.

    However, I do think CLIs have their uses. They can definitely COMPLEMENT a GUI very well, as there are things a CLI can do better, while there are things a GUI can do better as well. And frankly, some things a GUI can't do well are just weaknesses in the designs of current GUIs, and NOT an 'inherent limitation' of GUIs.

    - Spryguy
  • by Pope ( 17780 ) on Saturday July 22, 2000 @09:21AM (#913259)
    Most Mac people, myself included, are artists and designers. What we're doing is inherently graphical, and a good consistent GUI is a *must*. I have seen a lot of Themes and Schemes for the Mac, but I stick to plain old "Platinum" because it works for me by not being in the way. Like driving a car, once you know the basics and practice, the UI should completely disappear and become second nature.
    All I can see from skinning and UI schemes is a way to make something look cool but not necessarily function cool.
    Your text-based input idea would not work well with graphical artists:
    Computer, draw a slightly hazy sphere with a 20 to 50% gradient about 60 degrees from vertical starting at coordinates (120,35) on Layer 2 of the frontmost window
    Please repeat command

    On top of that, talking to a computer this way is slow and inefficient. How long does it take to grab a mouse/other input device and quickly click a few places and hit some command keys, as opposed to describing in excruciating detail what you want?

    Pope

    Freedom is Slavery! Ignorance is Strength! Monopolies offer Choice!
  • I think it is only a matter of time before we see some sort of standard interface to application-level attributes. I mentioned this in another comment, but I think this is best done in a separate userland library, not tied to any particular filesystem or desktop environment. GNOME and KDE are both heading in this direction, but I consider that too far up the UI food-chain. This should be done at a level only slightly above the standard C library.

    It's almost like we need another variable type in our programming languages...

    1. Local Variable.
    2. Global Variable.
    3. System-accessible Variable.
    4. Statics, etc.

    Should be fairly easy to implement this as a C library. We could also have var-access functions defined fairly easily too - i.e. a program declares and exports, as a #3-type, a method to be used for modifying and reading that #3 var...

    This would be *VERY* easy to build, and we just set up a user-land server to marshal all calls for this data, with its own security mechanism.

    If it could be a slide-in C library that worked, and had a good overall design, this could easily become fairly stable in Unix...

    Heck, even some library that allowed an app to pass off its vars as XML-typed data to a socket somewhere would be quite handy.
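
    To sketch what the calling side of such a library might look like (every name here is hypothetical; the point is only the shape of the interface):

        /* attr.h -- hypothetical userland attribute API, a sketch only.
         * Callers never learn whether a value came from filesystem
         * metadata, a sidecar database, or a live program exporting
         * one of its "#3-type" variables. */
        typedef struct attr_handle attr_handle;

        attr_handle *attr_open(const char *path);   /* any file or object */

        /* Read a named attribute, e.g. "Artist" on an MP3. */
        int attr_get(attr_handle *h, const char *key,
                     char *buf, unsigned long buflen);

        /* Write a named attribute. */
        int attr_set(attr_handle *h, const char *key, const char *value);

        /* A running program exports one of its variables, supplying the
         * read/write methods a userland server would marshal calls to. */
        int attr_export(attr_handle *h, const char *key,
                        int (*read_fn)(char *buf, unsigned long buflen),
                        int (*write_fn)(const char *value));

        void attr_close(attr_handle *h);

    The same five calls could sit on top of plain files today and a full-fledged DBMS tomorrow without a single caller changing.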
  • When implemented on a traditional file system, that kind of library-based solution will always end up being inefficient and sub-optimal ...

    I disagree. While it is true that there will be some speed benefit to keeping metadata attributes in the same place for all files, such benefits will be insignificant for any application I can think of. Additionally, by doing things in userland, you can have multiple sources of metadata, all presented using the same interface. In other words, a file need not use the filesystem metadata to present file metadata to applications. As an example, if this is done in userland, your file manager shell could still get information about the "Artist" of a MP3, even if the program that put the MP3 into the filesystem didn't set any filesystem attributes.

    ... it'll feel like a dirty hack to the application programmer ...

    There is absolutely zero reason why this would have to be. A library call should be transparent. When you call GetFileAttributeList(ReferenceToAFile) (or whatever), you should get the exact same information regardless of where the metadata is stored. If it feels like a dirty hack, it is because of a crummy implementation.

    ... you'll reach a point when it's preferable to just use a full-fledged DBMS and get it over with.

    And, with proper implementation, you can still use the exact same API, even if you have gone to a full-fledged DBMS for whatever reasons. All the more reason to do it in a userland library. OTOH, if you force this into the kernel, you're stuck with what the kernel knows about.

    That's why I say it doesn't belong in the kernel.

    ... instead of having a thousand statically-compiled (from one single big-assed C procedure) programs of a few dozen K talking to each other through a half-assed character stream system (piping? WTF?), it's better to have a million dynamic modules of a few hundred bytes, with well-defined interfaces in a high-level declarative language.

    Ummmmm... kay... correct or not, what's this got to do with file attributes? :-)

    Note that my +1 bonus remains turned off

    Feel free. But you're using the system wrong. The point of the moderation system is not to give people a warm fuzzy that someone liked their post. It is designed to rate posts on their overall value to Slashdot readers. If you're getting a score bonus, it is because your posts are consistently moderated up, and thus your posts are considered (on the average) to be more valuable to the Slashdot readers. This automates the moderation process and lets moderators spend their points elsewhere. By using the "No Score +1 Bonus" option, you're effectively karma whoring. Not what you intended, I'm sure, but if you think about it, that is what you're doing.

    The "No Score +1 Bonus" should be used if you think you're actually writing a post significantly lower in quality then what you usually write. A recreational flame, for example.
  • *yawn*

    Nothing like quoting things out of context and ignoring the rest to make your point, no? He wasn't saying computers _should_ be hard. Let's quote a part you didn't deign to quote:
    If all you want to do is email and write papers, you shouldn't need a whole lot of training.

    And another:
    You don't walk into a seamstress shop and expect to be able to make commercial quality fashions without any training.

    Yeah, it'd be great if we could all walk up to a computer for the first time, and have it do exactly what we want with no fuss. But we haven't gotten to that point with any other form of technology, so why expect computers to be there right away as well? Even Macs are still a pain in the ass to use.
  • by DragonHawk ( 21256 ) on Saturday July 22, 2000 @06:39PM (#913268) Homepage Journal
    What we want are high-level objects with implicit structure. Text files with XML structure are not good enough.

    Can I ask why text files with XML structure are not good enough?

    You also have to consider the issue of multiple agents. This requires a clear-cut, high-level model of interaction between systems, which, of course, doesn't exist (nor could exist) on Unix.

    While I agree that such a model does not exist under Unix, can I ask why you think it cannot? I generally agree with you that a lot of the things UI-ish talked about, both generally and with this article in particular, aren't there yet on Unix, but by and large I think it is only a matter of time. I see no reason why the Unix security model in particular cannot be extended to support safe, shared control of user applications.

    I think it's worth pointing out that safe, shared control in general is hard to accomplish. Almost all (working) security implementations use the principles of KISS and system high: Larger, more general security compartments tightly segregated from other systems. Consider: chroot() jails, mapping all remote access to nobody, Java, firewalls and DMZs. All of them are basically variations on the sandbox concept.

    The reason is that designing and implementing a system which allows safe mixed-mode secure operation is very difficult. People make mistakes, and complexity makes mistakes harder to find and harder to prevent. The solution to this problem of limited human capability is the sandbox model. Separate systems with tightly controlled interfaces are easier to design and debug than mixed-mode, complex, interwoven systems. I don't think we'll see this changing anytime soon.

    And, since I know someone is going to bring this up: No, simply stating "Use a capabilities model" is not going to solve all your problems. Capabilities (just like everything else) provide no additional security simply by existing; they have to be applied properly for them to work. Applying them properly is the crux of the matter, for there you run into all the same complexity problems you do with everything else. Capabilities are interesting and useful, but they aren't a panacea.
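
    For a flavor of how crude (and how effective) the sandbox model is in practice, here's the classic Unix jail pattern in miniature -- a bare sketch; a real jail also has to close inherited descriptors, drop supplementary groups, and so on:

        /* Bare-bones Unix jail: confine untrusted code to one directory
         * tree under an unprivileged uid.  Must be started as root. */
        #include <sys/types.h>
        #include <unistd.h>
        #include <stdlib.h>

        void run_jailed(const char *jaildir, uid_t unpriv_uid,
                        const char *prog, char *const argv[])
        {
            if (chroot(jaildir) != 0) exit(1);    /* wall off the filesystem */
            if (chdir("/") != 0) exit(1);         /* don't leak the old cwd */
            if (setuid(unpriv_uid) != 0) exit(1); /* give up root for good */
            execv(prog, argv);                    /* path is now jail-relative */
            exit(1);                              /* only reached if exec failed */
        }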

    ... the system should be developed in a high-level language itself, which could be used for remote scripting and automation as well.

    I think Sun calls that Java. :-)

    Although there's native support to network graphics, it's much too low-level and just generally broken.

    ??? Please elaborate.

    Finally, as a lot of people often point out, most of the work in Free Software is still driven by imitation.

    Indeed. Open Source/Free Software isn't about radical new software concepts, it is about radical new approaches to software design. (Well, actually, they're tried and true approaches to software design, but don't tell the press that.)

    I don't think this is necessarily a Bad Thing. We steal from everywhere, taking what we like and leaving the rest. We then make sure what we have works, and works well. We do refine the concepts somewhat, making things more flexible and general, but we don't redesign everything. After all, you can try all the variations you want, but you always end up with a flat cylinder for the wheel.

    <BASH TYPE=Gratuitous TARGET=Microsoft> I wouldn't want to take away Microsoft's Freedom to Innovate, after all. ;-) </BASH>

    In conclusion: unlike the author, I don't see that the GNU/Linux world is going the Anti-Mac way, quite the contrary.

    I agree. And for more reasons than the ones you state. The biggest is that Unix folks are generally quite pragmatic. We stick with what works and don't try to accomplish radical new things that give us little benefit in the end. That is why we still use the same API Dennis Ritchie and Ken Thompson designed. And it's why we still use X11 and the WIMP model. They work well.

    Let's take a look at the so-called "Anti-Mac" interface concepts, in the context of what is being done today.

    The central role of language

    Basically, what this boils down to is, a voice recognition and command system with limited scope. I think we've already accomplished most of this, and I think we've realized it doesn't get you much. All you are effectively doing is using your voice instead of the mouse and keyboard. That isn't a particularly efficient way to get things done.

    To truly get the benefits of the central role of language, we will need full-blown language recognition with not inconsiderable AI capabilities. Cliche as it is, the "Star Trek interface". And we're a long way off from this.

    A richer internal representation of objects

    Ironically enough, this is something Microsoft is pushing and doing, while Unix mostly ignores it. However, I can't help but notice that what MS has been doing seems to have the goal of tying everyone to MS Office. Not that that is a surprise.

    I think it is only a matter of time before we see some sort of standard interface to application-level attributes. I mentioned this in another comment, but I think this is best done in a separate userland library, not tied to any particular filesystem or desktop environment. GNOME and KDE are both heading in this direction, but I consider that too far up the UI food-chain. This should be done at a level only slightly above the standard C library.

    A more expressive interface

    This is pretty much a non-issue, IMNSHO. What the authors describe in the original paper are pretty much gradual improvements and evolutions which would work equally well with the traditional WIMP UI as with any other. For example, the file manager in mumble GNOME-related project mumble overlays little icons over the main icon of a file, to denote things you can do with it. Or take MS Windows. It does the same thing with the "Shortcut" icon overlay. It also has that "View as Webpage" option, which is a horrid implementation (again, designed to lock us into IE, not really make things easier) of a generally good idea: a preview pane which provides useful info about a document.

    Expert users

    This is another thing that would work well in a WIMP system, too. Basically, it says: ease of use is fine, but don't make the system easy to use for the complete idiot at the cost of making it hard to use for someone with an IQ higher than 60.

    It has long been observed that the industry market-speak term "ease of use" is really several things:

    Ease-of-learning: How quickly can you jump in and start using a program? Mac GUI does well here; Unix CLI does not.

    Ease-of-use: Once you understand the system completely, how quickly can you get things done? The classic Mac GUI does poorly here, while the Unix CLI does very well.

    Ease-of-administration: How hard is it for the people responsible for maintaining the system to keep the thing working? (This applies more to OS-level stuff than application-layer code, but I mention it here for completeness.)

    The "Expert users" part of "Anti-Mac" is simply the realization that one should not sacrifice ease-of-use for ease-of-learning. The old PC/GEOS environment for MS-DOS included a feature called "User Level", where you set the level of experience the user of the system had, from "Novice" to "Expert". Application programs could then adjust their presentation, hiding some features while enabling others. Unfortunately, you don't see this very often. (And, no, the "learning menus" of MS Office 2000 don't count. Everyone I know hates those things. That's not a good sign.)

    Shared control

    This is already being done, to a very limited extent, with things like Java and MS VBA. You let third parties provide some intelligence for your data. The problem is, of course, security. Java resorts to running everything in (often poorly implemented) sandboxes, while MS VBA just ignores security completely. As previously stated, the hard part here is the complexity of mixed-mode security systems.

    So, in conclusion, I don't think "Anti-Mac" is all that radical (today -- things may have been different when it was written). Most of it is already being done, to some level or another, and the rest isn't going to be feasible for some time to come. If ever.
  • by Sir_Winston ( 107378 ) on Saturday July 22, 2000 @12:02PM (#913269)
    I wish CLI elitists would realize that the vast majority of people want a GUI that doesn't require any command-line parameters at all, ever. That's NOT to say that all CLI users are elitists, but it IS to say that there are many CLI users who not only consider it a more useful, better, and more "pure" way to interface with computers, but also believe as fervently as any medieval crusader that all users should either feel as they themselves do or need to be "educated" to have the same faith in the CLI. These people are not just egocentric and eager to force their own opinions on others, they are also actively dangerous to those who would like to see Free Software dominate the market for desktop workstations and home PCs as well as server and technical-user markets.

    How is it I can make these assertions? Well, they're all based on facts. The first fact is that the majority of people are spatially-oriented, rather than mathematically oriented. This has often been said of women, and is especially true of them; but it's true of most men as well. After all, what percentage of men major in math and computer science and sciences like physics which require extensive math, compared to the percentage who major in social sciences, language and literature, and sciences which aren't as math-centered (like biology, which requires some math skills but not extensive ones)? Math is simply not a priority skill for most people, but the problem comes when you have the geek community designing and discussing interfaces which are intended for non-geeks. The article in question, for example, proposes an advanced CLI with language heuristics as part of its proposed new interface paradigm--but poll actual computer users, and you'll find less than five percent (pulling # out of ass, but I'd bet it's accurate) who'd be interested in having to learn such a CLI instead of using a standard contemporary Mac or Windows style GUI. The problem is, geeks live in a different culture, in which most of their friends and associates have far richer math skills and far more extensive desire to explore computers and interface with them more closely, than the average user or even the average advanced user.

    I manipulate literally hundreds of files of varying types every day, I configure applications and OS parameters very frequently using both graphical and CLI utilities, and do the type of work most geeks would say can be more efficiently done with a CLI--and yet for me and for most other people, a CLI would be a cumbersome burden. Yes, you can do things faster with a CLI--if, and only if, you can a) type quickly, b) remember complex sets of commands and modifiers, c) count on those command sets and modifiers being standard across every platform you use, else confusion can arise, and d) have a good memory for where everything is located on your computer/network. But if you type slowly, then a CLI is asking too much. If you have trouble remembering lengthy sets of commands, because you're not math-language-oriented or just because you have a bad memory, then a CLI is a burden. If you use multiple CLI platforms with different command sets, it's easy for most people to confuse what to use with which environment (as contrasted to modern GUIs, which have comparatively subtler differences--a Mac user can learn Win98 pretty quickly, and vice-versa). If you have a poor memory for which files are where, a GUI is better for you because you get visual cues about your folder hierarchies and can list more objects at once than in a CLI.

    This isn't to bash CLI at all, or to say it's outdated. There will always be technical people who speed along CLI style and love it, and that's great. But it's a fallacy to think, as far too many geeks do, that even power users of today who grow up on GUIliciousness will ever want to learn arcane CLIs. The archetypal example is the small but vocal number of people on /. who say things like "We need to stop screwing around with making KDE and GNOME look like Windows and Mac, because people who want to use Linux should know enough to open a CLI window and go at it." Linux and Free Software will never take the desktop market from the big corporations with this attitude, because people who say things like that are coding for themselves and not other people. That's fine, if you want Linux to be a niche market for sysadmin and technical types, but then most schools and colleges will continue to teach with Mac and Windows and home users will continue to use Mac and Windows because the opportunity cost of spending so much time learning/teaching CLI is arguably greater than spending $120 for a copy of Windows. But if you want to build software that'll free most people from corporations like MS, then you have to start thinking like a non-technogeek.

    That's where the authors of this article went wrong--they assume that as more people get more and more computer experience, those people are going to want the richness and power of an advanced heuristic-based CLI mixed with the visual cues of an "expressive interface" GUI. But most people these days have the comfort and ease of use of an entirely graphical environment, which, since we humans exist in a graphical world rather than a world full of letters and numbers and command lines, is far easier to work with than any CLI. Such users will never want to revert to a CLI, unless that advanced CLI gets to the point that it's as easy to use as the interface of the computers in Star Trek--where you just talk to the computer and it does what it's told in plain English. The authors of the article dismiss that as a "computational nightmare", and they're right--so it's not going to happen any time soon that any sort of CLI will become popular, whether typed or voice-based, because of the ease of use associated with GUIs.

    The article also dismisses, as many CLI elitists do, the idea that a GUI can be as powerful as a CLI. Bullshit. It depends entirely upon the uses of the computer; even the best, fastest-typing CLI user would be entirely unable to sort through a folder containing several hundred files and separate them based on arbitrary criteria as quickly as a GUI power user could, because the GUI can list all the files and all their attributes, and list them by whichever attribute you choose with a single click, for arbitrary selection and transport. The article also makes the mistake of throwing out the ability for the user to sort his objects as he sees fit, into whatever hierarchies he finds most useful, in favor of multi-user standardization which doesn't allow you to work "outside the box" even on your own computer. Most people do NOT want "to share control of the environment between the user and other entities, specifically computer agents and other users"--they want to put things in the dir structures which are most convenient for them; this is especially and ironically true of power users, who like to arrange things for quick access in ways which multi-user system paradigms would not allow. The only way around this conflict of paradigms would be to have the user exist in a virtual environment in which he has access to all resources in user space, while things outside user space are completely hidden from him--rather like the way Mac OS X will probably work: hiding the Unix system structure from the user, while nonetheless being Unix at heart with Mac behavior grafted on top. But then of course you'd alienate the true power users, who want to see and manipulate all that can be tweaked and changed to taste. So there's no way to win on this one.

    In short, until computing power is sufficient and code is bug-free enough to execute commands based on spoken language with few or no errors, the CLI will never make a resurgence in any form among non-techno-geeks. I myself am a geek, but not a mathematical-minded CLI-loving one. The reality we have to start getting used to is that only a select few will ever be using a CLI extensively--some of you may not like that, but that's the way it is.
  • ... was the Newton OS. While it was still rather metaphor-laden (it *was* produced by Apple), the Assist button provided a command-line/text-adventure-game-like interface.

    For example, you could write in something like "send fax to dad," tap the Do button and it would open the telecomm program and take care of all that for you (sometimes). The things it actually understood were limited, but it was a paradigm shift in the right direction.

    While the Find feature worked on a similar level, it couldn't deal with a concept like "find my last letter to Laura," which would be something that an ideal anti-Mac interface should be able to understand, IMHO.

    Insert your own "Why did Apple kill the Newton" rant here. Or better yet - don't.
  • Already in the Mac, although manual or scripted...

    I think the point was, these things should be automatic and combine well. Manually bringing up the properties of a file and changing the icon to be something else is a far, far cry from the OS automatically adding things to the existing icon to provide additional information.

    Not being sarcastic, just pointing out that what you can do isn't really close to the same thing. ;-)
  • by DragonHawk ( 21256 ) on Saturday July 22, 2000 @06:49PM (#913274) Homepage Journal
    Could we have Anti-Windows next? Take the basic principles of Windows design (slow, clunky, bloated, crash-prone etc) and reverse them and see what happens?

    Ironically enough, Microsoft is already doing some of the things talked about in the original Anti-Mac article. In traditional MS fashion, though, they are done not to improve the end user experience, but to lock us into their products.

    For example, MS Office already "extends" the file manager in Windows to know the "attributes" of an Office file, like the author and the date of last printing. (Lock-in: Only works for Office, though.) And Win98 has the "View as Webpage" preview pane, which is a horrible implementation of a good idea. (Lock-in: You have to use IE.)
  • by CoughDropAddict ( 40792 ) on Saturday July 22, 2000 @09:26AM (#913279) Homepage
    Do remember, however, that even though I drive a 1990 Toyota Camry, I can pop into ANY car and count on consistency. The gas will always be on the right. The steering wheel will always be right in front of me. The speedometer will always go clockwise, smaller to larger numbers. Perhaps there are fringe exceptions, but part of the reason you only have to take driver's ed once is because cars are so similar in their operation.

    Cars are several orders of magnitude simpler than computers. If a computer were a physical console, it would be monstrous; there is an almost infinite number of controls on your computer that aren't directly visible. How many programs are in your path at the command line? Even if you use a windowing system, how many buttons and menu choices could you access if you navigated through all the menus? If there's not a standard way of accessing resources not directly visible to you, using a computer other than your own would be like completely starting over.

    I'll grant that I don't know how involved the "user's own preferences" you propose would be. Preferences certainly exist today, in X more than any other system of course, but we're seeing that the price is complexity. Windows and the Mac are much easier to learn than X, because they are so standardized--you only need to learn things once. A new user obviously won't care (and shouldn't need to) what window manager they're running, what desktop environment this application belongs to, etc.

    You mention skinning, and I fail to see how amazingly innovative skinning is. Skins in their present form (XMMS, Mozilla) are nothing but different ways of presenting the same information. So the button is blue with white stripes instead of grey. Maybe the buttons are arranged differently, I could almost see how this could help target different groups of users by stressing different parts of the UI. But we're still in the WIMP paradigm that has been only slightly modified since the original Macs.

    I've thought about UI design changes a lot before. What I find more than anything is that it's very hard to think outside the box, because every computing task has been adapted to the current model, and so one involuntarily defaults to this mode of thinking. It's hard to imagine anything else.

    One thought I've had is that since programs are written by programmers, they naturally think in terms of low-level concepts such as files, directories, input, output, etc. Almost every Windows Application in existence has a "File" menu with "Open," "Save," etc. This is a wonderful standard to have in terms of consistency in UI design, but it spotlights the fact that we adapt the application to the low-level concepts that support it, and not the other way around. Are there applications for which a "file" menu is not appropriate?

    One thing I like about the article is that in looking for new ideas, it seeks to violate every one of the existing standards. Perhaps that's what needs to happen for innovation to take place. Also, in attempting to "think outside the box", we need to imagine different input and output devices. A mouse is well suited for the WIMP system, and relying on it as the primary input device somewhat locks us into it. What other kinds of devices could you dream up that could support other UI designs? A screen is the most standard output device obviously, and it's hard to fathom another, but again, what possibilities exist?
  • .. But my favorite UI was the ol' Amiga UI.. Good blend of GUI & Command Line.. Now there was a machine you could get excited about.. I mean Golly, they called it 'Girl Friend' for goodness sakes!

    (Thas' what 'Amiga' means.. 'Girl Friend')..

    Now don't get me wrong.. I like the Mac UI and all (I'm using it right now), but I get all misty-eyed when I think of my old Amiga 500s and 2000s..

    -
  • ...because only idiots use pens, because pens don't require you to learn how to sharpen a pencil, and everyone knows that only people serious about writing use pencils, and only people that are too stupid to learn how to sharpen a pencil use pens.

    Ignore the fact that pencils break more and require you to sharpen them when they do, because that's all part of the writing experience. Ignore the fact that sharpening a pencil has nothing to do with actually writing.
    "I don't want more choice, I just want nicer things!"

  • An interface that exists which breaks all the ease of use and natural/ergonomic rules... We already have that.

    I use it. It's called emacs. :)


    Bah. Vi breaks way more ease of use rules than emacs. At least in emacs you can type like a normal person and you'll be ok until it comes time to save. With vi, if you don't know what you're doing, forget it.

    And yes, I use vi, and in my spare time I like to go "emacs-user shooting" :-)
    --
  • Hi,

    Today's a good time to learn a lesson in life in general. This one: the goal is more important than the methods involved.

    Here, see: I'll put some things in a row:
    - Object Orientation
    - GUI Design
    - Religion
    - Politics
    (etc.)

    Object Orientation:
    The goal is to build a program. Often the goal BECOMES: working around the problems involved in this design paradigm. (E.g. try to design a "grep" program using only true OO methodology.)

    But to reject the theory altogether, only because it doesn't always apply, would be stupid as well.

    GUI Design:
    The goal is to let the user work with a program. Often the goal BECOMES: making a piece of bloatware and letting the user live with it. (E.g. try to make a "grep" program for all those non-command line people out there.)

    But to reject the guidelines altogether, only because they don't work in some situations, is stupid as well.

    Religion:
    The goal is to get yourself a reason to live. Often this goal BECOMES: getting yourself a reason to die (E.g. holy war sessions).

    But to believe in nothing at all... Well, it works fine for me. Only keep smiling, OK?

    Politics:
    The goal is to rule the country. Often the goal BECOMES: discussing the matter once again.

    But to have no politicians at all... Hmm...

    And hey, here's what: you can fill in your own paradigms, guidelines, rules and dogmas as well! (Think of things like "Law", "Microsoft Certified", "freedom", "web-standards", etc.)

    It's... It's...
  • This really does make me sick; it's all garbage.

    First rule: it has to sell to newbies.

    How about this instead,

    First rule: it has to scare off the idiotic newbies.

    Hmm, that sounds a lot better to me.

    GUI? Why? How about a nice little text bar.

    You want to delete a file, heck, forget the trashcan metaphor.

    Type delete

    copy?

    Copy

    Move?

    Move

    Format?

    Format

    Simple isn't it?

    Instead of a text file having microscopically sized text in its icon to "look" like a text file, why not just name it *.txt or *.doc?

    That is intuitive, txt stands for text, get it, got it, good.

    It's simple, folks, and I really can't figure out why people keep cooing over GUIs; GUIs are pieces of useless garbage.

    An OS should NEVER be based around a GUI, an example being Mac OS X. Look at it: OpenGL acceleration for opening and closing windows. Now THAT is bloatware! Also notice how long GUIs take to load; they need to load the drivers to load the drivers, sheesh! It really is quite ridiculous!

    This is especially true for a consumer-based OS. Linux really isn't simple because it defaults to a complex setup (excuse me, but why in the world are you looking for SCSI drives? I have no SCSI drive, trust me on this one!)

    To finish this rant off, computers should do what I tell them to, not what they want to. A GUI that pops up error messages is doing what IT wants to; an OS which doesn't crash at all is doing what *I* want it to. If a program has an error, dump the program and get on with the show; the computer should never freeze! (heck folks, I have kept DOS running for weeks without anything happening to it, and it's a Microsoft OS!)
  • Not for the messages at least (it can't do all the other desktop tasks), but it can do exactly what you describe. Company I'm going to work for this week: www.com2001.com
    ---
  • > Now thats Science Fiction!

    Is it? Earlier this year, after hacking around with Festival and ViaVoice and some Turing test programs (MegaHAL springs to mind), I was able to hold a (very basic) conversation with my computer. If I asked, it could tell me what new messages were in my inbox, what the headlines were at the BBC news website, and if a given user was logged on to a certain BBS. The MegaHAL component also allowed a general conversation, which was usually extremely strange in nature, but often made some sort of sense.

    Now I'm certainly not the world's greatest programmer, and this was extremely slow, mainly because it was a horribly hacky collection of shell and perl scripts and, in one case, used a state file to pass information between programs. The computer would take about 3 times as long to formulate a reply as it did to say it. I haven't had the time to go back to it again- but if I can knock together something that does this, then a company employing people to work on this full time could quite probably do a *lot* better. Voice interfaces aren't nearly as far in the future as people believe.

    --
  • by WNight ( 23683 ) on Saturday July 22, 2000 @09:39AM (#913304) Homepage
    In some ways they say that modal designs are good, that you swim in a swimming pool, etc. Then they say that limited interfaces, where all you can interact with is what's provided on the screen, are bad.

    I think a modal design is a mistake, in almost all cases.

    A swimming pool you can only swim in is like a tractor you can only control with reins... kind of limited.

    Why can't I use the appliances of the future in the pool? Already some phones are waterproof, and some radios/TVs. Why can't my laptop be waterproof, etc?

    So, why let someone else dictate the limits of a mode? I alt-tab away from install screens while they're copying files even though the program warns me that dire consequences await those who don't sit and watch the whole process. If the author of the program had their way (or their bosses/marketing department's way) I'd be forced to sit and watch this...

    It's clear, to me, that modal designs should be limited to the program that the mode is in. An install screen for Winamp shouldn't offer to let you play MP3s, it's just to install. The panel for Winamp shouldn't have an uninstall button, that's not part of the 'play music' mode. But if Winamp tried to control what I did with the system while playing MP3s I'd delete it in an instant...

    A mode is only as important as the program I'm running. The biggest problem Winamp could report is that an MP3 is corrupt... The biggest problem Photoshop could report is that a picture was changed on disk, etc. I want the program to halt what it's doing to give me a critical (from its point of view) error. But I don't want it to interrupt anything else.

    And the big future of interfaces that I can see is to leave the stateless interface behind. In the copying-to-disk example, not only should the Mac empty the floppy trash automatically, but it should say "You're out of room on the floppy" and pop up a file browser to allow you to make room. At the least it should have a 'Go There' button on the dialog to let you deal with the problem instead of just 'OK'. The dialog of the future would have "OK/Stop", "Go There" and "Retry"... It wouldn't be the end of your copy; it'd just pause till you made some space, either by asking it for a browser or by doing it yourself, then continue.

    Cookies (etc) have allowed us to get past the stateless web page. Now you can have a shopping cart, or a slashdot profile, that reflects changes you made when you were last there and picks up where you left off. This is where we should be going. You shouldn't have to tell something what you want to do more than once unless you change your mind.
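
    The "pause till you made some space" idea is easy to express in code: treat a full disk as a resumable condition instead of a fatal error. A sketch, assuming stdio sets errno to ENOSPC on a full disk (as it does on POSIX systems); ask_user() stands in for the "Stop / Go There / Retry" dialog:

        #include <stdio.h>
        #include <errno.h>

        enum choice { STOP, RETRY };

        static enum choice ask_user(void)
        {
            fputs("Out of room. Type r to retry after making space: ", stderr);
            return getchar() == 'r' ? RETRY : STOP;
        }

        /* Copy in to out, pausing (not dying) whenever the disk fills. */
        int copy_stream(FILE *in, FILE *out)
        {
            char buf[4096];
            size_t n;
            while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
                size_t done = 0;
                while (done < n) {
                    done += fwrite(buf + done, 1, n - done, out);
                    if (done < n) {                      /* short write */
                        if (errno != ENOSPC) return -1;  /* real error */
                        clearerr(out);                   /* reset stream state */
                        if (ask_user() == STOP) return -1;
                        /* on RETRY we resume from 'done': the copy
                         * pauses exactly where it was, no starting over */
                    }
                }
            }
            return ferror(in) ? -1 : 0;
        }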
  • Can I ask why text files with XML structure are not good enough?

    Because it just pushes the burden of structure interpretation up to the application programmer. Ideally, when you're programming in a high-level language, you should be able to handle any objects in the system through a consistent set of primitives; the actual representation of your data on the computer should be implicit and left for the system to decide, unless the programmer himself specifies otherwise.

    I see no reason why the Unix security model in particular cannot be extended to support safe, shared control of user applications.

    The reason is that the Unix model (like most others today) is based on a "stupid" executive which, essentially, just goes around giving control to any piece of binary data which is tagged as "executable" and has the right headers (a.out or ELF). The web site for ETH Oberon [oberon.ethz.ch] goes into significant detail about an alternative executive model which is much smarter.

    I think Sun calls that Java. :-)

    Well, all of Java's essential flaws aside (and Ghod knows there are plenty of those :), I dare claim that an OS based on a Java VM would be far superior in all the aspects discussed here to our current crop of C/C++-based ones. (I'm not sure whether the JOS project [jos.org] for a Java-based OS is still going on... they seemed to have the right idea, kind of.)

    ??? Please elaborate.

    You obviously weren't here the last time Slashdot had a "X sucks/X rocks" flamewar :) If you're interested, you can check the archives for the last two weeks or so. (Evidently, I stand on the "X sucks" side :)

    After all, you can try all the variations you want, but you always end up with a flat cylinder for the wheel.

    But that's already assuming that the software you started out to imitate is comparable to the wheel. Maybe it's not; maybe it's comparable to those squareish rock wheels you see on "The Flintstones" :)

    And it's why we still use X11 and the WIMP model. They work well.

    Well, "works well" is subject to discussion. IMAO, "works just barely well enough and it'd be a damned shame to start all over" is more fitting. And what you call "pragmatic", other people call "worse is better". (Then again, this kind of philosophy, "if it works, however barely, don't fix it", seems oddly fitting for a man who recently claimed that systems research was over. Again IMAO, that's akin to (Lord Kelvin's?) claim in the 1890s (less than two decades before Special Relativity and Bohr's atom) that essentially all of physics had been figured out, or to Hilbert's call in the 1900s (three decades before Gödel's Incompleteness Theorem) for a mechanic universal theorem decider.)

    As for your comments regarding UIs (i.e., the part of the discussion which is actually on-topic :), I really think you should take a look at the unorthodox projects I mentioned in some other post. A look at Tunes [tunes.org]' User Interfaces Review page should also prove interesting.

  • by Chris Johnson ( 580 ) on Saturday July 22, 2000 @01:05PM (#913308) Homepage Journal
    In the interests of full disclosure: I use MacOS pretty much all the time, mostly for Internet use. Yet, I think there are some major flaws with _misinterpreting_ what works in the MacOS UI, and almost everyone working on interface design has seized on the wrong issues- 'metaphors' or interaction like 'Do you really want to delete that file forever?' and all that stuff.

    The fact is, there are some very important principles in MacOS that would apply directly to the 'Anti-Mac' interface- but, typically, do not. The single most important principle is structure.

    On MacOS, the menu nearest to the 'Apple Menu' (which is always there) is the 'File' menu. It contains 'quit' and other commands including 'open' and 'close' and 'new' if present. Why? No reason- there's no reason why quitting should be considered a file action. But the structure is the reason- not structure in 'this is how we force developers to code a certain way' but structure in that the user can quickly form a mental model in which there is always this menu that says File, and Quit is in it. It's not important that the menu be called file, or that it be in a particular place- or even that quitting must be a menu action! The important thing is that the user jumps to a conclusion which continues to be valid across the entire interface.

    The same is possible for text mode interaction- it just takes different forms.

    I wrote a program [airwindows.com] for my own use to remind myself of things. It's not very elaborate (now that I have a small second monitor I might try making it more elaborate) but one way it works is very important in the Anti-Mac style of interface. It runs off a list of events that's kept, and edited, just as a plain text file (something that could be edited by other programs too), and you enter a date or day and event and priority.

    The priority ended up being almost meaningless as a feature because there aren't loads and loads of events in the file- but it is important in another way because it is a 'punctuation mark' for the event data format, and that's where the 'user concept' comes in. Basically, the idea was to define a format which could be parsed quite forgivingly by the computer, but which did not get into full-on natural language parsing- instead, you are given one very simple rule. "Type date, then priority in parentheses, then the event text."

    That's it. Friday(2) Visit Fred. Or Thu(e)Dance Naked. Or 11/3(1)Third Of December Day. Or just "Go Fishing" (which will always display, until removed. There is an expectation for US date formats, but it's opensource ;) ) The point is that you can get very weird with the data (for instance with the weekdays, it uses a string comparison routine that is case insensitive) but there is an element of structure to focus and direct you- one that is very, very simple to remember. Date(priority)Entry, and the parsing keys off the parentheses.

    The reason this matters at all is because it's so easy with natural language parsing to get into a game of 'Do What I Mean', to get so seduced by the challenge of parsing weird inputs that you basically go, "Okay! The user's inputs have NO FORM whatsoever- complete freedom! That is the ideal interface!" That's a crock- the thing you want to do is present interface elements that must be learned, but the elements (textual, conceptual, graphical) are SO SIMPLE that they're nothing to learn. "Quit and file stuff are in File menu" is a very small piece of information when you think about how widely useful it's been made (do _you_ have a File menu in front of you right now?). "Date(priority)Event" is partly textual but it is just as easy to grasp and use- it's syntactically obvious, with the key concept being Parentheses- if you don't have the ( then it's a degenerate case like a quickie-note to be always presented.
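    Just to show how little machinery the Date(priority)Entry rule needs, here's a rough Python reconstruction of such a forgiving parser (my sketch, not the actual program's code):

        import re

        def parse_reminder(line):
            # 'Date(priority)Event', parsed forgivingly; anything without
            # parentheses is the degenerate always-show note.
            m = re.match(r'\s*(.*?)\(\s*(\w+)\s*\)\s*(.*)', line)
            if not m:
                return {"date": None, "priority": None, "event": line.strip()}
            date, priority, event = m.groups()
            return {"date": date.strip() or None,
                    "priority": priority,
                    "event": event.strip()}

        # parse_reminder("Friday(2) Visit Fred")
        #   -> {'date': 'Friday', 'priority': '2', 'event': 'Visit Fred'}
        # parse_reminder("Go Fishing")
        #   -> {'date': None, 'priority': None, 'event': 'Go Fishing'}

    The case-insensitive weekday matching would be layered on top of this; the parentheses alone carry the whole structure.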

    There are similar issues with HTML. The bracketed tags like B and I (yes, I know those are stylistic, not logical, tags) are easy to understand- yet I have so often messed up in Slashdot posts by failing to close a bold tag properly, just repeating the 'b' with no / to close the tag. This is possibly unavoidable because of the complexity of HTML, but it's an interface screwup, because when typing a message the user does not use the tags the same way the computer does. To the computer they are an opening tag and a closing tag, entirely distinct entities. To the user the tags are used more like toggles: you hit (b) to turn on bold, and then hit it again to turn it off- only you have to remember to make it (/b). In the flow of writing it's easy to fall back on the opening form of the tag and use it as a toggle- which of course does not work.

    The funny thing is, the special closing form exists to help you keep track of the 'stack' of tags- instead of having to count instances of a tag (which the computer can easily do)- but if you do forget to keep track, the result is just the same as if you had treated the tags as toggles and failed to close them. The result is a special form of the tag for the user to keep track of, so that the user must be aware not only of entering/leaving a styled area, but of which way you're going- and if you don't specify correctly, the HTML screws up. Treating simple off/on tags as toggles (granted, much HTML can't be reduced this way) makes the mental model of the style setting much simpler- "hit the (b) tag to switch boldface on and off"- and suddenly you don't have to keep track of phase at all, only the opening and closing locations of what you want to stylize.
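    The toggle idea is easy to mechanise, by the way. A quick Python sketch (purely illustrative) that lets the writer type the opening form both times and fixes up the closing form:

        import re

        def toggles_to_html(text, tags=("b", "i")):
            # Treat a bare <b> as an on/off toggle and emit proper
            # <b>...</b> pairs, so the writer never types the closing form.
            state = dict.fromkeys(tags, False)
            def flip(match):
                tag = match.group(1).lower()
                state[tag] = not state[tag]
                return "<%s>" % tag if state[tag] else "</%s>" % tag
            return re.sub("<(%s)>" % "|".join(tags), flip, text,
                          flags=re.IGNORECASE)

        # toggles_to_html("some <b>bold<b> and <i>italic<i> text")
        #   -> 'some <b>bold</b> and <i>italic</i> text'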

    There is no reason this capacity to define simple behavioral models can't be used to improve text-based interfaces, or even the Unix shell. Of course, mostly it isn't ;P You have certain things. Typing a letter causes it to appear on the screen! Hitting return executes commands (whoops, unless you're in a subshell, then it's what, ctrl-D?). Arguments for programs take the form -a -r -g -s, unless it's -args. And so on- the problem is NOT that it's a text-based interface, or that it requires learning; the problem is simply that so little of the learning is globally applicable- but that can change. Not with the Unix shell, as that would be throwing away a lot of backward compatibility- but in any new text or hybrid interface that comes onto the scene.

    I'm picturing a sort of 'agent', but one that you talk to in clearly defined forms and not natural language. It'd be there ready for input or puttering along keeping an eye on things, and you'd type Fiday(2)Get ready for job interview! and hit return, and because of the (2) it would know you meant to make an entry in the day-planner, and would figure out from context that 'Fiday' (a real typo that I actually typed just then, fixed, then put back for purposes of the argument) was Friday. You'd be working on something and would suddenly realise that you hadn't coded anything game-related in months and would type Games++ and hit return, and it would understand from context that you were establishing a priority level in a simple 'geek code' sort of way- and it might also have the job of sorting your mail and would keep an eye out for 'Games' and sort it up higher based on this new priority level (which you may have just created by typing that). Or you'd read Joakim Ziegler's article and in a fit of admiration type Joakim Ziegler++++++, an 'item' that might never come up in email, but if it did you'd not only be given a priority boost, but would also be reminded that there was something about Joakim Ziegler worth remembering and paying attention to. And the ability to, dare I say it, 'intuitively' plug these things into the computer is the ability to not rely entirely on your own head for all that.

    But the important thing is not to devise the most sophisticated arrangement for such things, not to devise the most fiendishly clever way to get data out of sloppy natural language- the important thing is to come up with these simple strokes that can become internalised rules. I happen to think Fri(2)Friday Meeting is a very sensible 'form' for a reminder, because for some reason having a 'priority' there in parens makes sense for me and serves as a suitable 'punctuation mark'. By the same token, you can't go wrong with strings of plusses and minuses to indicate the changing of priority levels for a keyword or key phrase. Possibly the exclamation point and question mark can be used similarly- for instance, the 'agent' might react to OpenGL?(return) by a quick search of other entries in its databanks for things that match this. It becomes even handier if you say Rob Malda? and the agent gives you references including his email address and phone number because you'd entered them into your address book :) You might even end up with web search results if you set it up that way. The exclamation point might serve as an indicator of a command- Slashdot! would launch your web browser and point it at slashdot, email! might be set up to return the 20 highest-priority emails you currently have. The 20 would be a setting kept in some other place- the interface of it would not be about going email-20-all-mail-agents! (unless you really wanted to be constantly specifying all that), it would be about going email! and also being able to go set up email! and have the control panels right there. And of course the basic assumption here is that the exclamation point distinguishes running programs from other actions- it seems fairly obvious, and that is a _good_ thing.
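    A toy dispatcher for that scheme might key everything off the trailing punctuation; a Python sketch (every handler name here is a hypothetical placeholder):

        def dispatch(entry, priorities, run_command, query, add_reminder):
            # Route one input line by its punctuation, as described above.
            entry = entry.strip()
            if not entry:
                return
            if entry.endswith("!"):            # Slashdot!  -> run something
                run_command(entry.rstrip("!"))
            elif entry.endswith("?"):          # OpenGL?    -> search what we know
                return query(entry.rstrip("?"))
            elif entry[-1] in "+-":            # Games++    -> adjust a priority
                key = entry.rstrip("+-").strip()
                priorities[key] = (priorities.get(key, 0)
                                   + entry.count("+") - entry.count("-"))
            elif "(" in entry:                 # Fri(2)...  -> day-planner entry
                add_reminder(entry)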

    *g* time for me to start working on this I guess ;) it's sounding neat enough that I'm getting pretty interested in exploring it ;)

  • The first fact is that the majority of people are spatially-oriented

    No; most people are verbally-oriented. The ability to visualize and juggle spatial concepts is less common than the ability to describe, narrate, and request things. The special skill of the mathematician is one of expressing in symbols things that ordinarily can only be visualized. In other words, mathematicians (and good programmers) have the ability to formulate and utilize abstractions.

    Nonetheless, there is a lot that could be done to make GUIs more useful. Take a look at how limited most GUIs are in terms of their use of visual metaphor: click, drag, drop, click. Things that wouldn't tax a 3-year-old. You would communicate with a real visual interface by drawing, by gesturing, by manipulating a visual representation in much more interesting ways than merely clicking and selecting.

    No, today's GUIs are just more compact versions of the menu interfaces of yore, with nice pictures instead of words but with little else that utilizes visual intelligence. We use them because we don't know any better. We use them because, like earlier text menus, they take advantage of the fact that the human brain is much better at recognition than blind recall. We use them even though they make little use of the visual realm.

    MS Windows and MacOS have done the same thing to the GUI world that Unix did to the CLI world: they are "good enough" that no one has managed to step back and start from scratch, questioning the assumptions that were made 30-odd years ago at Xerox (for GUIs) and Bell Labs (for CLIs). It would be a monumental effort at this point, given the enormous inertia and mental stenosis that affects the industry. We should salute anyone with enough guts to try and take UIs beyond the mental world of the 3-year-old.

    -Ed
  • Okay, I'll concede: if the desired functionality is solely to allow user-level applications to have access to variable per-file attributes, and there's no desire for database-like features such as automated per-property indexing, "smart" reorganisation of disk space by heuristics, and whatnot, then it really makes no difference whether you implement the attributes on the file system level or in a userland library.

    However, note that I'm not advocating a traditional file system which happens to feature variable attribute lists, like BeFS; I'm advocating the replacement of the file system by a transparent persistent object database which happens to be implemented on the kernel level (although that distinction isn't very useful either, since most modern systems abolish protected mode entirely and leave the entire system open at run-time; see our other ongoing thread for more of this), and which also happens to feature variable per-object attribute lists. So it's a moot point, really :)

  • Already in the Mac, although manual or scripted...
    In order:
    1. Paste an icon on it when you save it.
    2. Label (color) your files.
    3. Again, paste an icon in the Finder.
    4. Cmd-F, edit, click modified, play with the list.

    The upshot is that you have all these and several more techniques in MacOS now. Too lazy to do that each time? Write an AppleScript to do any or all of them. Not being sarcastic, just pointing out that you can do close to the same things.
  • While it sounds great, in principle, to have complete control over UI and controls, it can make moving between pieces of software much harder. I have that beef with programs like WinAmp, which go out of their way to "stand out" -- even at the price of usability.
  • is like saying that instead of voice-recognition, the future of PDA interaction is in being able to change the colour of the case!

    That's what Palm is saying...

    http://www.zdnet.com/zdnn/stories/news/0,4586,2605769,00.htm [zdnet.com]

    - Isaac =)
  • >If the UI isn't 100% efficient, it really doesn't matter

    If you use computers every so often, yes. But if you use computers all day, efficiency suddenly matters a hell of a lot, and consistency very little. Customisability becomes the most important thing, because people are more than just numbers in a "usability study", and find that different ways of doing things suit them better.
  • A) It works, why "fix" it?
    B) Even a novice can learn a computer in a week or two given a teacher. It takes at least a year of full immersion for a person to even grasp the basics of a foreign language.
    C) They are taking the "power is mutually exclusive with simplicity" track. That's not necessarily true. There are some UI choices that are very powerful, but exceedingly simple. I'd like to point to BeOS's Cortex, an application that manages media nodes on BeOS. (The BeOS uses a system where media data is passed between a series of nodes. For example, a microphone might be represented as a data-producing node, a set of filters as both a data-receiving and data-producing node, and the speakers as a data-receiving node.) It has a graphical system that shows a diagram of the connections between the nodes. It is both very powerful and efficient (connections can be quickly routed through any type of filter) and simple and easy to learn. A good UI is not one that has an overriding "good UI" method; it is one that is full of ideas like this, implemented consistently through the whole thing. Also, if the UI isn't 100% efficient, it really doesn't matter. What matters is that it is relatively consistent and DOESN'T change often. For example, people put up with the relative inefficiency of French and English simply because they know them. Analogously, consistency is of the utmost importance. Despite the inconsistencies in English, the language is pretty coherent. Major concepts don't change on you from one word to another. That same consistency is why the Windows and MacOS interfaces are so popular, not because they are well designed. On the other hand, the current state of Linux GUIs is like the language at my house. It is a mix of English and Bengali, and though it is more efficient than either, people really wouldn't be able to learn it because of the lack of consistency. (Unless of course they only speak a few phrases constantly, kind of like what most Linux users do in terms of software usage.)
  • the idea that the typical X setup is less metaphor-laden than Mac or Windows is absurd. Most file and window managers for X attempt to be as much like the Mac or Windows interfaces as possible

    If you don't ignore the rest of the article, the author goes on to mention that although file managers exist for X, and are much the same as file managers on any platform, their use is not common. It is much more likely for a *nix/X user to use a console interface. In fact, most *nix/X users I know use X pretty much exclusively as a way to have 98 bajillion console windows open at once. Not many people go to the trouble of learning how to use *nix, and then go back to the old, inefficient file-manager way of doing things.

    As for launchers, a menu is a menu is a menu. I prefer to use the Root menu, and I abhor things such as the GNOME panel and the Windows "Start" menu. Why should I have to move my mouse to a specific part of the screen to access a list of programs? (Yes, I know there are utils for Windows that emulate the root menu... Diamond's video cards all ship with them)

    Anyway, the "typical" (read "power user") X desktop IS less metaphor driven than the typical Windows or Mac desktop. Not to mention being way more customizable, so it can morph into whatever form the user wants...

  • by Syn.Terra ( 96398 ) on Saturday July 22, 2000 @10:40AM (#913332) Homepage Journal

    The Anti-Mac interface reminds me a lot of the current BeOS interface, though with obvious differences. BeOS isn't there yet, but it's on its way. Point by point:

    Central role of language: Right now BeOS has a POSIX-compliant underbelly, which gives you the flexibility of a command-line interface alongside the normal GUI interface. The two are intermeshed practically seamlessly. While it doesn't have the fancy "interpreter" capabilities the Anti-Mac ideal proposes (like a spell-checker, which isn't such a bad idea) it IS a working CLI, so you have the strength of language alongside the ease of a GUI.

    A richer internal representation of objects: All files in the BeOS have what are called "attributes", which are little bits of metadata stuck onto each file. This means that while your MP3 collection has the usual name, size, modification date, etc., it can also have attributes of title, band, album, bitrate, length, etc. These attributes can be on any file, and are easy to implement, so you can make a special type of text file complete with author, chapter, and page attributes, all of which can be utilized in queries and so forth.
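    (For the curious: Linux extended attributes are a rough analogue of this, minus BeOS's indexing and live queries. A sketch, assuming Linux and a file system with xattr support; the attribute names are made up:)

        import os

        path = "song.mp3"
        # Attach the metadata to the file itself, not to a sidecar database.
        # "user." is the unprivileged xattr namespace on Linux.
        os.setxattr(path, "user.audio.title", b"Blue Monday")
        os.setxattr(path, "user.audio.band", b"New Order")

        print(os.getxattr(path, "user.audio.title").decode())  # Blue Monday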

    A more expressive interface: I'm still not entirely sure what this point means, but the BeOS has several key visual features, like distinguishable icons for folders (so your /mail/ and /people/ folders both look like folders, but you can tell which is which without even looking at the name). Little things like this make files easier to distinguish.

    Expert users: Again, the GUI is there (and is much, much nicer to use than any other GUI I've used) but the power of a POSIX shell is underneath it. You can get a lot done the simple way, or if you're willing to learn a little scripting or some tricks (BeOS Tip Server is full of them [betips.net]) you can get a lot more done a lot faster.

    Shared control: The BeOS isn't multiuser quite yet (it should be with the addition of BONE in the near future), but it's designed to be that way, in a typical UNIX style. Permissions, separate user directories, et al.

    The BeOS isn't nearly the Anti-Mac interface, but it's the closest I've seen since 1996. Hopefully the key principles will be kept in future development.
    ---

  • by localman ( 111171 ) on Saturday July 22, 2000 @10:43AM (#913334) Homepage
    There is an excellent tidbit in the article about how people can always identify books even though all books don't look exactly alike. In fact, people are more able to find a particular book because different books look different. It seems that current systems don't take advantage of this to the level they could. Here are some quick ideas that could easily be added to current file browsers:
    • A file browser could show different icons for different sized text documents; a single sheet of paper for a very small file, a stack of paper for a medium sized file, and a book for large sized files.
    • Depending on a file's creation time, the color of the object could be different, say a color of the rainbow for each day of the week. Of course there would be overlaps like in real life, but the way the mind works, you might remember it was the red one, which would make finding things easier.
    • Files that were modified recently could be represented differently; for small and medium text files, leave a pen on top of the paper for a few days. For large files leave the book open.
    • As the last access time for a file gets further in the past, the icon could become smaller or more transparent - up to say 50% or so. The most recently accessed files would stand out, but the others would still be there.
    Obviously not all of these would work out wonderfully, but they would be interesting to see.
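    For what it's worth, all of these hang off data the file browser already has. A quick Python sketch of the mapping (the icon names, thresholds, and fade curve are all made up):

        import os, time

        def visual_cues(path):
            st = os.stat(path)
            icon = ("sheet" if st.st_size < 4000         # single page
                    else "stack" if st.st_size < 400000  # pile of paper
                    else "book")                         # big document
            weekday = time.localtime(st.st_ctime).tm_wday
            color = ("red", "orange", "yellow", "green",
                     "blue", "indigo", "violet")[weekday]
            days_idle = (time.time() - st.st_atime) / 86400
            opacity = max(0.5, 1.0 - days_idle / 60)     # fade, floor at 50%
            edited_recently = (time.time() - st.st_mtime) < 3 * 86400
            return icon, color, opacity, edited_recently  # pen/open-book flag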
  • by extrasolar ( 28341 ) on Saturday July 22, 2000 @10:43AM (#913335) Homepage Journal
    It is always interesting when I read articles like these, especially when there are discussions afterwards, because every so often there is a gem of an idea hidden somewhere deep in a Slashdot thread. I have been looking for those gems for a long time now. I fear I may have forgotten some of them, but still.

    So what is the next GUI paradigm? I am not sure---indeed, how could anyone be. Some people are of the opinion that everything is going to end up on the web in HTML and Javascript. Please. That would be a horrible interface. HTML is meant as a document markup language, and Javascript was for those who thought it should be more.

    Then there are those who believe we are going into the VUI era. While I am inclined to agree, in part, some people take it too far. There are certain things that voice is suited to, and those are simple commands and queries. But even the crew of the Enterprise-D went to a console to get the real work done. I would never want to navigate a filesystem by voice; it would take too much memorization (provided you are not at the computer) and would be much slower than what we have today.

    Then there are the advocates of the 3D interface. It is the logical successor of the 2D interface, after all, right? Well, that is true, but unfortunately it is not so simple. The eye can only see things in 2 and a half dimensions (the half because the two eyes can somewhat perceive distance), and any more than that and we might as well be wandering around a 4th-dimensional maze (an exaggeration to make a point). A principle of UI design is that there should be visual clues as to the presence of interface objects, but if such objects are behind other objects, how is the user supposed to know that they are there?

    So is the interface stuck? Perhaps. But we must consider what it is users do with their computers that requires an interface.

    Forget about the word processor and the spreadsheet. Personal computers have been able to perform these tasks from the beginning, even in the age of DOS. People do email and browse information, often on the web. People play games. But this is only the average user. The interface of the future should consider all users. This is perhaps the largest difference between the interface of the future and that of the past.

    Consider the tasks of all users, from the software developer to the web developer to the engineer to the secretary to airport personnel. The thing you should notice is that the interface must be different for each. One overarching interface, in the future, will no longer do. In fact, the merits of the windowing environment seem to be useful only to a subset of all users. Certainly airport personnel should not need a taskbar or a main menu with a list of applications when they only need a single application.

    But these kinds of considerations seem to go against many of today's principles of user interface design. As of now, I do not know how this can be reconciled, but the Anti-Mac article does offer a clue: that perhaps the usability principles need to be changed. I like what the guy has to say. He questions every common convention in modern user interfaces. He puts us well on the way toward the next GUI paradigm.
  • Because the concept of "what's an object?" can change in as many ways as the applications that generate them can.

    Um, IMAO, that's not a feature, that's a bug! One of the biggest flaws of Unix is that the only data type common to all applications at run time is the stream of ASCII text, and applications are left to infer all structure and meaning from that ridiculously low-level representation. CORBA attempts to fix that in the context of a static language environment, and fails miserably; so does Mozilla's XPCOM, although less so.

    I've posted about this before, some time ago. I made basically the same point, but since it was earlier I had some more patience and therefore made my point rather more carefully. Damn, I wish I kept a record of my anti-Unix posts on Slashdot...

  • While we're on the subject: TheBrain [thebrain.com] is an alternative UI I have found interesting.

    But it's Windows 9x only, I believe.

  • > Oh, and it's nice that you can sort files ascending by date in your favorite GUI...
    > one more option letter to ls.

    No, I said "by arbitrary criteria." By date, 1-click. By size, or type, or descriptor, etc. etc. 1-click. BTW, my GUI of choice is the Powerdesk file browser for Windows, by Mijenix (bought by Ontrack). With that particular GUI, there are many advanced options not available in the standard Windows GUI, including a slightly integrated and sophisticated file finder that could do far more complex things than "find all files in /var owned by Bob, larger than 300k, and modified in the last week. And rename them to lowercase or compress them or selectively change the permissions on them" just by ticking a couple of boxes and entering whatever size/date/whatever criteria you want to search for. Far easier for most users than remembering what command line to type to get those same results. If you ever have occasion to use Windows, I highly suggest you check it out; for rock-solid stabilty (well, almost) in Win9x, use the utility 98Lite Professional to do a clean install without the Win98 shell, and then install PowerDesk 2000 and select the option to integrate it into Explorer. The things you mention can be done very easily through the right GUI, or even from the stock Windows or Mac GUIs plus a third-party file finder utility.

    > Designing a GUI for that is probably a complete waste of time;

    It would be a waste of time, since some GUIs already do that just fine.

    > And if you compare *the existing tools*, it is very clear that CLIs win for power
    > and expressiveness on many tasks

    Sure, and I never disputed that. I just pointed out that there are also many tasks for which a GUI is better suited and faster, including many complex tasks that CLI elitists would wrongly assume a GUI can't do efficiently. Yes, CLI is more expressive--but infinitely more complex and requiring much more knowledge/memory than is necessary or desirable for most computer users, even power users, ever to expend. The CLI is fine if you want to invest the time to learn it right, but very few people are going to spend that time in this the age of the GUI. Macs were usually too expensive for home use, so the first time many people broke out of CLI was with Win95 (Win3.1 hardly counts)--so there have only been about five years of widespread GUI OS use on home PCs except for those lucky enough to afford a Mac before then. And yet despite this, the GUI pervades the home as well as the office now--how many strictly CLI machines are there in the home and office workstation spaces, percentage-wise? It's probably relatively small. Everyone has gotten used to the GUI. Kids are now growing up using GUIs instead of the CLI interface to Apple ][ machines that my generation had. And only a statistically insignificant number of computer users want to use CLI anymore. Those who do are the hardcore computer users willing to invest the time to learn the language, and in exchange for their time they'll get the ability to do many complex things quickly which sometimes outpaces GUI use and is always more impressive. But most users don't want to invest that time, they want something graphical and easy to use that will do whatever they tell it to through clicks and ticks. I'm just trying to get people to realize that the days of the CLI are over except for the hardcore power users who want to get closer to the hardware, and that isn't most people. That's relatively few people.

    > manipulating a large set of multifaceted objects in a scriptable fashion is a CLI task

    Sure. Like I said, there are plenty of things a CLI is better at than a GUI--and vice versa. For instance, if manipulating that large set of objects is not a scriptable task, but one that will vary from day to day according to various factors, then a GUI is probably going to do it better/faster. This is especially true if the content of each object must be determined before the object is dealt with; a power-GUI like Powerdesk can give you quick and painless previews of every object, even rare document types; a CLI could never do that.

    > So if you want to talk about "the vast majority of users", fine.

    That's precisely who I was talking about. The small percentage of CLI users notwithstanding, most computer users will be using GUIs for the foreseeable future, and the less CLI each one requires the more successful it's probably going to be. That's a simple truth some of the more zealous people in the Linux community don't seem to understand, and it's hindering Linux's move to the desktop and workstation market. I want MS to lose as much as anyone around here; I'm just trying to further that cause by expressing that many Linux people are virtually aiding Microsoft's desktop dominance by insisting that GUIs are bad, inferior, mistaken, or otherwise shouldn't be implemented as a priority in their current form. Linux is winning the server battle, but it'll lose the OS Wars at large unless it becomes more GUIlicious. Fortunately, KDE and GNOME have picked up since last year, and are making great progress; but there are still those who belittle those projects, even though they're the only chance Linux seems to have at the moment of winning over the hefty Windows market. Even so, CLI configuration tools and arcane config files are going to have to get graphical counterparts to manipulate them if Linux is truly to win on the desktop and workstation fronts.

    > But don't give me nonsense about women being more spatially oriented than men
    > (they use landmarks more often and cardinal directions less often; thats locale
    > memory in action, not spatial modeling)

    Well, it's not "spatial modeling" because no one is going to sit there and compute a spatial model of the real world to give directions, when they already know the layout of the world *in spatial terms*. Again, like most geeks you are assuming something mathematical when instead I had a more generalized concept in mind. "Spatial" as in the general relationship of objects to one another. Why the heck use a useless psychobabble term like "locale memory" when it's clear that I'm talking about the use of actual physical objects and their *spatial relationships* to express something, rather than using strict mathematical modes of expression. I stand by my use of the word "spatial" and would add the word "geometric" as well to the description of the way most people, especially women, perceive the world and give directions. "Geometric" as opposed to "algebraic"; both are mathematical, but algebra is representative of the abstract in mathematics while geometry is representative of the concrete, of spatial relationships and easily recognizable physical shapes. As an example, if you've ever studied chess extensively, you realize that chess is a very mathematical game, and that words like "geometric" versus "algebraic" are common descriptors in the literature; but the best players are usually the ones with better geometric skills, people who can concretely understand spatial relationships on the board rather than having to think about moves algebraically, calculatedly. Maybe that clears it up, or maybe it muddles things further, but you probably know what I mean by now since you didn't get my use of that term the first time around.

    > and don't tell me that GUIs are absolutely better for manipulating data that is
    > not inherently pictorial

    No, they're not absolutely better, and I didn't say they were. But on balance they have as many strong points as CLIs do, and when well designed they're almost as powerful and versatile, and they're definitely easier to use and master. And they're the future; CLIs are the past and will be relegated to use by increasingly fewer people.

    > Show me a sysadmin that never uses a CLI and I'll show you a sysadmin that can't get
    > the job done.

    I was never talking about sysadmins--that's the problem, right there. Even when you're talking about "most users" or "the average user" or some such, there are some people in the geek community who will relate the issue to sysadmins or advanced users or people who are otherwise not even being discussed. It's also narrow-minded to assume that a GUI is automatically inferior to a CLI--the only reason some people want to think so is that CLIs are arcane knowledge, while GUIs are common knowledge. That's just silly elitism.

    Also, the one thing I think is holding Linux back from taking over computing is the fact that many of its coders write for themselves, not for users at large--they write for other CLI users, and sysadmins, and not for the other 99% of computer users. And that's fine, if you want Linux to be the best server OS, but to never make a dent in the home and business desktop markets. But I don't think that's what Free Software is all about. I don't think that Free Software is about coding something just for your own use; it's about sharing something useful with the rest of the world. I'm not saying that anyone in the Linux community should code only when the product will be useful to others; of course it's about doing what's right for you, and coding whatever the hell you want--it's your time and effort, after all. But what I see are some people who actively discourage creating an easy-as-Windows GUI, and that's just plain selfish. It's also quite shortsighted, since whoever wins the desktop will win on the server as well--if Windows.NET becomes this huge force that MS hopes it will, then ISPs and ASPs can easily be forced into buying Windows Server and Advanced Server in order to keep up with implementing all the whiz-bang ways of interfacing between client and server which Windows.NET will undoubtedly implement in closed, proprietary form. And that wouldn't be good for anyone, even the elitists.
  • If you're using Windows, Windows Commander [ghisler.com] can already do all of the things you've mentioned. Currently, it only applies to the file label's color and not the icon, but that may change in the future.

    Maybe some other OFM [softpanorama.org] for Unix has similar features available...

    --

  • It's very difficult to predict the future with any kind of accuracy, especially in the computer world, but I think that this article does a surprisingly poor job of it. The whole point of computers is that they are tools that we can use to accomplish other things. There seems to be a strong belief in the Linux community that the computer itself is the whole point, a kind of self-fulfilling effort. I don't mean this as a flame at all -- I'm just pointing out something that I see in this particular community.

    The clear general trend, on the other hand -- from UNIX to DOS to early Mac and Windows to more recent Mac and Windows and eventually MacOS X -- seems to be towards more intuitive interfaces that require less and less specific computer knowledge. There will always be some people who need to know their computers inside and out, but the vast majority of people will want to spend all their time using their computers to accomplish specific tasks.

    The article that discusses how the Linux community has embraced "anti-Mac" interface elements seems to be a little one-sided. In most ways that Linux programs have evolved since 1996, Mac and even Windows programs have evolved similarly. MacOS X is really no different from KDE and other window managers in terms of the analogies used.

    This topic is a little perplexing to me -- I cannot argue with the specific facts and examples given, but I don't think that the reasons or interpretations offered make any sense.

    smm
  • > You harp on this 'spatial intelligence vs abstract intelligence' thing quite a
    > bit, and it is rather irksome, since its a well-known fact in educational circles that
    > advanced spatial and geometric skills go hand in hand with mathematical ability and
    > abstract thought.

    Not at all true. While it is true that the most intelligent individuals have both forms of intelligence in great measure, very few of us are at or near that level. 180 IQs are unfortunately doled out quite sparingly at the factory. The majority of people, even the majority of people in the 120-150 IQ range, are markedly better in one area than in the other. You can dispute this all you want by calling your idea "fact," but if it is indeed a fact then point us to a source. It's certainly the opposite of what I was taught in college.

    > As a previous author has pointed out (and you pointedly ignored), most peoples'
    > stengths lie in narrative

    Yes, of course they do; I really wish more people had strengths in the area of reading comprehension, though, since I didn't disagree: I said that he was missing the point, that narrative requires descriptive words and those descriptive words can be of two types, mathematically (algebraically) abstract or spatially (geometrically) concrete. That is the important thing to note, since people cannot actually narrate in natural language to their computers. To narrate in the mathematically derived languages of most CLIs is fundamentally different from narrating in plain English; CLIs require a certain mathematical precision, while natural language is more flexible and intuitive to most people. That was my point. And until the computer can understand a plain natural language (English, let's say) CLI, the CLI will always be inferior *for most users* to the GUI. The GUI is equated with spatial, geometric terms, while the CLI is equated with abstract, algebraic terms. The classical example--I forget who said it--is the notion that CLIs require much knowledge internal to the user, but little on behalf of the computer, while GUIs require much knowledge internal to the computer but little knowledge on the part of the user. But it boils down to this: GUIs give the user concrete spatial and geometric information about the objects on his system and their relationship to each other, while CLIs require that the user already possess this information since he gets far fewer visual cues as to how objects are related. *Now* do you grok it?

    As for your last bit about how a spoken-language CLI would be more natural and useful--you should have read the post with which I started this thread:

    > In short, until computing power is sufficient enough and code is bug-free enough
    > to execute commands based on spoken language with few or no errors, the CLI will
    > never make a resurgence in any form among non-techno-geeks.

    Now, doesn't that sentence imply that when computers can understand plain English, then the CLI *will* make a resurgence (but until then)? Yes. So, I already said that and agreed with you. However, as the authors of the article referenced in the story noted, parsing natural language is a "computational nightmare" with present technology, so it's not going to happen any time soon. So, again, GUIs will be the paradigm used by almost all users, not CLIs, *until* computers have a CLI interface comparable--as I wrote earlier in this thread--to those used in Star Trek, in which users just tell the computer what to do and the computer just does it. But that's a long way off.
  • Try looking at MUDs. A system like LambdaMOO is a strong example of a language interface. You could abstract this to a Unix system by allowing all of its users to communicate with each other, with agents, and with the computer itself. Using a system of XML meta-documents for each file on the drive that could be manipulated through this interface would be useful. You would want to make the system use the underlying users/groups foundation for security, as well as possibly adopting some things from the MUDs. Another bonus of making it a sort of shared-shell environment would be that you could execute any program on the system that you could usually access, and it could work as well as having special programs (written in your choice of languages) that are MULO (Multi-User, Language-Oriented) specific, in the way X programs usually only work in X. This shouldn't even be a hard project for anyone with enough time to do it. Write a small daemon to act as the server and some sort of client program. It shouldn't seek to replace the command line, but only to complement it.
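    The daemon half really is small. A bare-bones Python sketch (no authentication and no MUD features at all, so treat it strictly as a starting point):

        import socketserver, subprocess

        class CommandHandler(socketserver.StreamRequestHandler):
            # One command per line; output goes back to whoever typed it.
            def handle(self):
                for line in self.rfile:
                    cmd = line.decode(errors="replace").strip()
                    if cmd in ("quit", "exit"):
                        break
                    result = subprocess.run(cmd, shell=True,
                                            capture_output=True, text=True)
                    self.wfile.write((result.stdout + result.stderr).encode())

        if __name__ == "__main__":
            # A real version would authenticate against the existing
            # users/groups model and relay chat between connected users.
            socketserver.ThreadingTCPServer(
                ("localhost", 4000), CommandHandler).serve_forever()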
  • The XMLterm web page says it best: It's a terminal... It's a web page... It's XMLterm [xmlterm.com].

    A GUI CLI, whodathunkit?

    While it might not qualify as a pure example of an Anti-Mac interface, it could easily be mistaken for a transitional form. The screenshots [xmlterm.com] from the web site tell the story better. Pay special attention to these two:

    Graphics with text CLI [xmlterm.com]

    Collapsing output from the GUI/CLI like a folder list [xmlterm.com]

  • Skins are NOT a solution to user interface design - instead they make the problem worse. The entire point of a GUI is that *all your applications look and act the same way*. This is the true problem with Unix desktops right now - they often look terrific but are utterly unusable by anyone because even basic things like scroll bars work differently between different apps (or the same app with different skins). Enlightenment is the most obvious example of this, but it's far from Raster and Mandrake's fault - they just followed the trend to its awful conclusion.

    People bitch and whine that KDE and GNOME look "ugly" or "plain" and act like Windows or MacOS. That's *GOOD*. Damn it, it's *GREAT!* It means that those people trained on 98% of the desktop machines in the world can operate *IX without being retrained. It means that all of us who are forced to work on Windows at work and choose Linux/BSD/etc at home don't have a paradigm shift twice a day. It means that if I download a new KDE or GNOME app and install it, I already know how to operate its widgets (if not its features).

    Bowie, whose wallpaper I love and use, tried to make a car analogy. Analogy this: Almost every car on the planet, from Kia to McLaren, has the same basic controls in the same places that work the same way. Skinning is making it possible for people to build the equivalent of cars with the brakes controlled by a wheel in the trunk, toggle switches in the glovebox for acceleration, and steering with a trackpad on the floor. In other words, it may well look great and stylish, but I sure as hell don't wanna be sharing the freeway with someone trying to operate such a thing.

    Skinning works best in the Winamp/XMMS case, where the controls are always in the same place and work the same way, but you can customize what they look like (this is a much better car analogy). I'm fine with that - if my XMMS has AmpMan LCD as the skin and yours has Pioneer ReplicAmp both of us can operate each other's XMMS without any undue pain. Just like it is with cars.
  • Do remember, however, that even though I drive a 1990 Toyota Camry, I can pop into ANY car and count on consistency. The gas will always be on the right. The steering wheel will always be right in front of me. The speedometer will always go clockwise, smaller to larger numbers. Perhaps there are fringe exceptions, but part of the reason you only have to take driver's ed once is because cars are so similar in their operation.

    How do you operate the cruise control on a 1992 Saturn? On a 1999 Volvo S70? What about the Volvo S80? The 1998 Honda Civic? How do I adjust the side-view mirrors (this is different from the left mirror to the right mirror even on the Saturn!)?

    What does the picture of the baby on the S80's control panel do? What does "0 more messages" mean? I gave it my CD; how do I get it back? How do I get the top back down on the C70? It's raining! Dammit, where the hell are the controls for the lights! And the damn wipers!

    Cars are a lot less consistent than people think. The three cars I have owned in my life have been more different than Netscape and Internet Explorer. Sure, the gas and brakes are in the same place, but even they are different. Pedals had a different feel. They all were very differently responsive to slamming the pedal down hard. The steering was very different in my first two cars (fully manual in my 1979 VW Rabbit, hydraulic assist in my '92 Saturn). If I tried some of the crap my C70 lets me get away with in my Saturn, it would come off the road.

    Consistency. Bah!

    Things in the real world have a lot less consistency than the Mac user interface guidelines mandate. Even the canonical example of cars.

  • Its name was SAVVY. It was on a peripheral card that plugged into an Apple and included a semi-AI language parser. You interfaced via a command line box, just like VFP programmers do today. It was built upon a version of Forth. It learned as you used it and became very comfortable. It included a database, and the programming language was a threaded and extendable scripting language that was Forth-like in its look and feel. Writing applications using it was child's play. Folks who had never coded before in their life were using SAVVY to computerize their businesses. A software version for the IBM PC was made. I demonstrated it at the 1983 Comdex in Las Vegas.

    Two things killed it. The inventor, James Dow III (great-grandson of Belle Starr), was paranoid about patents and used on-the-fly encryption to protect the SAVVY system code, so he had a hard time getting VCs to fund him. Secondly, it was initially designed as a stand-alone system on a single PC. Redesigning it for a multiuser environment was either more than he could handle or he was unwilling to share his secrets. Had he carried it to multi-user networking I'd probably still be using SAVVY today. Its ability to interpret your requests was uncanny, even when you misspelled words or got your syntax wrong.

    True Anti-Mac UIs will come when CPU speed and memory are about 10 times what they are today, and the hardware will generate virtual 3-D space, with tactile response, giving the user the illusion they are on a holodeck. Bots, represented by human avatars, will be assigned tasks which they wander off to do, returning to report when they are done, just like real people, but with no payroll, sick leave, vacation time or unions. True, obedient servants, and everyone will own as many of them as they wish. By then, I'll be dead.
  • The pro-Mac principles target the wretched masses who are so technically challenged, no? It seems to be a matter of defining what proportion of the market will favor utility vs. ease of use. So the question becomes: how much is the average person willing to educate themselves (i.e., how much patience and brains do they have) to use an interface at a certain level? I don't think there is an average; it's too segmented to be useful here. Carve the market up, that's my solution.
  • by Shoeboy ( 16224 ) on Saturday July 22, 2000 @02:33PM (#913375) Homepage
    I have the Anti-Christ interface on my box. All buttons are in the shape of inverted pentagrams. The user interface is best described as non-Euclidean madness that appears to emanate from somewhere behind the screen. The context-sensitive help contains passages from the Necronomicon, and the tech support number appears to be manned by crazed minions of Nyarlathotep.

    It's not terribly usable, but the increase in speed now that I've started mousing and typing with the tentacles growing out of my face more than makes up for the lack of a consistent desktop metaphor.

    --Shoeboy
  • AOL did the same pretty well, at the cost of absolutely no compatibility with anything not-M$...

    You do realise that AOL started out Mac-only and remains compatible with Macs, thus supporting something non-MS? There's probably no real reason that the AOL software could not be ported over to Linux--after all, they run it on two platforms now; if it's been written properly a third should be relatively easy (famous last words...).

    I think user interfaces will get dumber and dumber from now on--most mac/windows users can spend a lot of their time without a single keypress.

    Non sequitur. I believe that it's considered a benefit that things can be done with the mouse, as long as it is faster than it would be with some other input method. The problem is when, due to the extreme complexity of certain programmes (M$ Office, do you hear my cry?), it can take much longer to do something--using any input method--than in a well-designed programme.

    As a Mac and Linux user for years, I fail to see how the Mac interface is dumbed-down. It's no more dumb than any other appliance; I don't have to adjust DIP switches on my toaster or resolve attachment conflicts on my mixer. Nor do I have to type anything on my stereo--I just push the purty buttons and it works. For some reason this is desired in some appliance and decried in others such as computers.

    The goal of a good interface is to have layers of complexity. The top-level interface should be very simple, but it can be peeled away and a more complex interface revealed, which can be further rolled back for even more expressive power in return for a hefty learning curve. Something like what Apple is doing with Mac OS X could be the right track--a nice consumer GUI over top of a Unix shell, over top of a nifty kernel.


  • I agree with the article in most respects, but I have a different take on where it's all going. IMHO, the future of UI design isn't in global adherence to a standard dictated by a company. The future of UI design is where applications adhere to the user's own preferences. That seems to be the logical direction of things right now (well, the last few years at least). See, everyone wants cars, but some people want Ferraris, and some people want Toyotas. We're in the Model T era of UI design. Little black boxes with wagon wheels.

    There will be an article in next month's Wired regarding the skinning scene. If you want to see where real UI design is going, have a look at it. It makes a good companion to this article.

    My $0.02, (and Propaganda is open, btw. Click the link)

    Bowie J. Poag
  • Brilliant! These ideas really need to be tried out. Now where did I leave the source for gmc...
  • by Catroaster ( 176308 ) on Saturday July 22, 2000 @08:52AM (#913380)
    Could we have Anti-Windows next? Take the basic principles of Windows design (slow, clunky, bloated, crash-prone etc) and reverse them and see what happens?
    Sounds good to me :-)
  • The Mac is not a bad computer, and I will not hear non-Mac users complain about this any more.

    But my point:
    GUI is not the future. Real conversation is the thing. Combine a computer smart enough to have a conversation with a WWW filled with advanced, standardised meta-data tags, and you have it.

    "Good morning Knut, you have 5 new mails today."
    "Ok, read them please"
    "Mail #1: from your girlfriend, she wonders why you didn't show up last night"
    "Mail back: Sorry babe, I have two other girlfrinds to attend, got to dump ya'"

    Now that's Science Fiction!

    - Knut S.
  • > No; most people are verbally-oriented. The ability to visualize and juggle spatial
    > concepts is less common than the ability to describe, narrate, and request things.

    You're missing the point. How do you "describe, narrate, and request things"? To do so requires words which are descriptive somehow. Those descriptors can either be based on abstract mathematics, or on more concrete geometrical or spatial information. Spatial and geometric information is easier for most people than abstract mathematical information: if I say "The gas station you're looking for is three and a half kilometers down that road, then 2 kilometers to the left, then seven kilometers south," you'd have to know what a kilometer is, what south is, and process the abstract math associated with the directions. Many people (in the States, at least) wouldn't know what distance a kilometer represents, and some wouldn't be sure how to go south. Contrast that with concrete spatial data: "The gas station you're looking for is down that road; turn left onto Sycamore Street when you pass the Wal-Mart, then turn right onto Elm Street when you pass the bridge." Those directions are far easier for most people to follow because they're concrete: they require no specific knowledge except how to read signs, and nothing is abstract. The latter directions are therefore easier and more universal for most people. A CLI is more like the directions based on kilometers and compass directions, while a GUI is more like the directions based on visual cues.

    > The special skill of the mathematician is one of expressing in symbols things that
    > ordinarily can only be visualized.

    Or, as with a GUI, you can just visually display whatever you're working with instead of turning it into abstract mathematical-language-based symbols. What most computer science people fail to realize is that there *is* a huge difference between mathematically based languages and natural verbal languages. Verbal languages are learned as children, and describe a world we can physically see and be a part of; mathematical languages describe an already abstract world of data which we cannot physically be part of, and therefore it's much harder for most people to learn a mathematical language--especially since our ability to quickly learn languages is decaying by age six, and it's progressively more difficult to learn languages after that. But a GUI puts data into easy-to-understand and easy-to-experience visual cues which are intuitive and require no special skills or language. Even the elderly can successfully learn to use a GUI, while learning a CLI is far less intuitive.

    > In other words, mathematicians (and good
    > programmers) have the ability to formulate and utilize abstractions.

    There's no need for abstractions when the GUI model makes everything concrete and easy to understand and work with.

    > We use them because, like earlier text menus, they take advantage of the fact that
    > the human brain is much better at recognition than blind recall

    That's exactly right, and there's nothing wrong with that. Design a better system which allows such intuitive interaction, and then we'll talk. Until then, there's no use in trashing the paradigm which you just admitted works best for most people, out of all currently existing systems. I'm more productive with a GUI, and so are most people. I can literally operate my computer while I'm half-asleep, because I instantly recognize all the visual cues instead of having to remember an arcane string of commands and modifiers. So, my active thought processes can be better used in solving whatever task is at hand, rather than in trying to conjure up the arcane language of a CLI to execute whatever it is I decide I need to do. It's more productive to think about the tasks you're performing than to have to think about how to get the computer to perform a given task. In a GUI I click something and tick a few check boxes or what-not, whereas in a CLI I'd have to type something and then remember what to type to modify that command in a particular way. Waste of brainpower.

    > We should salute anyone with enough guts to try and take UI's beyond the mental world
    > of the 3-year-old.

    Umm, if computers are easy enough for a 3 year old to use, then so much the better, because computers are a means to an end not an end in and of themselves. I love tech, I love buying new video and audio cards and RAID cards and DVD decoders, I love configuring my system for the best performance and for my own needs. But I don't buy those things for the computer's sake, I buy them for my own. I buy them because there are things I want that hardware to do. Ultimately, a computer is just something we buy so that we can access the Internet, USENET, write, read, play games, compile programs, communicate with other people, listen to music, and any host of other uses. So, why shouldn't we make each of those things and all others as easy to do as possible? That's what I don't understand about some CLI supporters who are so unyielding as to be elitists. Why shouldn't we make computer use as easy and seamless as possible? Ease of use doesn't have to mean lack of power, and indeed most GUIs are more flexible and forgiving than most CLIs, though GUIs which aren't optimally designed can take longer to perform some tasks (but not all). The goal should be to make fast, efficient, easy to use GUIs, trim down some of the fat and make the code lean, not to try to keep alive the dying CLI which few people use outside of technogeekdom (and quite a lot of geeks like GUIs, too). The CLI will stay underneath, where gurus can get to it easily, but on top there needs to be a fully-functional and easy to use GUI, which Linux still lacks at this time even though it's getting closer very quickly. Until Linux requires no more CLI use as a requisite than Windows does, though, it's not going to win on the desktop. I hope it does, but the more people complain about GUIs and desktops, the longer it will be until Linux can take over from Windows. And again, people complain about the current GUIs, but they never code anything better. Do, don't just say. I look forward to Eazel, though, I can tell you that much.
  • While I'm inclined to believe all of the Anti-Mac/X-is-evil/GUIs-are-good/Voice-is-the-future stuff, I have a hard time getting over the UNIX development model. When we first started using computers for important work, the tools and interface developed over years' worth of user input - grep got all of its features because at one point someone actually NEEDED that feature. X is a part of that, in that it developed out of what people needed.

    If the Anti-Mac interface were so compelling, there should be someone making one. Could it be that it's not quite as terrific as we expect? Could it be that implementing a whole bunch of the Anti-Mac interface is just too different, too difficult, and that the current model will suffice for a long time?

    Which brings me to X, which is the Anti-Anti-Mac, because while your window manager, session manager, etc. might do neat stuff, X is only concerned with where the mouse is (focus) and what the keyboard does. And despite the fact that it's an ill-suited interface paradigm, we all still use it, and it still evolves because of our needs (witness the growth of 3D acceleration in XFree86 4). While it's still an incremental imitation of 30-year-old design, it evolved with us.

    The short truth is that good user interface, like any other tool, evolves with the needs of the user, not with the desire of the planner.

  • Read the original Anti-Mac [acm.org]. They state:
    We should state at the outset that we are devoted fans of the Macintosh human interface and frequent users of Macintosh computers. Our purpose is not to argue that the Macintosh human interface guidelines are bad principles, but rather to explore alternative approaches to computer interfaces. The Anti-Mac interface is not intended to be hostile to the Macintosh, only different. In fact, human interface designers at Apple and elsewhere have already incorporated some of the Anti-Mac features into the Macintosh desktop and applications. The Macintosh was designed to be "the computer for the rest of us" and succeeded well enough that it became, as Alan Kay once said, "the first personal computer good enough to be criticized."
    and
    In this article, we explore the types of interfaces that could result if we violate each of the Macintosh human interface design principles.
    Hence, "anti-mac".
  • Statistically speaking, the expert user is a myth. 99% of all people do not grok computers, nor do they have the slightest interest in doing so. They understand the real world and have difficulty with many abstractions that do not have real world analogs.

    Natural language interface? Hoo, Hoo! That's a good one! Most people have a difficult time mastering their own native language that they speak every day. I regularly meet people with whom I have some difficulty in simple conversation due to language barriers, people who natively speak my own language (English). And there are significant subcultures which seem to take pride in significant lack of language skills. Would a natural language interface marginalize these groups? Who gets to decide what is "natural"? If you go by any sort of official language rules for English, many US citizens will have difficulty with it.

    I agree that the GUI as epitomized by the Mac is very limiting for some aspects of computer interaction. My own conjecture is that eventually there will arise an unnatural language with which we can interact with the unnatural computer world. This language will still have metaphors and control paradigms, but these will not have direct real world comparisons. They will instead be wholly appropriate for expressing and manipulating the many concepts of the digital world. And I strongly suspect that many elements of today's GUI design will still be intermixed with that interaction.

  • The point of automobiles isn't automobiles, it's transportation. You still have to put some time and effort into learning how to drive. A computer is a tool, but it's a complicated tool.

    This is a stupid analogy. What of anti-lock brakes? Automatic transmissions? Cruise control?

    If you look at the evolution of most consumer products, including computers, it is obvious that a large goal of the creators is to improve effectiveness as well as to ease the learning curve of the product.

    It's not at all unreasonable for consumers to ask that the technology they want to use be intuitive and easy to learn. Take the classic example of VCR clocks. ...The one I bought 4 months ago set its own time when I plugged it in. I don't know how, nor do I really care. It works, and that's one less thing I have to waste my time learning how to do.

    I get sick of this "hard" attitude--that you need to somehow put a certified amount of time into learning a tool before you can use it. It hinders productivity.

    -Ty

  • by Dr. Spork ( 142693 ) on Saturday July 22, 2000 @11:03AM (#913393)
    The original article was at least nicely written, though I think quite misguided in the handling of the concrete examples they discuss. Sure, their PRINCIPLES sound right, but when it comes to actually proposing concrete changes, they all sound like steps backwards.

    For example, the proposed motivation for violating the Mac "Forgiveness" principle was weak! No one gets mad about having to click through one more warning in an improbable situation like the one they describe, if in exchange they can feel secure that all of their actions will be reversible for a while.

    In fact, one problem with Windows, and to a lesser extent Mac and Linux, is that the reversibility principle that falls under the heading of Forgiveness is constantly violated. When I install something into Windows from the internet I have to pray that it has an uninstaller, because it sure as hell isn't going to tell me where it is putting all of its files, and what lines of the registry it is modifying. And even most GOOD uninstallers don't fix your registry. RPM and DEB are a pretty good solution to this in Linux, but how much fun is it to uninstall WordPerfect 8? Or KDE? I have a feeling no one really tries; we just install fresh, because by the time we muck up our drive enough, our favorite distribution has released a new version. This does not cut it in the way of reversibility.

    In short, there is nothing to be proud of in violating the Mac design principles. It might have a pleasant "look, I'm a bad boy!" feel to it, like using profane words or screwing some chick in the butt used to have for me when I was in grade school--but come on, people, let's grow up!

    Probably the best way to accommodate new and advanced users at the same time is to start with an intuitive interface which is hard-core customizable. And I mean more than just E or something we have now. This is where Linux beats the Mac interface--because the Mac people are too afraid to let users customize the interface (and it looks like this won't change much in OS X). For tedious operations we write scripts, and for all I care, if they program some friendly voice that helps me write it, that's fine with me, as long as it doesn't remove any power from what I'm able to do. The nice thing about this "newbie default, rich customization" attitude is that it's self-regulating. Newbies don't seem to mess around with complicated preferences until they are largely over their newbieness. But then, at least in Mac and Windows, they hit the program's ceiling and aren't able to customize any more. This is what needs to get fixed.

    The solution isn't some language-based agent living in my computer with whom I negotiate an action. No way! My computer should be my bitch and do exactly what I tell her to, and nothing else. Most of the configuring I have to do in Windows involves turning shit OFF that they have on by default. (Including the paperclip!) No thanks, I don't want an OS that's more autonomous than Windows; I want one that's less. I'm not quite a newbie user, but when I taught my mom Win95, she was almost demoralized by how unpredictable everything seemed ("Why is the program capitalizing that word for me?"), and how easily she might screw something up. So it's not just me that wants my computer to behave predictably.

  • I used to lock lots of various files to ensure they didn't get changed, like icon-holding files, since the icon handling system is somewhat primitive and easy to mess up (select cut instead of copy and your icon is gone). But when I wanted to delete these it refused because they were locked, so I had to go through and individually unlock each one.

    Extraordinary! On a Mac it is easy to delete locked files; there are a number of ways of doing it, including holding down the option key when emptying the trash. And of course if you cut an icon instead of copying it, you can just paste it right back, or use undo.

    This is a perfect example of how even the easiest-to-use interface still presents challenges. And people are talking about 3D interfaces and all that. What is really needed is adaptive interfaces: interfaces that recognize what the user is TRYING to do, and adapt themselves effectively to even simple tasks. What we need is not complexity but adaptability.

  • by EvilJohn ( 17821 ) on Saturday July 22, 2000 @08:53AM (#913398) Homepage
    ... and should scale to the level of the user.

    The idea that interfaces should be a one-size-fits-all implementation is what needs fixing.
  • Check Jakob Nielsen's web site; he is one of the authors of the original Anti-Mac Interface paper.

    www.useit.com [useit.com]

    He is one of the "masters of usability" and he has been publishing weekly articles for a long time centered on Internet design. Reading them can be a great help whenever you have to publish something.

    Fh

  • by the eric conspiracy ( 20178 ) on Saturday July 22, 2000 @09:00AM (#913405)
    The pro-Mac principles target the wretched masses who are so technically challenged, no?

    NO.

    The purpose of the Mac user interface is to enable the user who is not inclined to spend time needlessly learning technical minutia when he could be using the computer as a tool towards other ends. The point of computers is not computers, but as tools to be used to accomplish a wide variety of tasks.

    It has nothing to do with whether or not you are technically challenged; I have worked with brilliant scientists who have no use for computers other than simple tasks like email and writing papers. It is about whether you see the computer as an end or as a means.


  • ... and it's not what I thought it would be. And they DO have some interesting points. I STILL prefer the MacOS GUI, though... and if the reports are true that OS X will have tcsh installed by default... nirvana.

    Okay, some elements of Aqua suck, but Ars would have us believe that it's all defined in XML files that root can edit. I know the first thing *I*'m gonna do is put all of the window widgets back in the right place. (to hell with Steve's "traffic light" metaphor)

    >massive neural strain induced by too much coffee)

    Heh... does half a box of penguin mints a day, + coffee + tea + Dr. P + some mini thins count? We just made a massive deadline yesterday and everyone was really pushing to get our beta deployed by today. Last nite was the first time in a week I slept more than 4 hours. I feel much better now.

    But yeah, even when I'm calm and well rested, I tend to get too defensive when microdrones go on the rampage and bash anyone who dares not to follow the banner of Bill (how about Linux *ON* a Mac (Yellow Dog)? Does that make me TWICE the heathen?).

    I just hope Eazel is less of a resource hog than GNOME. I STILL think the Mac team came up with the best interface yet invented, but my main Linux box is still a K6 running SuSE. And it DRAGS under GNOME.

    john
    Resistance is NOT futile!!!

    Haiku:
    I am not a drone.
    Remove the collective if

  • If it only applies to the label and not the icon, how is that doing "all the things mentioned"? Three out of the four suggestions were related to the icon and taking advantage of its high-res graphics.
  • by Kaufmann ( 16976 ) <rnedal@olimpo.co ... r minus language> on Saturday July 22, 2000 @09:02AM (#913409) Homepage
    I've done some research in UIs for expert systems before; I read the original Anti-Mac article in 1996, and have been pointing to it whenever there's a discussion on user interfaces here at Slashdot. And while all in all this is a very good article, I don't believe that the Unix/X way currently embraced by most Free Software projects is the way to an Anti-Mac system, especially not as it is today. Why?

    Well, first of all there's the issue of files. While Unix is almost entirely based on the notion of files, unstructured blobs of data, this paradigm is wholly inappropriate in the network-centric Anti-Mac world. What we want are high-level objects with implicit structure. Text files with XML structure are not good enough.

    You also have to consider the issue of multiple agents. This requires a clear-cut, high-level model of interaction between systems, which, of course, doesn't exist (nor could exist) on Unix. A capabilities model would be more appropriate, but even capabilities by themselves aren't enough; the system should be developed in a high-level language itself, which could be used for remote scripting and automation as well.

    Then there's the issue of the existing Unix UI systems - by that, I mean X. (Yes, I know about Berlin. I hope it'll be good, but I'm not holding my breath.) Although there's native support for network graphics, it's much too low-level and just generally broken.

    Finally, as a lot of people often point out, most of the work in Free Software is still driven by imitation. This is true even (especially?) in GNOME, which is admittedly patterned after Windows' model of low-level, unsafe components, with an accordingly brain-damaged user interface. I don't believe that a lot of people working on Free Software have done much UI R&D (with some notable exceptions on the Eazel and Helix teams), so the general tendency for GNU/Linux UIs is (and will be, apparently) that of imitation.

    In conclusion: unlike the author, I don't see that the GNU/Linux world is going the Anti-Mac way, quite the contrary. Perhaps some other, entirely new Free Software project might spring up which would be a better match to Nielsen and Gentner's vision; the Morphic UI system of Squeak is a very good start.

    (Note: I'm not feeling too well, and I'm writing this kind of in a hurry, so I probably left a lot unclear and unexplained. I'll be glad to clarify.)

  • Is there any reason I should trust the opinions about "user interfaces" of people who don't use paragraph breaks?
  • The scientist you described would, in the context of this discussion, be described as technically challenged

    Not really. He may well understand the workings and uses of a computer far better than you or I. The issue is that he has no desire to engage the computer on that level in order to send an email message.

  • Good grief, all you did was repeat half of what I wrote, but with five times the words. You write as if I were trying to defend CLI's. I'm not. They are an arcane and essentially primitive use of human language abilities. My point was that GUI's aren't any further advanced in the use of the human capacity for visual expression than CLI's are of the human capacity for verbal expression. Their advantage is that they allow for recognition to replace recall, and in a relatively compact way. That's a huge advance in "usability" over a CLI. It's a damn shame it's pretty much the only one.

    Your example shows the power of narrative and recognition (e.g. landmarks like the Wal-Mart), not the power of visualization. An example of the latter would be a simple map. And that's what I want in a UI.

    Why can't I draw a simple map for my computer and say "do this?" Instead, I'm forced to click on this, then this, then this. If I want to do the same thing to a different set of objects, I have to go through the same series of clicks. Do GUI's let me draw a series of steps and then apply them to the objects I choose? No, they force me to script -- in other words, back to the g*d d*mn CLI again!

    That's primitive!

    -Ed
  • by DragonHawk ( 21256 ) on Saturday July 22, 2000 @04:29PM (#913420) Homepage Journal
    Even BeOS, which is as bad as most other OSs in the aspects discussed here, implements this concept in a limited fashion: instead of having a fixed, global, immutable set of file attributes, the Be File System associates each file type with a set of attributes.

    Personally, I think this is better handled as shared user-space glue code which all applications make use of. One of the design goals of Unix (for better or worse, although it seems to work well) is to keep everything in small parts. Rather than building application file attributes into the core file system (where, really, they don't need to be), invent a libFileAttributes (or whatever) API that provides a consistent interface - a sketch of what I mean follows below. Kind of like file(1), mailcap(4), and a few other things rolled into one. Indeed, such functionality might be best implemented in terms of these existing tools.

    Projects like GNOME and KDE are heading in this direction, but my only real beef with such efforts is that they are generally being done at too high a level. File attributes shouldn't require a GUI to function.
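    Just to make the idea concrete, here is a minimal sketch of what such a libFileAttributes call might look like. Every name in it (fa_attribute, fa_get) is hypothetical - this is not a real library, just one possible shape for the glue code, with the type lookup hard-coded where a real implementation would consult file(1)-style magic and a per-type table:

        /* Hypothetical libFileAttributes sketch -- all names invented.
         * Attributes hang off the file's *type* (the way mailcap(4)
         * hangs handlers off MIME types), so no kernel support needed. */
        #include <stdio.h>
        #include <string.h>

        typedef struct {
            char name[64];
            char value[256];
        } fa_attribute;

        /* Look up an attribute for a file.  A real implementation would
         * determine the file's type via magic numbers and consult a
         * per-type attribute table; one rule is hard-coded for brevity. */
        static int fa_get(const char *path, const char *name, fa_attribute *out)
        {
            const char *dot = strrchr(path, '.');

            if (dot && strcmp(dot, ".html") == 0 && strcmp(name, "viewer") == 0) {
                snprintf(out->name, sizeof out->name, "%s", name);
                snprintf(out->value, sizeof out->value, "lynx");
                return 0;
            }
            return -1;  /* attribute not defined for this file type */
        }

        int main(void)
        {
            fa_attribute attr;

            if (fa_get("index.html", "viewer", &attr) == 0)
                printf("viewer for index.html: %s\n", attr.value);
            return 0;
        }

    The point is that any program, GUI or not, could link against the same library and see the same attributes.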
  • When implemented on a traditional file system, that kind of library-based solution will always end up being inefficient and sub-optimal; it'll feel like a dirty hack to the application programmer; you'll reach a point when it's preferable to just use a full-fledged DBMS and get it over with. (I speak from personal experience: in my few (and, thankfully, now over) years of Web programming, I went from flat files to the Tie::File Perl module to CSV to MySQL to finally giving up and going to write nice little non-Web applications.)

    Of course, your assumption that "where, really, they don't need to be" isn't necessarily true either; in a lot of applications, the metadata are just as important (or at least, accessed as often) as the associated data. In this case, it would be a really bad idea to impose an additional layer of abstraction between them.

    I found this particularly curious:

    One of the design goals of Unix (for better or worse, although it seems to work well) is to keep everything in small parts.

    Keeping everything in small parts is a Good Thing, yes. But it should apply at a finer grain: instead of having a thousand statically compiled (from one single big-assed C procedure) programs of a few dozen K talking to each other through a half-assed character-stream system (piping? WTF?), it's better to have a million dynamic modules of a few hundred bytes, with well-defined interfaces in a high-level declarative language. And then it ceases to be the Unix philosophy, and becomes the (much better, IMAO) Lisp Machine philosophy! :) See "abstraction inversion". (A toy sketch of the contrast follows below.)

    (Note that my +1 bonus remains turned off :)
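    For what it's worth, here is a toy illustration of the contrast drawn above, in C for concreteness. Everything in it (the Record type, the filter_fn signature, run_pipeline) is invented for the example: stages composed this way pass a typed value whose interface the compiler checks, where a shell pipeline would pass untyped bytes that every stage must re-parse:

        /* Toy sketch: composing stages through a typed interface
         * instead of a character stream.  All names are invented. */
        #include <stdio.h>

        typedef struct {
            int id;
            double value;
        } Record;

        typedef int (*filter_fn)(Record *r);  /* 0 = keep, -1 = drop */

        static int positive(Record *r) { return r->value > 0 ? 0 : -1; }
        static int scale(Record *r)    { r->value *= 2.0; return 0; }

        /* Run each record through the filters, like 'positive | scale'
         * in a shell, but with a compiler-checked interface. */
        static void run_pipeline(Record *recs, int n, filter_fn *fs, int nf)
        {
            for (int i = 0; i < n; i++) {
                int keep = 1;
                for (int j = 0; j < nf && keep; j++)
                    if (fs[j](&recs[i]) != 0)
                        keep = 0;
                if (keep)
                    printf("id=%d value=%g\n", recs[i].id, recs[i].value);
            }
        }

        int main(void)
        {
            Record recs[] = { {1, 3.5}, {2, -1.0}, {3, 0.25} };
            filter_fn pipeline[] = { positive, scale };

            run_pipeline(recs, 3, pipeline, 2);
            return 0;
        }

    A pipe-based version would need each stage to parse and re-serialize text; here the "interface" travels with the type.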
  • by PenguinX ( 18932 ) on Saturday July 22, 2000 @09:04AM (#913429) Homepage
    Is it just me, or doesn't MacOSX have a fairly Anti-Mac interface? The extended Finder looks quite a bit like Windows Explorer or Nautilus. It would seem that even Apple is extending the original framework for the design of GUIs. For instance, "Consistency" is important, yet MacOSX is not quite as consistent as its predecessor. The concept of "User Control" is also going to be largely out of scope with OSX, because BSD is essentially running in the background. All of the "Anti-Mac" interface qualities seem obviously met - so perhaps Apple is taking this paper to heart.

  • Skinning is the future of UI design? Ghod help us!

    Seriously, this goes much farther than mere skinning. Why limit yourself to cosmetic changes in your program?

    Saying that instead of going towards the Anti-Mac interface, we're going towards skinnable apps, is like saying that instead of voice-recognition, the future of PDA interaction is in being able to change the colour of the case!

  • I agree with the article in most respects, but I have a different take on where it's all going. IMHO, the future of UI design isn't in global adherence to a standard dictated by a company. The future of UI design is where applications adhere to the user's own preferences. That seems to be the logical direction of things right now (well, the last few years at least). See, everyone wants cars, but some people want Ferraris, and some people want Toyotas. We're in the Model T era of UI design. Little black boxes with wagon wheels.

    Yes, but notice that the User Interface of a Toyota is remarkably similar to that of a Ferrari. They both steer using a wheel, they both have the same arrangement of pedals. Why would I want cars that had different UIs? Each time I got in a new car, I would have to learn to drive it all over again. Do you remember how hard it was to learn to drive? That's how computers were before GUIs gave us consistency - for each new application you had to learn to drive it from scratch. Now a lot of what you have to learn transfers from previous applications, and you just have to learn the bits that are different between apps.

  • by mbpomije ( 131606 ) on Sunday July 23, 2000 @05:22PM (#913436)
    The fact that this incoherent, illiterate, and ignorant rant was marked as insightful is hilarious! All he had to do was imply that GUI users are idiots and the Slashdot crowd ate it up.
