David Korn Tells All

David Korn answers! You asked the questions a little while ago, and now David Korn has kindly responded with answers (some lengthy, some pithy, all appreciated) to queries about the eponymous shell, the famous Microsoft/Korn story (in short -- yep, it's like you've heard), and proper Unix behavior. He even made his son Adam write one of the answers.

Question 1) Comparison
by Shadowlion (shadowlionc@netscape.net)

Background: the only shell I've ever really used is bash. Bash has always seemed to be the standard UNIX shell (or, at least, the standard default UNIX shell), and for the most part I've always been able to do what I wanted in it.

Question: can you engage in a little unadulterated advocacy for a moment to offer some reasons why an informed user might consider using ksh over bash or other popular UNIX shells? What does ksh provide that other shells don't? Similarly, can you give a realistic appraisal of ksh's drawbacks as compared to bash or other shells?

Thanks.

David Korn: First of all, when I talk about ksh here, I am referring to ksh93 which was first released over six years ago. Most UNIX systems ship with ksh88 and most Linux systems ship with pdksh, neither of which has the functionality of ksh93.

There are two different areas of functionality in shells. First is interactive use and the second is scripting. Much of the debate about shells has focused on interactive use only. For example, tcsh is an acceptable shell for interactive use but practically unusable for scripting.

In many cases the argument over which shell is best for interactive use is based upon which key to press for completion. This is a little like arguing that Solaris is better than Windows because of the location of the Control and Shift keys, or that vi is better than emacs because you can save a keystroke or two. Most popular shells have similar functionality with respect to interactive use.

It is hard to argue that ksh is any better for interaction, given all the features in tcsh and zsh. But the scripting features in ksh93 are far more advanced than any other shell that I am aware of. For scripting, I feel that ksh is more in the category of perl/tcl/python and I would like to see debates/comparisons for those languages rather than the antiquated bash/csh/etc.

I have not looked at bash for several years and some of the features I describe here might now be implemented by bash. I sent Chet Ramey, author of bash, the list of new features in ksh93 years ago so that if these features get implemented in bash, they would be compatible. Here is a partial list of ksh93 features:

  1. Associative arrays (ksh88 already supported indexed arrays).
  2. Floating point arithmetic plus math library functions.
  3. An arithmetic for command similar to C and awk.
  4. Complete ANSI C printf formatting with extensions.
  5. Run-time linking of libraries and builtins.
  6. A number of additional substring operations, such as extraction by offset and length.
  7. Full extended regular expression matching capabilities.
  8. Compound variables, which can be used to represent data aggregates.
  9. Name references for passing aggregate variables by name.
  10. Active variables: users can trap variable assignments and references by associating intercept functions of the form name.get and name.set.
  11. The ability to make socket connections to servers by name.
  12. read with timeouts.
  13. Conformance to POSIX 1003.2.
  14. Command completion and variable completion (ksh88 had only file name completion).
  15. A key binding mechanism that allows users to bind keys to new editing functions.
Note that only the last two features relate to interactive use. The primary focus of ksh93 is scripting and in this arena it certainly outshines bash. ksh93 runs builtin shell commands much faster than bash.
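
A few of these features in action (a minimal sketch assuming a ksh93 binary; none of it runs under ksh88 or pdksh):

  # 1. Associative array
  typeset -A color
  color[apple]=red
  print "${color[apple]}"            # red

  # 2 and 3. Floating point arithmetic in a C-style for loop
  typeset -F2 sum=0
  for (( x = 0.5; x < 2.0; x += 0.5 )); do
      (( sum += x ))
  done
  print "$sum"                       # 3.00

  # 8 and 9. A compound variable passed by name reference
  point=( x=1 y=2 )
  function bump {
      nameref p=$1                   # typeset -n under the hood
      (( p.x++ ))
      (( p.y++ ))
  }
  bump point
  print "${point.x} ${point.y}"      # 2 3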

There is actually a third area of shell functionality, which is related to extensibility. In this area ksh93 should be compared to tcl. ksh93 is implemented as a reusable library with a C language API for adding builtins and accessing shell internals. It can be embedded in other programs. For example, dtksh, which is part of the Common Desktop Environment (CDE), uses ksh93 as a library. Similarly, tksh (written by Jeff Korn), which uses the tk library for graphics, uses ksh93 as a library.
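
As a taste of that extensibility from the shell side, the ksh93 builtin command can bind new builtins out of a shared library at run time (a sketch; the library and command names here are hypothetical, but the builtin -f mechanism itself is the documented one):

  # Bind the function b_hello from a shared library as a new builtin
  # named hello; ksh93 looks up functions following the b_<name>
  # convention. The builtin then runs in-process, with no fork/exec.
  builtin -f ./libhello.so hello
  hello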

The primary drawback to ksh has been that it was proprietary. This has recently changed, however. The new AT&T open source license allows ksh source and binaries to be shipped as part of a system, and ksh93 is just beginning to show up in Linux distributions, for example the latest Slackware. The source or binary for over a dozen architectures can be downloaded from http://www.research.att.com/sw/download. Hopefully other systems will start shipping ksh93 and using it for /bin/sh as well.

Question 2) What about enhancing ksh syntax?
by mirko (mirko@myfamilyname.org)

Ksh is quite cool as it is much more compact than bash; here are their respective sizes on a Solaris system:

-> /usr/local/bin/bash -version
GNU bash, version 2.02.0(1)-release (sparc-sun-solaris2.6)
Copyright 1998 Free Software Foundation, Inc.

-> ls -la /usr/local/bin/bash
-rwxr-xr-x 1 bin bin 3157516 Jul 14 1998 /usr/local/bin/bash
# ksh -o emacs
# Version M-11/16/88i

# ls -la `which ksh`
-r-xr-xr-x 2 bin bin 186356 Jul 16 1997 /usr/bin/ksh

On a Linux system, these are approximately 300k for bash and 160k for (pd)ksh.

In which direction do you plan to improve it?

Will you rather keep it compact, or extend its functionality regardless of the size increase?

This issue is quite important for me, as I am currently working on a system-on-a-floppy distribution, and size is critical in that context.

Korn: First of all, pdksh is a ksh88 clone, and I might add a better clone than the MKS Korn Shell.

A lot of effort was made to keep ksh88 small. In fact, the size you report on Solaris is without stripping the symbol table. The size I am getting for ksh88i on Solaris is 160K, and the size on NetBSD on Intel is 135K.

ksh88 was able to compile on machines that only allowed 64K of text. There were many compromises in this approach. I gave up on size minimization with ksh93. ksh93 is larger than bash. However, because of the number of commands that can be built in to ksh93, it is possible to put ksh93 and approximately 30 additional commands on a floppy in approximately the same space as ksh88 and those commands.

Some of the reasons for the large size of ksh93 are:

  1. Floating point arithmetic.
  2. The use of Sfio for Stdio. Use of the native stdio would not have counted towards the size on most systems, because it is part of the shared C library. However, Stdio has too many weaknesses to be used portably and safely by ksh.
  3. Self-documenting code. All the built-ins to ksh93 can generate their own manual page in several formats and locales. This takes up space for the documentation text as well as the processing libraries.
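
For instance, at a ksh93 prompt any builtin will produce its own documentation on request (a small sketch; the output format varies by version and locale):

  # --man writes the builtin's manual page to standard error
  read --man 2>&1 | head
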
Question 3) pdksh...
by mirko (mirko@myfamilyname.org)

Do you collaborate (or plan to) with the pdksh development team?

Korn: I don't know the pdksh development team but I would like to thank them for the service they have done in making a version of ksh available while ksh was proprietary. I have noticed remarkable improvements in pdksh in its ability to mimic ksh88 functionality. I don't know what plans the pdksh development team has now that ksh93 is available in open source form, but I certainly would help them try to maintain compatibility if they do continue pdksh distribution. Otherwise, I would hope that they would pick up the ksh93 source and help support and enhance it.

Question 4) ksh today
by Y-Leen

During the design of ksh, were you limited/influenced by computer hardware and consumer market?

Given the chance to completely redesign ksh for today's higher spec machines and the current consumer base, what new features would you include?

Korn: In many ways, yes. In the early days memory size was a big consideration. The fact that sixteen-bit machines were the norm influenced the design. The fork() call on most systems caused the data area to be copied in the child process, and therefore keeping the data region as small as possible was needed for performance reasons.

Another design principle that was influenced by the hardware and the OS was the desire to only have features that would work on all systems so that korn shell scripts would be portable.

The decision to keep ksh compatible with the Bourne shell was influenced by the consumer base. There are very few Bourne shell scripts that will not run unchanged with ksh.

In a complete redesign, all shell functions would be thread safe so that multiple shell interpreters could run in separate threads.

There are two features missing from the current ksh93 release that are present in perl and should be added to a future release. One is the ability to handle and process binary data. The second is related to name spaces.

Question 5) True Story?
by travisd (travisd_no_spam@tubas.net)

Was the story about you embarrassing a Microsoftie at a conference true? Specifically, that he was insisting that their implementation of ksh in their unix compatibility kit was true to the "real" thing and trying to argue the point with you. The argument ended when someone else finally stood up and informed the speaker who he was arguing with.

Just curious ...

Korn: This story is true. It was at a USENIX Windows NT conference and Microsoft was presenting their future directions for NT. One of their speakers said that they would release a UNIX integration package for NT that would contain the Korn Shell.

I knew that Microsoft had licensed a number of tools from MKS, so I came to the microphone to tell the speaker that this was not the "real" Korn Shell and that MKS was not even compatible with ksh88. I had no intention of embarrassing him and thought that he would explain the compromises that Microsoft had to make in choosing the MKS Korn Shell. Instead, he insisted that I was wrong and that Microsoft had indeed chosen a "real" Korn Shell. After a couple of exchanges, I shut up and let him dig himself in deeper. Finally someone in the audience stood up and told him what almost everyone in the audience knew, that I had written the "real" Korn Shell. I think that this is symbolic of the way the company works.

Question 6) Kind of a shell question...
by update()

There's a lot of squabbling in the Linux world about how the Unix mentality of small apps communicating through standard input/output to form a pipeline should be maintained in the new whiz-bang, GUI environments. Do you think that it can/should be done? What should be the most important considerations for such a messaging system and how should a standard be established?

So Korn (the band) drinks Coors Light? I might have suspected...

Korn: As far as the Korn band is concerned, as you probably already know, it was formed in 1993 to promote the Korn Shell. On the kornshell.com website (click on fun), you can see them endorsing the KornShell Command and Programming Language book, the most complete reference book on ksh93.

There are many people who use UNIX or Linux who IMHO do not understand UNIX. UNIX is not just an operating system, it is a way of doing things, and the shell plays a key role by providing the glue that makes it work. The UNIX methodology relies heavily on reuse of a set of tools rather than on building monolithic applications. Even perl programmers often miss the point, writing the heart and soul of the application as a perl script without making use of the UNIX toolkit.

Our aim with ksh, and with the roughly 150 additional tools that are part of the AST Toolkit (http://www.research.att.com/sw/download), has been to enhance the UNIX toolkit by providing new and improved tools.

Clearly most users prefer GUI interfaces, and GUI interfaces are often hostile to the UNIX methodology. This presents a challenge for GUI design, and unfortunately many GUI applications are built as monoliths with little hope of reuse, making little use of the UNIX tool set. I think that the most important consideration when building a system is to separate the GUI from the rest of the system. Anything that can be done with the GUI should be able to be done without the GUI, via scripting. This is important if you ever want to automate tasks in the future. Scripting can then be used for automation and for testing.

This tight coupling between GUI and application has other drawbacks as well. It makes it hard to distribute functionality from the client running the GUI to a server that could be somewhere else on the network.

Question 7) What functionality/code in ksh are you least proud of?
by segmond (segmond[at]hotmail)

It is very hard to find a programmer who is completely satisfied with his code. No matter how happy he is with it, there is always that part which he wishes to improve. As far as ksh is concerned, what is it that you wish you hadn't done, or could improve?

Korn: The things that I most regret are not the code itself but mistakes in the shell language that I inherited from the Bourne shell. A lot of effort was made to maintain compatibility while trying to improve ksh at the same time. Compatibility was an extremely important consideration for the adoption of ksh by many organizations. At one point I backed out an improvement because it broke one script in a published UNIX book. If I had my choice, the backquote would disappear as a special character and $() would need to be used instead. Double quoted strings would interpret all of the ANSI-C backslash conventions as well as expand $ expansions. The echo command would not perform any interpretation on its arguments. Word splitting would be disabled by default. Note that because I did not make these changes, ksh can safely replace /bin/sh on most systems today, unlike zsh, which would cause many scripts to fail in mysterious ways.
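
To make those regrets concrete, here is roughly what each inherited behavior looks like (a sketch; this is standard Bourne/POSIX behavior rather than anything specific to ksh):

  # Backquotes nest only with escaping; $() nests cleanly.
  out=`basename \`pwd\``        # the inherited form
  out=$(basename $(pwd))        # the form Korn would keep

  # Unquoted expansions undergo word splitting by default,
  # the behavior he would disable in a redesign.
  msg='two   words'
  set -- $msg                   # $# is 2: splitting ate the spacing
  set -- "$msg"                 # $# is 1: the string survives intact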

There are also a number of features that I introduced into the shell that I wish I had done differently or not at all. Some obvious examples are the poor design of the fc command, the let command, which was superseded by ((...)), and the exporting of attributes for shell variables.

The code has changed a lot over the years which I suspect is an indication that I am never completely satisfied with my code. In fact whenever I look at code I wrote a few years ago, I can't believe how bad it is and I always believe that my new code is finally ok.

What bothers me most about the current code is that, in spite of its modularity, there are too many places where code in one module works only because of the way another module works. The effect of this is that many changes that I make are not orthogonal and require changes to several parts of the code.

Another weakness of the current code is that it is not completely thread safe.

Question 8) UWIN and etc
by rabtech (russ_slash@boneville.net)

You once said that you had to learn Windows NT because you couldn't criticize what you didn't know. What I'd like to know (as a primarily Windows programmer) is what do you consider to be the best and worst parts of both the Windows NT/2000 model and the UNIX model. What advice can you give? Also, has working on the UWIN project given you any insights that you can share with the rest of the community?

Korn: I have spent the last five years writing a UNIX for windows which is called UWIN (http://www.research.att.com/sw/tools/uwin). My experience is with the WIN32 API for the core OS and not with the WIN32 interface to the GUI. Clearly I had to learn more about the WIN32 API than I think that anyone should need to know. The more I know about the WIN32 API the more I dislike it. It is complex and for the most part poorly designed, inconsistent, and poorly documented.

One of the biggest myths about Windows is that the MS operating systems are compatible, unlike UNIX which has so many variants. In reality, porting from one version of Windows to another is often more difficult than porting from one UNIX or Linux system to another. WIN32 calls on one system are not necessarily implemented on other systems, or even worse, they behave differently, requiring completely different code strategies.

That is not to say that there aren't things that I like about Windows NT. The handle model is good, but would have been better if Microsoft had done it right. The separation of dynamic libraries into interface and implementation also makes sense. Putting the libraries in the same directory as the executables is an improvement over UNIX, since this allows executables and libraries to be searched for simultaneously.

The best advice I can give is to not use Windows unless you have to, or if you have to, use a UNIX operating system on top of Windows like UWIN or Cygwin.

One insight that I would give about UWIN is that it proved to be far more difficult than I had anticipated.

Question 9) Ksh Programming For the Web
by Dom2 (dom@happygiraffe.net)

How do you feel that ksh holds up for web programming? I have always enjoyed programming shell scripts more than anything else, but I have always been unhappy with the shell idioms for parsing securely and correctly (the myriad of substitution operators is a nightmare to control). This is one area in which Perl has really taken the lead. How do you think shell programming could be better adapted for the web?

Also, how do you feel about most commercial Unix vendors (and projects like pdksh) that are still shipping ksh88 as the default, 13 years later? ksh93 has many more useful features, but the take up has been slow.

Thanks,
-Dom

Korn: One of my biggest disappointments is the slowness of adoption of ksh93. At the time ksh88 was released, AT&T was a primary UNIX vendor and was able to put it into the standard product. This led to relatively quick adoption by most vendors.

Shell features like here-documents are very well suited for CGI scripts. However, ksh93 is far better as a web scripting language than ksh88. There is not an efficient way to handle the myriad of substitution operators that get passed into a CGI script with ksh88 or any other shell. ksh93 does this rather efficiently with a 'set' discipline which converts the CGI command line into a single compound variable for manipulation by the script. A 'get' discipline converts it back.

The kornshell.com page has the code for a script that can be included in a CGI script in which all the arguments are mapped into shell variables.
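
The general shape of the discipline mechanism that script relies on looks like this (an illustrative sketch with a hypothetical variable name, not the kornshell.com code itself):

  # A <name>.set discipline runs on every assignment to <name>;
  # a <name>.get discipline runs on every reference. Inside the
  # discipline, ${.sh.value} is the value being assigned or returned.
  typeset query_string
  function query_string.set {
      print -u2 "CGI input is now: ${.sh.value}"
  }
  query_string='name=dgk&shell=ksh93'   # triggers query_string.set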

Also, recent versions of ksh93 have added %H to printf so that strings can be expanded using HTML/XML &#xx; entity form whenever necessary.
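
For example (a sketch assuming a ksh93 recent enough to have %H):

  printf '%H\n' '<b>AT&T</b>'    # prints: &lt;b&gt;AT&amp;T&lt;/b&gt;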

The http://www.research.att.com/sw/download website is maintained by ksh scripts in combination with AT&T nmake, and all CGI scripting is done by ksh93.

Question 10) Graphical dtksh, zend, standardizing on ksh93, rpm
by emil

a) Since the code to Motif is now free and open, is there any possibility that dtksh, the ksh93-compliant CDE shell with Motif extensions, will be open-sourced? Novell wrote it; will they give it away?

b) Any possibility that you could leverage Zend from php? Or what would it take to implement a dbi-like SQL layer for ksh?

c) Under Solaris and HP-UX, ksh88 is installed in /usr/bin/ksh, ksh93 is installed in /usr/dt/bin/dtksh, but the default shell is the "Posix" shell, a superset of ksh. Is there any hope of getting this mess straightened out?

d) Can I ever expect Red Hat to include an RPM for ksh93? Or would you consider merging with Bash?

Korn: a) I can't say whether Novell will give dtksh away or not. They own the ksh93 extensions and it's up to them what they do. I don't think that Motif is either free or open, since my understanding is that it cannot be included in commercial products. On the other hand, tksh, which combines ksh93 with tk graphics, is freely available. In addition to interpreting shell syntax, tksh can interpret tcl syntax as well.

b) I am not familiar with Zend from php. ksh93 is very extensible. Builtins can be added to ksh using the same calling convention that tcl uses. There is an API interface to shell variables and other shell functions that can be used with built-in commands. If Zend from php can be added to tcl, then it can be added to ksh93.

c) Since ksh88 is not fully POSIX compliant, some system vendors have modified ksh88 to make it compliant and used that for their POSIX shell. One way to clean up this mess is to get all the vendors to move to ksh93. ksh93 has a single source that compiles on all systems, from PCs and Macs to UNIX systems and mainframes. I have no say over what vendors do, but users on these systems certainly can state their preferences.

d) I would hope that Red Hat and other Linux and UNIX vendors would pick up ksh93 and distribute it at least as /bin/ksh. There is no reason why it could not be used as /bin/sh as well and I suspect that it could be merged with bash. We will soon be offering RPMs for the entire AST library from the AT&T website.

Question 11) UWIN and Cygwin
by Lumpish Scholar (psrchisholm@yahoo.com)

How would you compare the UWIN and Cygwin projects?

Korn: I believe that UWIN and Cygwin started at about the same time, although I don't believe that either knew about the other for the first year or two. The focus for Cygwin seemed to be primarily GNU tools rather than a more complete integration. The primary focus of UWIN was to make the AST tools, described in the now out-of-print book "Practical Reusable UNIX Software," run without change on Windows NT and Windows 95.

Cygwin has the advantage of being an open source project, whereas UWIN is not, although hopefully this will change in the not-too-distant future. However, UWIN binaries have been freely available for educational and research use, and available for 90-day evaluation for commercial use. As a result there are more than 100,000 UWIN users, and this has helped in discovering problems. The source for nearly all of the UWIN commands is available in open source.

Cygwin relies on the gcc compilation system. UWIN tries to be compiler neutral and works with gcc and Visual C/C++. Since it works with the Visual C/C++ compiler, it is easier to integrate UWIN code with native Visual C/C++ modules.

The website http://www.research.att.com/sw/tools/uwin/survey.html allows users to compare UWIN with other alternatives such as Cygwin in different categories. The feedback that I have received shows that in most instances UWIN receives higher marks than Cygwin, especially in performance.

Question 12) ksh93 as a programming language?
by Lumpish Scholar (psrchisholm@yahoo.com)

What are some of the differences between ksh88 (which I think of as comparable to bash or the Posix shell) and ksh93 that make the latest KornShell as good a language as Perl, or better?

Korn: In question 1 I gave a quick summary of ksh93 features that distinguish it from ksh88 and from other shells. For the most part ksh93 has the functionality of perl 5 and arguably a more readable syntax. In most instances the performance is similar. Scripts that interact heavily with the file system or use pipelines tend to be better suited for shell scripting.

Question 13) So, Dave ...
by An Unnamed Correspondent

Quick question: Vi or Emacs?

Korn: This is an old battle that was raging in the early '80s. It will never be settled. This is why ksh has both edit modes. It partly depends on whether you use the editor as a command or as an environment. I think of an editor as a command that I can quickly start and leave and I think of the shell as an environment which holds state. vi fits this model better and that is what I use.

I use several versions of vi depending on which system I am on. UWIN uses nvi from BSD.

Question 14) Public Apology
by watanabe (petervNOMEATivcfne.org)

Dear Dr. Korn, I feel I owe you an apology, and this seemed like a great chance! I went to school with Adam at Brown. And, in 1993, my roommate, Matt Smith, woke me up at about midnight, excitedly telling me "Do you know the Korn Shell? This guy's dad wrote the Korn shell!!!" He was coming in with Adam from somewhere.

I sort of turned over in my sleep (I was extremely groggy), said "I hate the Korn shell," and went back to bed. Adam never really talked to me after that, although he was polite enough to me at parties.

So, Dr. Korn, I feel I owe you an apology. I didn't mean to disrespect your lifework in front of your son! I still can't use the korn shell, but call it fear of the unknown, please, rather than lucid comments on your code.

Peter Vessenes, Brown '97

Korn: Since I could not recall the incident, I asked my son Adam to respond to this. Here is his response:

Peter,

Thanks for clearing things up! In my memory of the incident, we had woken you up that evening and I thought you said to me, "I hate the Korn smell." I was mortified! That was the first time anyone ever (publicly) accused me of having b.o., and your comment gave me quite a complex for the many months that followed. I avoided you at all social gatherings because I was scared to death that you might mention this in front of others. I am relieved to learn that this was all the result of a simple miscommunication.

To be honest, I have no recollection of this incident. I do appreciate your apology, but it is not necessary -- just as people are entitled to their choice of music, art, religion, etc, they too should have the right to pick their favorite shell. I'm just thankful I'm not the son of Bill Gates!

Best,
Adam


Our thanks to David for his responses, and likewise to Adam for his.

  • by Anonymous Coward
    Yeah, but he said something about Emacs maintaining state.

    Heck, I thought Emacs was a state.

  • by Anonymous Coward
    I can see it now. The next generation of sysadmins will all use ksh instead of bash because they all are KoRN fans.

    Oh, the humanity !!!

  • by Anonymous Coward
    My experience has mainly been with the Uwin team at AT&T, and to a lesser extent with the Global Technologies people. But both have been very helpful, and quick to fix bugs. I've mainly been using Uwin to provide a Unix working environment (especially shell scripting) to drive the Microsoft development tools. So I can see how our experiences differ. Programming at the level of shared memory stuff (which tends to cause problems just between different versions of *Unix*) is expecting something close to a miracle from Uwin.

    To me (having read the early white paper on getting a Unix environment under Windows), Uwin seems to do an almost miraculously good job, considering the difficult starting point. Being on the Uwin mailing list for years has been highly educational too. From that, it seems pretty clear to me that the bulk of the work in Uwin is achieving a complete mixture of workarounds to different sets of problems and bugs in the many different flavours of Windows, and presenting a uniform interface that hides all the ugly workarounds.

    The kind of thing I'm thinking of is the fact that the Microsoft implementation of popen() under Windows 95 OSr1 caused a floppy disc access! So using pipes in programs I'd written, you'd hear lots of grunting from the floppy drive. Uwin's pipes did not suffer from the same problem. They'd obviously worked around it.

    Luke Kendall
  • All that object-oriented stuff is yesterday's news, haven't you heard that the latest craze is this new concept called COMPONENT-ORIENTED PROGRAMMING?

    Only one problem: It was invented at AT&T Bell Labs in the early 1970's, and implemented as a research testbed called "Unix".

    Yeah, today's component-oriented programming uses structs and RPC rather than streams. But it's the same darned thing. In fact, I recently architected a commercial tape backup program as a series of what are basically Python scripts being run as remote commands via a specialized RPC server. It made things *MUCH* easier to test, because I could run them locally (on the tape server, without going through the RPC service) as individual commands with some test inputs and outputs, and thus verify correct operation prior to attempting to connect to them from the client.

    If you get a chance, I hid a file called 'TECHFAQ' in the Docs directory on the BRU Professional CD that explains more about the Unix philosophy and how well it applies to The Real World (as vs. to your Microsoft-centric world of threads and sending structs via RPC). I explain the performance disadvantages (minimal), the reliability advantages (extreme), and otherwise talk about The Unix Way (I say I "hid" the file there because until now, I haven't told anybody outside the company that it was there, and it's otherwise buried in a mass of .html files :-).

    BTW, this program was written in Python, and makes extensive use of Python's class system and introspection facilities (especially for the subclasses of the master "dbrec" class; many subclasses are about 10 lines of dbms record definition and that's it). That does not invalidate the Unix model, but merely extends it.

    -E

  • I had a similar thought. Bash the standard UNIX shell?????

    And thanks for the link about ed. That was hilarious.

    Geoff
  • I think that it's quite unlikely that anyone with traditionalist Unix skills at a decent level will have a hard time competing in the IT job market with Windows-oriented folks.

    Quite the reverse...

  • by ptomblin ( 1378 ) <ptomblin@xcski.com> on Wednesday February 07, 2001 @08:50AM (#449381) Homepage Journal
    The vi team won because the Emacs team wasn't used to only pushing one button/trigger at a time to issue a command/paintball. If there had been a Control-Alt-Meta-Cokebottle-trigger mode, then the emacs team would have won handily.
  • Yeah. That's why I chose vi as my "editor of choice". Those wimpy emacs guys. I just couldn't stand the thought of using the shell represented by a bunch of loser pantywaists.

    Anyway, "vi" is the heart of "evil".
  • It is extremely important to compile your /bin shells statically. That way you can recover in case you screw up /lib or ld.so.
    --
    Mike Mangino
    Sr. Software Engineer, SubmitOrder.com
  • While for AIX and others the default shell is ksh.
    --
  • If true, that's an interesting little problem,

    It is true. To my delight, David Korn sent me a mail explaining it. In a nutshell, it's deliberate. The original Bourne shell interprets " a | b " as "(a) | (b)". Bash and pdksh do the same. Korn thought this was counter-intuitive (he's right), so ksh reads it as " (a) | b ". It works for all shell builtins, not just "read".

    --
  • I would say that perl would be good for writing customized little utilities that can then be tied together in shell scripts.

    I'm *forever* using Perl for this. Anything where the natural shell approach would be 'cat file | x | y | z', and (say) z is something that is not obviously achievable with an existing UNIX tool.

    Thing is, it usually turns out there *is* an easy way to do z with a standard UNIX tool: only the other day someone pointed out to me that I'd reimplemented 'uniq -c' in Perl. At the time I was unaware of the -c flag to uniq.
    --
  • by slim ( 1652 ) <johnNO@SPAMhartnup.net> on Wednesday February 07, 2001 @06:25AM (#449388) Homepage
    Shame I missed the call for questions: I'm interested in a quirk of ksh and its clones:

    Consider this code:

    x="hello"
    echo "goodbye" | read x
    echo $x

    In both bash and pdksh, running the script would return "hello", because the "read x" would be run in a subshell with its own environment. The subshell would then close without affecting the instance of x in the main shell.

    In ksh, however, putting 'read' at the end of a pipeline does not seem to run it in a subshell, so running the script would return "goodbye" just as the naive reader of the code might expect.

    Why is this? Is an exception made for the "read" builtin? Is this a deliberate feature that pdksh and bash have failed to clone, or is it happenstance that it came about? I assume David is too busy to be reading, so does anyone else know?

    In our ksh-happy shop, we have a lot of scripts that rely on "echo $string | read x y junk" to parse space-separated lists, and of course these statements don't port well to Bash (you have to do a few backtick/sed operations).
    --
  • If true, that's an interesting little problem; I'll have to try that when I get to the office. On a par with how "find . -name * -print" works these days, when you used to have to escape the *. Also "tar xv *" works in some shells these days too.

    Must be progress.

    Regards
  • This would make "tar xv *" very dangerous then, as it will work differently depending on which directory you happen to be in. However, I'm pretty sure I can do it safely now, though I remember in days gone by I couldn't. To be on the safe side I still don't use wildcards when doing a tar extract, and I still escape wildcards when doing a find.

    My feeling is that shells should expand wildcards regardless.

    regards
  • Csh Programming Considered Harmful [perl.com] (sorry, it's technical, not humorous)
  • a few years ago at (I believe it was) Linuxworld. In an effort to finally settle the age old debate, there was a paintball battle between Team Emacs and Team vi. O'Reilly supplied team shirts, with the respective book covers on the front. There was a fair amount of trash talk on Slashdot before the event. Mind you, these were the early days of Slashdot, and most readers had actually used both.

    The match went to Team vi. It is worth noting that they fielded a larger team. To this day, I feel guilty about not attending that conference -- it's quite obvious that Team Emacs needed me.

    Ah...the earl(ier) days of Linux.

    Lenny
  • I was weaned on Unix systems many years ago and have been assimilated into the Windows world since. Reading this definitely makes me nostalgic for the well-defined old days. One good thing I can say for Windows, however, is that it is finally taking Korn's advice to allow via scripting (e.g. the Windows Scripting Host) most things that can be accomplished via the GUI.

    -- Brian
  • Well, I know of one seriously GUI programming language. National Instruments' LabVIEW. (If you install the trial version on a windoze computer, be careful, because it doesn't uninstall very well, IIRC. I don't know if the linux or solaris ports have trial versions available.) see www.ni.com

    I'm not a big fan of LabVIEW, being more of a text-and-command-line C programming kind of hacker. A couple of my friends are making huge dollars writing labview code for instrumentation and control stuff. One of them told me he likes labv so much because he's a visual person. He's a lot more right-brained than me; He's actually left handed. Labv is great for him.

    However, don't let marketing fool you. Labv is just another programming language. You design the data flow, rather than a set of instructions, but it is not fundamentally different from anything else. It's just a lang with a big library of many high-level functions. (If it wasn't for the library, it would be a nightmare to write anything...)

    Labview has big weaknesses. If you want stuff to happen in a specific order, you have to do a lot of extra work. If you want to read with timeouts and/or handle errors, things get a lot harder.
    #define X(x,y) x##y
  • . . that's only cuz once it's loaded there's no resources left for any other editor! ;->

    ** Martin
  • Dude, those are Sun SPARC versions. Says so right in the top line. The sizes on Linux systems are a completely different story.

    sheesh

    ** Martin
  • freebsd has ksh93 in /usr/ports/shells/ksh93

    all ya gotta do is cd there and type make install clean
  • I would have said that the ksh behaviour is the correct one. "read" is a ksh builtin and should not execute in a subshell unless surrounded by either $() or `` and in either case I would expect very strange results from that.

    Hmm ... I'm looking at this using ksh on a Tru64 Unix box right now .. I'll experiment more on my Linux box when I get home later.

    Macka

  • I see what you mean. I disagree that bash is executing "read" in a subshell, as "read" is also a bash builtin function. However I think that bash is broken here.

    Macka
  • by ch-chuck ( 9622 ) on Wednesday February 07, 2001 @06:20AM (#449401) Homepage
    you read it here, folks.

    (blast shields deployed)
  • Anything where the natural shell approach would be 'cat file | x | y | z',
    Watch out... you've just earned yourself a Useless Use of Cat Award [stonehenge.com].
  • That is called: some stupid moron compiled libreadline statically. Not amazing...
  • Ksh syntax more readable than perl... Yeah, right. I have read enough IBM install scripts and semi-insane scripts from commercial Solaris software to object. Well-written ksh is more readable than horrible perl, but it does not even get close to proper perl.

    Perl developers not using the unix system toolkit. Well... there is a reason for it, specifically related to temporary files, the system() call versus fork/exec, and so on. Most ksh programs I have seen (including parts of commercial software) do not take these subtleties into account. As a result they are full of root holes. Perl programmers have learned to take them into account the hard way.

    Ksh faster than perl. Seeing is believing. Unless we are talking about arithmetic, where perl traditionally does not cut the mustard...

    Ksh more extensible than perl... Well... Once again, seeing is believing... So far, just looking at CPAN, the answer is "bullshit".

  • chsh won't work? Sysadmin won't edit /etc/shells? Don't have permissions on /etc/passwd? Why not explore the wonderful world of csh programming on Solaris? Start with the following one easy step!

    1. Add the following to the end of your $HOME/.cshrc:

      if ( -x /usr/local/bin/ksh ) exec /usr/local/bin/ksh
      if ( -x /usr/local/bin/bash ) exec /usr/local/bin/bash

  • Disclaimer: I hope this doesn't become a shell war thread... :-?

    Damn your eyes for that suggestion! You make outrageous inflammatory statements like "csh is OK" and expect to be let off the hook? I call on all followers of the One True Shell (whichever one it is for you) to declare war!

  • Yeah, that jumped out at me. Some people have difficulty with the concept that there are other UNIces than Linux (for which bash seems to be really common; after all, it's GNU).

    -lx
  • The kornshell.com page has the code for a script that can be included in a CGI script in which all the arguments are mapped into shell variables.

    Now that just strikes me as a terrible idea. Bad things happening to shell variables is exactly why you should not write CGI's using a shell.

    Really. If you don't yet believe the awesome power of unchecked variables, then read Perl's security docs [perl.com] and learn about Taint mode. Check the FAQs to see a small example of how often poorly checked input in CGI leads to a compromise. Taint mode specifically is what makes Perl my preferred language for CGI (among my primary languages: C, Perl, Python).
  • You sir do not understand the UNIX Philosophy. You have no business using UNIX to say such things.

    Enlighten me. Explain to me what the Unix philosophy is. I myself was merely paraphrasing the philosophy Korn expressed in his answers to the slashdot questions.

    Better solutions?! I would argue that GUI and console are apples and oranges.

    I am not talking GUI versus console apps. You can write both GUI and console apps in Java. Java has a much better solution to code reuse than the one Korn appears to love. So does Python, or even Perl for that matter.

    I would also agree with Korn that the GUI and shell belong separated at all costs.

    I would argue that if people made good GUI tools, I would not need a command shell. And actually, for the most part I almost never use one. Though I used to find them very convenient, I simply do not have the time or the spare brain cells to waste on remembering the arcana of a 'find' command when I can alias control-F to pop up a nice GUI dialog that has tons of options and will nicely sort the results.

    Does this mean I am decades behind the times? That sounds like something Microsoft would say.

    Sure does sound like something microsoft would say. But I didn't say it.

    But I am a younger programmer. I don't know how 'old fart' Mr. Korn is, but it's safe to say that I am at least a few decades younger. Would this be backward evolution by your standards?

    Not sure what I'd call it. Certainly not backward evolution. But you will eventually find yourself unable to compete with those who take advantage of newer technologies.

    -josh

  • by joshv ( 13017 ) on Wednesday February 07, 2001 @11:15AM (#449410)
    I understand what he is talking about in terms of building new tools based on reusable 'toolkits' of shell commands, but I think Korn's ideas on code reusability are incredibly out of date.

    The tools he talks about operate on text streams which are piped from tool to tool. There is no structure to the text streams, and it is up to every tool to figure out on its own what the stream contains. Is it HTML? A text file? /etc/passwd? To top it off, each tool has its own arcane syntax and combination of command line switches: a support, training and maintenance nightmare.

    This just does not strike me as a particularly good way of doing things. It was an excellent solution to the problem of reusability back in the days when hardware was slow and memory was limited. But today we have better solutions, and the only reason I can see to use these older tools is for the sake of backward compatibility.

    The idea of building a web site using shell scripts strikes me as patently absurd. Sure it can be done, but why, when there are much better tools for the job - and all of these tools have solved many of the code reuse problems of the past in a much more elegant fashion than the Unix 'toolkit' has.

    His comments on modern GUIs show a certain naïveté about the state of the art of GUI design. Both KDE and Gnome have extensive object models, and their own component and application toolkits which support code reuse and the building of your own personal 'toolkits'. His critique of GUIs might more appropriately be levelled at older Motif applications, where every app truly is an island unto itself.

    It just seems that Korn is living a few decades behind the times. He deserves respect, but his ideas are anachronistic at best.

    -josh
  • ... what is by far the coolest thing in the whole interview:

    All the built-ins to ksh93 can generate their own manual page in several formats and locales.

    That's almost enough to get me to start using it. Thank you, David Korn. It'd be a pleasure if other authors took this idea and ran with it, especially on systems with the more brain-damaged documentation (you know who you are.)

    (jfb)
  • Not that I can answer your question, but I'd like to help narrow it down a bit. I'd also, merely for curiosity's sake, like to know the answer.

    So, that being said: Are you using ksh93 or an older ksh88-based shell?

    Just from reading the above responses, it appears ksh93 has a much richer scripting environment. You mention that pdksh, which is based on ksh88, processes the statement as bash does. I would assume from this that you're using ksh93, and the difference is most likely due to the extended scripting functionality. That functionality would not be present in pdksh or bash, since they are not based on ksh93.

  • Not a "viitor". Not a "emacsitor". Those aren't even WORDS!!!!

  • Check out "Perl Power Tools: The Unix Reconstruction Project, [perl.com]" where many of the Unix utilities have been written in Perl. In addition to being kinda neat, they make for good mini tutorials for Perl. So far there are implementations for:

    addbib apply asa ar arch awk basename bc cal cat chmod chgrp chown clear cmp colrm comm cp cut dc deroff diff dirname dos2unix du echo ed egrep env expand expr false fgrep file find fold from grep glob head id join kill ln look ls mail make makewhatis man mimedecode mkdir mkfifo mv od par paste patch ping pr printenv printf pwd rev rm rmdir shar sleep sort spell split strings sum tac tail tar tee test time touch tr true tsort tty uname unexpand uniq units unix2dos unpar unshar uuencode uudecode wc what which whois xargs yes

    Cheers,

  • In our ksh-happy shop, we have a lot of scripts that rely on "echo $string | read x y junk" to parse space-separated lists, and of course these statements don't port well to Bash (you have to do a few backtick/sed operations).

    Actually, you can still do it with shell builtins. Save off the current argument list ($*) if necessary, and then:

    set -- $string
    x=$1; shift
    y=$1; shift
    junk="$*"

    I have to agree, though, that the ksh implementation is better.

  • Does anyone know which OSen (or Linux dists) have ksh93 installed with them? Even if it's installed as (/usr)/bin/ksh93 it'd still be nice to see it get out there and get used, since it sounds like it has some really cool features.

    ---
  • An assumption like this should earn him /bin/ed as a standard shell... :)

    While quite to the point, I would advance this shell as more fitting: c:\windows\progman.exe.

    --
  • Better than c:\windows\explorer.exe, though.

    --
  • Someone forgot to close the BOLD Tag!

    Pope

    Freedom is Slavery! Ignorance is Strength! Monopolies offer Choice!
  • Hey hey! I already apologized in a sub-post of my question.

    Consigning me to progman.exe for my naivety is just plain cruel!

    :)
    --
  • that particular tasteless joke is in fortunes2-o.

    this raises the question: why the hell are they including offensive fortunes at the bottom of the page? And yes, "offensive" is the official designation; they are separate from regular fortunes, requiring an additional parameter to the fortune command.

    (besides, there are much better offensive fortunes than that. Just rot13 it and search for the word Radcliffe in a Jabberwocky parody.)

  • Well, I guess I must be the first to actually post something serious about the Korn shell. Having been forced to program in pdksh for lack of access to ksh93 in Debian, I wonder where one can actually get it?
  • I hate to dilute a humorous comment with dry commentary, but I thought the answer he gave there was one of the most even-handed and fair responses I've ever seen to that question.

    I personally use Emacs most of the time as I like the environment it provides for generating code. However, I also have a good working knowledge of VI because there are a number of times when I want to use an editor as a tool to get a job done quickly.

    That said, I have to mention that if you set up Gnuclient for Emacs, you get to have your cake and eat it too: you can start and dispatch an Emacs editing session just as quickly as VI by attaching to an already-running Emacs. So for machines that you've customized, Emacs makes for as good a tool as an environment.
  • by watanabe ( 27967 ) on Wednesday February 07, 2001 @06:20AM (#449424)
    Dr. Korn,

    I'm chagrined to see that you and your son are more gracious and funnier than I am. All the best; I think it's clear I need to learn to use your shell.

    - Peter Vessenes

  • by hey! ( 33014 ) on Wednesday February 07, 2001 @08:45AM (#449426) Homepage Journal
    I think the issue is that programming is largely a verbal, logical thing rather than a visual one. I've yet to see a visual programming environment that doesn't really boil down to a "so what" at the end of the day. IBM's VisualAge stuff is fairly nice, but mainly because it gets out of the way in a hurry when you really need to get down to brass tacks.

    I think the shell as programming environment is a kind of curious historical accident that worked out really well in conjunction with a broad variety of filter-ish applications. If you think about it, there is little reason why your shell should be your primary glue language for all kinds of applications, other than that you happen to spend a lot of time typing in it if you are on a non-GUI system.

    If you relieve the shell of having to be your primary mode of interaction with the file system and for launching applications, you can look at various applications as repositories of objects to be used rather than processes to be invoked. Yes, I know you can do more with shell scripting, but this is the sort of task shell scripting is natural for: invoking a series of small tools that perform simple kinds of transformations on a file system or data stream. These are highly useful, but they aren't the entire universe of scripting applications.

    It's been a few years, but the nicest example of system scripting I've ever seen is AppleScript. In MacOS, applications tell the system about objects and methods that they are willing to expose. They are essentially self-documenting. These objects can be manipulated through low-level APIs, but they can also be accessed by AppleScript via a form of message passing. AppleScript was a very small, simple and relatively clean scripting language. It had various sugary constructs that allowed you to make script lines read like English sentences, but they were tastefully done and they weren't obtrusive if you liked things terse. Apple provided a small interactive editor/interpreter environment (vaguely a la WISH or IDLE but less funky) which could inspect the self-documenting features of the applications' objects. Scripts could then be saved and run like Unix shell scripts.

    AppleScript was sort of like a cleanly done VBA, but its real strength was that it was a tidy way to coordinate a number of processes whose lifespans were not related in any preconceived way to the lifespan of the script, and to interact with them in a very flexible way. In that sense it was kind of like working with application COM services. Integration with the Finder gave you the kind of shell script capabilities you'd expect in Unix.

    I've been out of the Mac world now for six or seven years, but they were definitely way ahead of their time. AppleScript was a really valuable technology that was hurt by Apple's failure to deliver OpenDoc; at one point Apple was even going to abandon it.

  • you are just as wrong as they are; it is the STRONG tag. :p

    they have a ...
    how sad that our fearsome, ahem, ...leaders can't verify HTML code before throwing it on the front page.

  • It probably has less to do with screen re-drawing and more with memory usage; on a loaded server, vi would feel smaller for this reason. Even vim is much smaller than emacs.

    Another pro-vi observation is that it's on virtually every machine out there - out of the box. That's a big plus.

    Boss of nothin. Big deal.
    Son, go get daddy's hard plastic eyes.

  • In vi editing mode, type ESC Ctrl-V

    In emacs editing mode, type Ctrl-V

  • I would say that perl would be good for writing customized little utilities that can then be tied together in shell scripts.

    You know, any little program that goes:

    (simple filter-type perl program deleted, because SLASHDOT LAMENESS FILTER SUCKS! Oh well.)

    and does funny stuff with the input and sends it to the output. (If you are looking at things from a unix point of view.)
  • Nah, what was meant was that whole state populations are required to maintain emacs.
  • Hmm, that sounds handy, I'll try it right away. Hint:
    18:52:38 appel ~$ gnuclient
    bash: gnuclient: command not found
    18:52:40 appel ~$ apt-cache search gnuclient
    18:52:46 appel ~$ emacsclient foe
    Waiting for Emacs...^C
    18:52:50 appel ~$ zgrep gnuclient /root/Contents-powerpc.gz
    usr/bin/gnuclient editors/gnuserv
    18:52:58 appel ~$ su -c 'apt-get install gnuserv'
    This looks fine: why didn't you tell me before? It seems that I won't need vi for quick stuff anymore! Too bad that (server-start) and (gnuserv-start) bite each other; we'll probably have to patch gnuclient also to enjoy our 31337 Point 'n Click [lilypond.org].
  • I was at David Korn's talk at NYLUG.org and I believe he said there were a few issues, related to AT&T's lawyers. As I recall, RMS has a problem with the requirement to check the site periodically to make sure the code you previously downloaded hasn't come under patent attack by a third party.
  • No, no, it's "'i' before 'e' except when it's not."

    Works every time.
    --
    Obfuscated e-mail addresses won't stop sadistic 12-year-old ACs.
  • Even though I am mostly a vi person, I sometimes use Emacs--don't let people know. :)

    Actually, I prefer a clone which might suit you as well: Zile [sourceforge.net] It is quite small and quick compared to Emacs.
  • Basically what David Korn says is that all the good shells (or at least the popular shells) are similar in terms of interactivity: bash, tcsh, zsh, ksh, and a handful of others. His selling point is that Ksh has far superior scripting capabilities to the other shells.

    As a shell, I personally prefer zsh because the interactivity is a little more customizable than ksh. As a scripting language, I personally prefer Perl because the scripting is more syntactically flexible. (That's a nice way of saying that Perl lets you write both readable code and slash code.)

    I won't claim that ksh scripts are inherently messier than a dedicated scripting language. Perhaps I've only seen bad examples of ksh scripts, and I've seen my share of bad perl scripts as well. I will claim, however, that there are other scripting languages which are more expressive than ksh and other shells which are more interactive than ksh.

    So I think it's great that ksh scores a 9 out of 10 on interactivity and an 8 out of 10 on scripting, but I can't help but think that Ksh has bridesmaid syndrome-- always second to something else. Sure, you can use your shell to script. You can use your emacs text editor to read news, mail, and play games if you want to. Personally I'd rather have more functionality in each specific area, be it interactivity or scripting.

    -Ted
  • At Motorola, I worked on a project where we wrote a simulator in Perl to interact with a complicated piece of Cell Phone hardware. The hardware ran compiled C code. We once found a bug where the perl simulator sent network packets so fast that the C code crashed from overflow! We actually had to cripple the (perl) simulation to send code downloads slower so the system wouldn't crash.

    Incidentally, badly written ksh looks a lot like badly written perl. I've never seen well-written ksh, but I know Perl is a far more expressive language, so logically Perl code has more potential to be readable.

    -Ted
  • I once had the displeasure of working with UWIN in a corporate setting. I had just been transferred to a group that produced a tool consisting of 4 posix-compliant C programs threaded together using Ksh scripts as glue. No disrespect intended to Ksh itself; the whole code base was an utter and complete pile. (As for functionality, the optimal solution involved 1 perl script, no C code or ksh, and it would have taken me a few weeks to do it.) The code took some 2 or 3 years to write, presumably because they used all sorts of great but unnecessary posix features like shared memory maps. Apparently the code really did run fine on a system like Solaris.

    Anyway, everyone who worked on the project left the group within 6 months, but the PHB hadn't figured this out. The product was 2 years overdue for its Solaris to Windows port. The engineers decided to use UWIN rather than rewrite for Windows because they thought it would be less effort. Those engineers had since left. What we discovered working with the Global Technologies company supporting UWIN was that UWIN didn't properly support most of the strange features our code used. Roughly 95% of the problems were due to Posix incompatibilities in UWIN. I would routinely spend 2 weeks talking with Global Technologies, pointing out that the bug really was their fault and why the solution was NOT for us to change our code to match their buggy behavior. After they hacked together their changes, they didn't even bother testing whether the change left their code Posix compliant, expecting us to test. After that, they'd say it must be our fault it doesn't work, and the cycle would repeat until I could show where else they messed up.

    A few months later I left for a group working on useful things.

    Anyway, UWIN is a great idea, but the people developing it are completely incompetent. (A few times it turned out that I knew more about POSIX features than they did, and I'm just a standard software engineer.)

    -Ted
  • I made the sig before Slashdot shortened the maximum sig length, and I never bothered changing it. The official quote is:

    "Like the situation at ski resorts of young women looking for husbands and husbands looking for young women, the situation is not as symmetric as it appears."

    -Ted
  • ksh is quite cool, as it is much more compact than bash; here are their respective sizes on a Solaris system:

    -> /usr/local/bin/bash -version
    GNU bash, version 2.02.0(1)-release (sparc-sun-solaris2.6)
    Copyright 1998 Free Software Foundation, Inc.

    -> ls -la /usr/local/bin/bash
    -rwxr-xr-x  1 bin  bin  3157516 Jul 14  1998 /usr/local/bin/bash

    On a Linux system, these are approximately 300k for bash and 160k for (pd)ksh.

    That's *not* 300k. That's 3 meg ;)
    --
    Too stupid to live.
  • I used the Korn shell extensively for programming back in the '80s and early '90s, and one of its nicest features was that it made it possible to implement clearly and cleanly things that were otherwise very annoying. Parsing argument lists is a common task, and ksh is good at it. Most of that was ksh88, but some was earlier ksh versions.
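    For instance, the getopts builtin keeps that kind of parsing clean. A minimal sketch (the -v and -o options are invented purely for illustration):

        # Parse a -v flag and a -o <file> option, then shift them away.
        verbose=0 outfile=
        while getopts "vo:" opt; do
            case $opt in
            v)  verbose=1 ;;
            o)  outfile=$OPTARG ;;
            ?)  print -u2 "usage: $0 [-v] [-o file] arg ..."
                exit 2 ;;
            esac
        done
        shift $((OPTIND - 1))   # whatever remains are the operands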
  • Yesssss...

    vi rules!!!
  • Ohmygod! Are you *serious*?
    I just have these Linux computers here, and my Linuxirix SGI and Linuxnetbsd PC on the desk behind me!

    Isn't it funny how Linux's buzzword popularity has pushed other UNIX vendors to offer Linux support on their systems?
  • Try nano sometime. I first heard about it a year or two ago; I believe it was an attempt to rewrite Pico under the GPL. There are a few nice things in it, like a search-and-replace function, plus it's not tied to Pine (which my OpenBSD won't install from ports without editing the makefile; something about being unsafe or some such bother).
  • It would depend on whether Emacs was brought up in X-windows mode or in no-window mode. It is possible to get Emacs binary distributions both with and without X-windows support.

    If the version of Emacs you are using has X-windows support, it is possible to turn this off (emacs -nw); this can greatly speed up editing over a slow link, as the X packets (e.g. redraws, mouse movements, etc.) don't have to be sent, just the changes to the text screen.

    Rohan

  • by Rohan Talip ( 91270 ) on Wednesday February 07, 2001 @12:42PM (#449446) Homepage
    I can't claim to have used ksh much, but that is because I have found tcsh to be the best interactive shell (so far) and generally just use the plain original Bourne shell (sh) or awk or Perl for scripting.

    However here are some of the features I like in tcsh:

    • set prompt = "%B%c2%b `whoami`@%m%$shlvl "
      #Custom prompt with the name of the current directory, user and server: very useful for sys admins of many servers with many roles

    • set who = "%B%n%b has %a %l from %M at %t on %w %D."
      set watch = (0 any any)
      #Watch who is logging in or out of the system, and from where

    • set autologout = (120 60)
      #So you don't accidentally leave terminals/connections open

    • set complete = enhance
      #Case insensitive completions, ".-_" as word separators, "-" and "_" considered equivalent

    • set autolist
      #List possibilities on an ambiguous completion

    • set pushdtohome
      #Make pushd with no args do a "pushd ~" (like cd does)

    • set cdpath=(. .. ../.. ~ftp/pub/downloads/{ftp,http} /somedir)
      #Just type "cd www.kernel.org" from anywhere and voila, "pwd" shows /home/ftp/pub/downloads/http/www.kernel.org

    • set listjobs
      #List all jobs when a job is suspended

    • set printexitvalue
      #Print non-zero exit values upon program completion

    • set ignoreeof
      #Don't kill shell when ^D seen

    • set noclobber
      #Don't overwrite an existing file when using ">"

    • set rmstar
      #Prompt the user before execution of "rm *" !! :-)

    • alias cd "cd -v"
      alias precmd /bin/echo ""
      alias + "pushd -v"
      alias - "popd -v"
      alias = "dirs -v"
      #Some useful aliases to show the directory stack when moving around, and to insert a blank line before prompts

    • complete cd 'p/1/d/'
      complete rmdir 'p/1/d/'
      complete set 'p/1/s/'
      complete setenv 'p/1/e/'
      complete unset 'p/1/s/'
      complete unsetenv 'p/1/e/'
      #Completions on aliases, shell variables, environment variables, directories, etc.

    • History searching and substitution

    • Redirection of stderr (cmd |& tee output), although I sometimes prefer the Bourne way of being able to select stderr independently
    I hope someone finds this useful, because I love tcsh; even though I am quite capable with sh, bash or ksh, I usually feel so hamstrung without tcsh that I install it PDQ if it isn't there already!

  • The true answer to "vi or emacs" is: BOTH.

    I start up emacs once a week on monday morning. It runs all week, firing up new windows as required.

    I run vi hundreds of times a day, for all the little things.

    Emacs is good at some things, vi at others. Use a tool for what it's good at; isn't that a direct corollary of the Unix philosophy?

  • Yeah, but the comments on scripting and testing are not wide of the mark; they are bang on. I may want to automate my output and run things automatically. Ensuring that the GUI is just one possible way of controlling a program (a command line might be another) is just good sense and good practice. It sits very well with the MVC (model-view-controller) idea.

    It means that I can run test scripts automatically to ensure that the internal functioning of a program is correct.

    I can also automate data processing (just imagine taking those hundreds of simulation runs, doing the data analysis, and plotting the results using a point-and-click interface). With a shell script, or Perl, or Ruby (my fav), you can automate this process: link the simulation tools to the analysis tools to the plotting tools. Even MS does this with VB (although it just isn't as pretty!).
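    As a rough sketch of that kind of glue in ksh (simulate, analyse and plot here are hypothetical stand-ins for whatever tools you actually use):

        # Run every configured simulation, then chain analysis into plotting.
        for cfg in runs/*.cfg; do
            simulate "$cfg" > "results/${cfg##*/}.out" || exit 1
        done
        analyse results/*.out > summary.dat
        plot summary.dat > summary.ps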

    regards

    tree_frog
  • I cut and pasted your code, and got 'hello'. I'm on Solaris 2.6. It's not clear what ksh we have, but I'm pretty sure it's not ksh93.

    So maybe this trick isn't so safe? Perhaps someday you'll upgrade something and your scripts will stop working.
  • I've used and installed/managed SunOS, Solaris, OpenBSD and Linux, and used (but not managed) QNX, FreeBSD and NetBSD. Only on Linux have I seen bash as the standard shell. csh is OK, but there is tcsh, which is as good as bash IMO. Yes, it does have filename completion and history. I use both.. :)

    Disclaimer: I hope this doesn't become a shell war thread... :-?

    Cheers...
    --
    "No se rinde el gallo rojo, sólo cuando ya está muerto."

  • by CptnHarlock ( 136449 ) on Wednesday February 07, 2001 @06:15AM (#449451) Homepage

    Background: the only shell I've ever really used is bash. Bash has always seemed to be the standard UNIX shell (or, at least, the standard default UNIX shell)...
    An assumption like this should earn him /bin/ed [hmc.edu] as a standard shell... :)

    Cheers...
    --
    "No se rinde el gallo rojo, sólo cuando ya está muerto."

  • Hmmm... I'd like to see a shell that provides a command-line, programmatic interface to some of the Linux desktop libraries (GNOME, KDE) in much the same way that ksh and its ilk provide one to the basic POSIX APIs of a baseline UNIX system.

    E.g., you could write a script to go searching through a gnome-vfs 'filesystem' or such...

    Or am I on crack?
  • by The Pim ( 140414 ) on Wednesday February 07, 2001 @07:42AM (#449453)
    Here are a Unix FAQ [mit.edu], a ksh FAQ [kornshell.com], and a Bash FAQ [mit.edu]. As a bonus, I found this pertinent discussion [netbsd.org] in the NetBSD bug database.
  • Remember: "i" before "e" except after "c" except for the word "weird."

    No, it's, "'i' before 'e' except after 'c' or when pronounced 'a' as in neighbor and weigh... oh, yeah, and that weird weigh too." But some people just aren't wierd... er, wired for spelling.

  • For the first time ever that I've used these three letters, it's actually literally true...

    LOL

    Thank you

  • Read the article again:

    is now just beginning to start showing up in Linux systems; for example the latest slackware. The source or binary for over a dozen architectures can be downloaded from (http://www.research.att.com/sw/download).

    Install it yourself....

    And what is up with this quote (from the bottom of slashdot - where do you guys get these????)

    Q: What do you say to a Puerto Rican in a three-piece suit? A: Will the defendant please rise?
  • Solaris 8 includes both bash and tcsh -- out of the box. No witty sig this time :-(
  • Perl developers not using UNIX system toolkits. Well... there is a reason for it, specifically related to temporary files, the system call versus fork/exec, and so on. Most ksh programs I have seen (including parts of commercial software) do not take these subtleties into account. As a result they are full of root holes. Perl programmers have learned to take them into account the hard way.

    There is, of course, more to it than that. The security is actually your least worry, IMO. You can screen arguments passed to programs. For instance, you can defeat buffer overflows being passed through your script by simply checking argument lengths. Of course, that involves some additional overhead.

    No, the reason that Perl programmers (or scripters or whatever you want to call them) tend to avoid calling external programs, or call them through modules, is twofold. First of all, those programs, while fairly small, may still do much more than you want. If I want to print a file to the screen, I don't use cat; it has code for handling pipes and whatnot. I open() the file, print it in a while loop, and close it. That way I don't have to deal with invoking all the excess nonsense, starting up an external program, and so on. Just because there's an external program there doesn't mean I want to deal with it.
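    The same point reads much the same in ksh terms, as a minimal sketch (the file name is made up): print a file with the shell's own builtins rather than forking cat:

        # Read and echo the file line by line; no external process involved.
        while IFS= read -r line; do
            print -r -- "$line"
        done < /tmp/example.txt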

    The other issue, of course, is that I don't care to deal with interpreting the output of another command. Mr. Korn, it seems (though I know this isn't true), would have us call external programs to get an IP address instead of calling gethostbyname(). I'm sure there's a better example than that, but hey, I'm in a rush. The point is, it's sometimes easier to reinvent the wheel, and your code executes faster, especially in low-memory (or slow I/O) situations.

    Ksh faster than Perl. Seeing is believing. Unless we are talking about arithmetic, where Perl traditionally does not cut the mustard...

    What I want to know is, "Faster how?" Startup? Total execution? File handling? Mathematics?

    Ksh more extensible than Perl... Well... once again, seeing is believing... So far, just looking at CPAN, the answer is "bullshit".

    I don't think ksh is more extensible than Perl either, though not because there are more extensions currently available; I don't think one is more extensible than the other at all. How could you say, for example, "Java is more extensible than C++"? They're both languages, you can write your own code for both, and both can tie into external programs. Hence, neither one is really more extensible than the other; it's simply a matter of what people are willing to bring to the language.

    This is the same way. Personally, I don't see ksh (ksh93 or otherwise) ever being as popular as Perl. Perl is just so easy, there are so many books on it, there are so many modules, et cetera. I haven't touched ksh93 (nor do I intend to; more on that later) but I can't see it being easier to write than Perl. Also, shell script, when it reaches a certain level of complexity, tends to look like ass. I don't know about ksh93, but since so many things were done for Bourne shell compatibility, I suspect it's the same.


    --
    ALL YOUR KARMA ARE BELONG TO US

  • Just my $0.02 on how wildcard substitution works in shells...

    In my experience, if you're in a non-empty directory and type "echo *" (no quotes), the output will be a space-separated list of all the files in the current directory (no dotfiles though). However, if you try that in an empty directory (or, more accurately, in a directory that only has dotfiles in it), remarkably enough, the output is a single asterisk.

    It seems that the shell will try to substitute files for the * wherever it can, but if it can't, it will pass the * straight through to the command underneath. This also works with other wildcards like the ? or the [] wildcards. FWIW, all these experiments were run under ksh.
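    Given that behaviour, a common guard in scripts is to check that the pattern really matched before using it. A minimal sketch:

        # In a directory with no match, the literal '*' comes through,
        # so test that each name actually exists before using it.
        for f in *; do
            [[ -e $f ]] || continue   # skips the unexpanded '*'
            print -r -- "$f"
        done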

    -VoR
  • Yes, but nobody asked him about Brother Korn, Japanese rapper and frequent Iron Chef judge. I'm personally disappointed.


    ----
    http://www.msgeek.org/ -- Because you can't keep a geek grrl down!

  • This is true and is a *very* good thing. I mean, hell, you can fire up vi on a Debian box even before the install is done.
  • by SquadBoy ( 167263 ) on Wednesday February 07, 2001 @06:43AM (#449462) Homepage Journal
    in the vi vs. Emacs deal. I have to agree. As a sysadmin type, I also make the choice based on a couple of other factors. Over a slow SSH connection, vi is faster and feels more responsive. The same goes for editing a couple of lines in a config file. But if I have to edit something long, or more than a few lines, I find Emacs better suits my style of working (this is somewhat because I learned an evil editor in my youth and did not see how good vi was until the last year or so).
  • Wrong.

    The default shell in both Solaris 2.6 and 2.7 is /sbin/sh for root and /bin/sh for other users.

    Maybe your site used csh, but Solaris gives you sh out of the box.
  • That advocacy for something as silly as text shells is an ugly thing.

    Let's go New Jersey Devils! :)

  • There was a fair amount of trash talk on Slashdot before the event. Mind you, these were the early days of Slashdot, and most readers had actually used both [vi and emacs].

    I can see it now: The big trash-talk flamewars of the future will be gedit vs. kedit.

    Bingo Foo

    ---

  • Watch out... you've just earned yourself a Useless Use of Cat Award.

    I've heard first-hand accounts of people trashing Very Important Files because they accidentally used a '>' instead of a '<'. Piping from cat, while gratuitous, greatly reduces the chance of error. Personally, I'm not gonna lose sleep over the wasted CPU cycles.
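    To make the failure mode concrete (the file and pattern names are illustrative):

        # Gratuitous but safe: cat can only ever read its argument.
        cat important.conf | grep setting

        # Leaner, but one mistyped character from disaster:
        grep setting < important.conf   # intended
        grep setting > important.conf   # typo: '>' truncates the file before grep even runs

    (Which is also one reason people set noclobber, as mentioned elsewhere in this discussion.)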

  • by Erasmus Darwin ( 183180 ) on Wednesday February 07, 2001 @07:26AM (#449467)
    Even perl programmers often miss the point, writing the heart and soul of the application as perl script without making use of the UNIX toolkit.

    I'm not quite sure what to make of his quote above. On one hand, I freely admit that I tend to think in terms of stringing various UNIX commands together to produce some pretty decent results with just a few pipe symbols as glue.

    On the other hand, I feel that doing so in anything intended to be portable and/or robust is just asking for grief. When I stick to constructs that exist only within Perl, I know I'm pretty safe. As soon as I start integrating external commands, I have to worry about checking return values, checking that the command even exists, escaping shell metacharacters, portability issues, and so forth. Sometimes it just becomes simpler to "reinvent the wheel", so that I've got a nice, predictable function to deal with.

    That being said, for quick hacks that aren't mission critical, external UNIX commands are quite definitely the way to go. However, there is often a valid reason not to go that route.
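    In shell terms, the bookkeeping being described looks something like this sketch (the $datafile variable is hypothetical):

        # Calling an external tool robustly: check it exists, check it worked.
        if ! command -v sort > /dev/null; then
            print -u2 "sort: not found in PATH"
            exit 1
        fi
        sort -u "$datafile" > "$datafile.sorted" || {
            print -u2 "sort failed with status $?"
            exit 1
        }

    Doing the equivalent work inside the language itself sidesteps all of that, which is exactly the grief being described.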

  • by update() ( 217397 ) on Wednesday February 07, 2001 @07:31AM (#449470) Homepage
    So Korn (the band) drinks Coors Light? I might have suspected...

    For those of you wondering where that apparently idiotic non-sequitur came from, it refers to this picture [att.com] on David Korn's web site. Roblimo, I appreciate your including my question but you could have edited out that line, especially if you're stripping out the HTML. ;-)

  • It is not an acceptable license, due to these statements from the FAQ:
    ---
    5. Can I modify the source and distribute it to someone else?

    Yes, but you need to package the modifications separately, usually as a patch file.

    6. Why must the original source be kept separate?

    We want everyone who receives the source to know what we have done vs. what changes have been done by others that we have not yet approved. This is primarily an issue of quality assurance.
    ---

    Put bluntly, only paranoid control freaks put this type of clause in their licenses (which is probably AT&T's fault). Nobody wants to work with such software because of the hoops you have to jump through to improve and maintain it.
  • by mojo-raisin ( 223411 ) on Wednesday February 07, 2001 @06:43AM (#449474)
    Once you become aware of the truth that Emacs is an operating system and that Lisp commands are your "unix tools", you will see the true light.

    Emacs makes all other editors worthless.
  • What he actually said is that vi is better suited to his style of computer use and is therefore what he chooses to use.

    Posting your conclusion as a quote is the same type of evil as saying "the Wall Street Journal said George W.'s Social Security plan does not add up" without including the information that they also said it delays the collapse of Social Security by only a few years compared to Al Gore's, and that the alternative savings plan could not be counted on to take up all the slack.

  • Actually, that's what /sbin is for. Solaris uses /sbin/sh as root's shell and for executing init scripts. (init even goes so far as to ignore whatever #! shell you stick on the first line and uses /sbin/sh.) The 's' in sbin stands for 'static', not 'secure' as most people seem to think. It's where you put your critical statically-linked binaries so that you can still use them (as the parent poster noted) if you lose the ability to link/access shared libraries. I don't know how many times I've seen binaries put in [/usr[/local]]/sbin just because they were suid/sgid root or were daemons that attached to privileged ports or the like.
  • Probably crack. But I bet it would be doable, though I doubt it would be like a normal shell. It would be nice to have...
  • There is, in a vague sort of way. There was some incident a couple of years ago where they somehow met (David Korn and KoRn) and laughed it up.
    There are some weird pictures of the pudgy nerd hanging out with the dreadlocked weirdos. Look at this [kornshell.com] for pictures of the band and David and weirdo fans.

    Hope that helps,
    Brant
