Interview with Knuth: TeX, MMIX/Crusoe 104

Pretender writes, "Donald Knuth is interviewed here, and the questions have more depth than you usually see in his interviews. Gets into a little nitty-gritty about TeX, fonts, and Crusoe's possibilities with MMIX emulation. The cartoon at the bottom is hysterical. "
  • by Anonymous Coward

    This week, Advogato had the pleasure and honor of interviewing Prof. Donald E. Knuth. He is the author of the TeX typesetting
    system as well as The Art of Computer Programming and a number of deep, insightful papers and books. The interview took place
    by phone on a rainy California winter day. The topics covered the freeness of TeX and its fonts, how TeX's innovations have slowly
    diffused into commercial systems, some history of math typesetting, the design of TeX from the beginning as an archival system,
    literate programming in the age of the Web, MMIX and the Transmeta chip, how to avoid generating inscrutable error messages,
    and taking the TeX ideas to a broader community. Read on to find out more about a remarkable person.

    Advogato: The first questions that I have are about free software. TeX was one of the first big projects that was released
    as free software and had a major impact. These days, of course, it's a big deal. But I think when TeX came out it was just
    something you did, right?

    Prof. Knuth: I saw that the whole business of typesetting was being held back by proprietary interests, and I didn't need any claim to
    fame. I had already been successful with my books and so I didn't have to stake it all on anything. So it didn't matter to me whether or
    not I got anything financial out of it.

    I see.

    There were people who saw that there was a need for such software, but each one thought that they were going to lock everyone into
    their system. And pretty much there would be no progress. They wouldn't explain to people what they were doing. They would have
    people using their thing; they couldn't switch to another, and they couldn't get another person to do the typesetting for them. The fonts
    would be only available for one, and so on.

    But I was thinking about FORTRAN actually, the situation in programming in the '50s, when IBM didn't make FORTRAN an IBM-only
    thing. So it became a lingua franca. It was implemented on all different machines. And I figured this was such a new subject that
    whatever I came up with probably wouldn't be the best possible solution. It would be more like FORTRAN, which was the first fairly
    good solution [chuckle]. But it would be better if it was available to everybody than if there were all kinds of things that people were
    keeping only on one machine.

    So that was part of the thinking. But partly that if I hadn't already been successful with my books, and this was my big thing, I probably
    would not have said, "well, let's give it away." But since I was doing it really for the love of it and I didn't have a stake in it where I needed
    it, I was much more concerned with the idea that it should be usable by everybody. It's partly also that I come out of traditional
    mathematics where we prove things, but we don't charge people for using what we prove.

    So this idea of getting paid for something over and over again, well, in books that seems to happen. You write a book and then the
    more copies you sell the more you get, even though you only have to write the book once. And software was a little bit like that.

    I think that's the model that software publishing generally comes from. There was a quote that you had in the
    "Mathematical Typography" essay reprinted in "Digital Typography" where you said, "Mathematics belongs to God."

    Yes. When you have something expressed mathematically, I don't see how you can claim... In the context, that was about fonts. That
    was when I had defined the shape of the letter in terms of numbers. And once I've done that, I don't know how you're going to keep
    those numbers a secret...

    Proprietary.

    I can conceive of a number that would be a million digits long and would be extremely expensive to compute, and once somebody
    knew that number, it would solve all kinds of problems. And I suppose that would make it a little bit harder to say that God already
    had given us this number, when it's a number that you can only discover by a tremendous amount of sweat.

    When I made that quote, I didn't think of such things.

    Fonts seem like a really interesting edge case for that argument, because a font is in some ways a mathematical formula,
    especially a TeX font, much more so than what came before, but it's also an artwork.

    Absolutely. It absolutely requires great artistry. So the other part of this is that artists are traditionally not paid like scientists.
    Scientists are supported by the National Science Foundation to discover science, which benefits the human race. Artists, or font
    designers, are not supported by the National Font Foundation to develop fonts that are going to be beneficial to the human race.
    Fonts are beneficial to the human race, they just don't traditionally get supported that way. I don't know why. They're both important
    aspects of our life. It's just that one part has traditionally gotten funded by a royalty type mechanism and the other by public welfare
    grants for the whole country.

    Perhaps that has something to do with the absolute necessity in science to have open access to the results of others;
    if you did science in a closed, proprietary framework, the disadvantages would be so clear.

    With fonts, it was pretty clear to me.

    Ok! That's a question that Federico Mena Quintero suggested. You've gotten a number of free fonts contributed by
    artists, in some cases very beautiful fonts, to TeX and to the Metafont project. In general, this has been a real struggle for
    open source development these days, to get free fonts. Do you have any thoughts?

    I think it's still part of this idea of how are the font designers going to get compensated for what they do. If they were like a scientist,
    then they've got their salary for doing their science. But as font designers, where do they get their salary? And musicians. It's just a
    matter of tradition as to how these people are getting paid.

    But how did you address those problems with the fonts that got contributed to TeX?

    In my case, I hired research associates and they put their fonts out into the open. Or else, other people learned it and they did it for
    the love of it. Some of the excellent fonts came about because they were for Armenian and Ethiopian and so on, where there wasn't
    that much money. It was either them taking time and making the fonts or else their favorite language would be forever backwards, so
    I made tools by which they could do this. But in every case, the people who did it weren't relying on this for their income.

    If we had somebody who would commission fonts and pay the font designer, the font designer wouldn't be upset at all about having it
    open, as long as the font designer gets some support.

    And you did some of that.

    Yeah. In fact, I worked with some of the absolute best type designers, and they were thrilled by the idea that they could tell what they
    knew to students and have it published and everything. They weren't interested in closed stuff. They're interested in controlling the
    quality, that somebody isn't going to spoil it, but we could assure them of that.

    Right. Working with the creator of the software.

    Yeah, if they didn't like the software, I could fix it for them.
  • Very technically I have been programming for at least 3 years now. I spent 2 years learning that dying language called Pascal (ever seen any apps w/source that used Pascal as a major effort?). I have been learning C++ for about a year. Genuinely I wish to practice "The Art" as you call it.

    However I cannot easily gain the equivalent of a PhD to read just one book (that is essentially unacceptable).

    Well, these aren't books on software engineering or (IMO) computer programming. They are books on algorithms. Both are computer science, but experts on algorithms and experts on software engineering are not identical sets. Sometimes a more sophisticated algorithm can save your butt as a programmer, so you should know what's out there. But Knuth's books are very detailed and broad reference works. This is beyond the level of what the average programmer needs to know most of the time, but you should know enough to be able to ask the right questions of the right people to get the algorithms you might need. Any decent computer science curriculum at a major university should provide you with this level of knowledge. So, to all those who are still students, my advice is to pay attention in your classes on algorithms and don't dismiss them just because you won't end up doing much programming in them. If you don't, you'll really limit the sort of software projects you can work on.

    Also, if your primary interest is in mathematics and algorithms, these are beautiful and fascinating books.

  • by Anonymous Coward

    I think Knuth's response says it all about why SGML was never as popular as XML is now:
    Prof. Knuth: I saw that the whole business of typesetting was being held back by proprietary interests, and I didn't need any claim to fame. I had already been successful with my books and so I didn't have to stake it all on anything. So it didn't matter to me whether or not I got anything financial out of it.

    The SGML world was one of big publishers who had millions to spend, and of software development companies who had millions to earn. It's funny how companies like IBM, where the generalized markup language was developed, couldn't see or develop the concept of SGML/XML for data. Data and not documents is the area where XML is of major prominence these days. Last year I heard Jon Bosak, the XML Daddy, say that the data side was the easy part and documents the hard part. I still haven't seen much progress on the document side of XML, so I suppose that's why TeX continues to live on!

  • This site can't be slashdotted already! Aw man...

    I need to get a copy of The Art of Computer Programming sometime. Then I can stop skimming it in Barnes & Noble. Better that than Knuth's Big Dummies Guide to Visual Basic. (I only got it for the TrueType fonts...) ;)

    Typesetting programs are a Unix tradition. (TeX -- hey, at least it's not roff.) But it's really interesting to hear about the internals from Knuth. I'm pretty impressed, that man makes anything sound interesting.

    I'm amused that Knuth had features in TeX that Adobe couldn't implement without just using the same algorithm. He's just the algorithm man... I guess people still write books in TeX because it works well, not just because they're really old. ;)

    Oh well, I really wanted to read the rest of that, but it's slashdotted now. Mirror? Someone? Please?
    ---
    pb Reply or e-mail; don't vaguely moderate [152.7.41.11].
  • One day the mouse stopped working on the NT Server at work. So I went into the devices control panel, selected mouse class driver, and clicked on (er, typed the keyboard shortcut equivalent to clicking on) the Start button, and got the error message 'The service could not be started because the specified file could not be found.'

    Er, which file, exactly, is that?

    Almost as clear as my personal favorite, 'A device attached to the system is not functioning.'
  • [ As an added bonus, I have TeX input files that I wrote in 1988 that still compile today. That's older than some /. readers! Word 97 can't even read Word 6.0 files. ]

    Neal Stephenson talks about this very problem in his In the Beginning Was the Command Line essay on www.cryptonomicon.com. He and I made the same choice - I went through all of the fiction I wrote in Word, along with my journal and other stuff, and saved every last bit of it as ASCII text.

    Unfortunately, file format rot will probably always be with us. I think that most developers think of file formats as intellectual property - if another program can read their files flawlessly, it's much easier for someone to switch to that other program, hence the user is locked in.

    I just discovered PageMaker's tagged text function. It's basically PageMaker's own markup language - you can import and export ASCII text with its formatting codes embedded. And it is *nice* - I love being able to automatically format huge chunks of text using Perl scripts. So I keep telling myself I'm going to learn how to use TeX - I'm beginning to understand why people use something non-WYSIWYG.

    Off-topic a bit, but - just to bash Word, Word can't even retain formatting moving from one computer to another. I'm the desktop publisher at a print shop, and Word files have to be proofed much more carefully than anything else (except for Publisher files). Text gets shifted from page to page, margins change - just selecting a different printer on the same computer causes text to reflow.
  • Gee, looking at more of the comics in the Doctor Fun link on metalab makes me wonder why I haven't seen this guy before. Some of them are REALLY funny.

    http://metalab.unc.edu/Dave/Dr-Fun/ [unc.edu]
  • well, while we wait for the advogato.org server to recover from the onslaught - speaking of inscrutable error messages - t'other day I tried to move an email into one of M$'s 'Outlook' folders and could only get something like "Operation Did Not Complete" or so - scratch head - hmmmm - go to folder and notice there are 16,383 items - Hmmmm! - were I a layperson that would look like a purely random number and I'd have to call for support, but being a computer genius I think: that's 2^14-1 ! The ol' M$ 14-bit limit w/ inscrutable error message trick. Why can't they say something sensible like, "This folder is full. Please move or delete some items"? I guess it was probably nixed by their superiors in the Mrktng dept.
  • Just so you know, AbiWord [abisource.com] has a LaTeX export feature.
  • I thought LP is where the code is written in
    documentation order, together with ample documentation prose.
    A preprocessor then formats the code as either a typeset document or as code in compilation order.

    For example, the critical action routine might be explained first, then followed by the overall work flow, then followed by the supporting memory usage and function interfaces. This may be opposite to code order.

    This way the documentation stays current with the code.

    Sometimes you find holes in your coding when trying to explain it in prose.

    Current practice is scattered comments throughout the code, plus secondary word processing documents that drift out of sync with the code.

    I don't think very many commercial outfits practice LP.
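
    [To make the description above concrete: a minimal sketch in CWEB, Knuth's literate programming tool for C. The toy program and section names are invented for illustration. Prose and code chunks appear in explanation order; ctangle later rearranges the chunks into compilable order, and cweave typesets the whole thing.]

      @ This toy program prints a small table of squares. The interesting
      part is explained first: producing one line of output.

      @<Print one line of the table@>=
      printf("%d\t%d\n", n, n*n);

      @ The main program is just bookkeeping around the section above;
      in the tangled output the named chunk is spliced into the loop.

      @c
      #include <stdio.h>
      int main(void)
      {
        int n;
        for (n = 1; n <= 10; n++) {
          @<Print one line of the table@>@;
        }
        return 0;
      }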
  • Here's another one.

    http://www.shopus.com/~mray/28.html [shopus.com]

    Enjoy.
  • Interestingly, this "budding theory" predates any computer language. Before a language may be conceived, we have to understand things like Turing Completeness, "regular languages", context/context free, lexicons, protocol, computational complexity, space/time constraints, discrete computation theory, etc.

    You're right, of course. My grad school teachers would smack me for making such an incomplete remark. :)

    Unlike engineering, computer science was _born_ in theory.

    I wouldn't say that. Engineering is tightly coupled with science, and all engineering is an extension of science (theory). Name any engineering discipline, and I'll tell you the sciences that predated it.

    In the past it has not been "See, here's how you program in C/C++/Java. And this is what a compiler and OS do. And here's 4 other things. Now go practice coding. Soon you'll be a scientist/engineer." That is, unfortunately, our current trend.

    Again, you're right. My frustration carries over with all these jobs being labeled Computer Science or Software Engineering, when the majority of it is neither... it's hacking. That's not bad at all, it's just not what is described. And I think it serves to confuse people who don't understand the CS field, when Software Engineer, Computer Scientist, and Hacker are all used interchangeably, when they really have different meanings.

    In the past we have seen great teachers such as Turing, Dijkstra, and Knuth. If anything, we should _return_ to the theoretical roots of computer science to provide understanding for our current practice.

    I agree. I think that as the field progresses (and programming becomes even more mainstream), the three titles I list above will play themselves out. Everyone will understand the difference between CS, SE, and haX0r. :)
  • Come on, there is lots of readable and understandable stuff in there which doesn't require a degree in CS. My background is in Business Administration and I was able to grasp a fair share of it, though significant portions are beyond me.
  • "The server returned an extended message..."

    (I'll show you a fsking extended message!!!)
  • I love Knuth's line, "Email is a wonderful thing for people whose role in life is to be on top of things. But not for me; my role is to be on the
    bottom of things." I should make it my .sig file... Oh, wait; I think I see a potential problem here.
  • >>Name any engineering discipline, and I'll tell you the sciences that predated it.

    Roman viaducts and aqueducts? I'm not sure how good the ancient Romans were at science.

    Between theory and practice, sometimes theory leads, usually practice leads, but there is a thin line between them which cannot be broken.

    Laplace transforms have been used by engineers for a long time. Adequate mathematical theory of such _may_ exist now.

    From Chapter 3, Random Numbers:
    "Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin.
    -- John Von Neumann (1951)
    "Round numbers are always false."
    -- Samuel Johnson (c. 1750)
  • It all goes back to the French Revolution and a meeting held on a tennis court on 17 June 1789. In defiance of the king, the deputies of the third estate met in an indoor tennis court. The group was divided over just how revolutionary or defiant they should all agree to be, and the ideological division begot a physical one as the two factions lined up on different sides of the court (right and left). Eventually they swore an oath (The Tennis Court Oath) to draw up a constitution.

    That's why politicians speak of The Right and The Left.

  • political liberals are described as being "on the left" and conservatives "on the right"

    I heard that was because in the House or Senate seating was arranged so that the more conservative party sat on the right side while the more liberal party sat on the left.

  • has anyone ever taken a class from him?

    Yes, but not me. I took a graduate class at Duke University from someone [duke.edu] who had been Knuth's graduate student at Stanford.


  • How's that for a sweet dream? For instance, when you develop major web sites, the customers more often than not want to download a generated document which they can print out and show to whoever. HTML won't work (e.g. page breaks), pure text won't either as it cannot be given a neat layout. Alternative? Generate some PS (hmm.. I need a link on how to do that) - too complicated. RTF? Ha! It was invented by MS and various companies add their own codes as they please. I'd love to see .tex support in editors. All they need is to be able to show .dvi or .ps; the compiling/rendering can be done server-side. But it would be really sweet with support for TeX as is, instead of darned RTF. TeX is reliable, usable and highly configurable. The problem with TeX support is that people cannot compile it unless they have the same .sty files, modules, etc. as the original author did. Surely a problem that can be overcome. I think I'll post this on AbiWord's wishlist, if there is such a thing.
    Have a nice weekend :)

  • Quit yer bitching. You've never seen hardcore math. Are you taking "proofs" calculus or "problems" calculus?

    The fact is, CS majors don't get much math at most universities. If you want to see math (in engineering) go to the mechanical engineering or EE departments. Then complain about your HS calculus. :-)

    Ryan
  • Before a language may be conceived, we have to understand things like Turing Completeness, "regular languages", context/context free, lexicons, protocol, computational complexity, space/time constraints, discrete computation theory, etc.

    What???????? This must be a troll, surely? Before a language may be conceived? That's a bit like saying "Before a child can understand English, you have to teach them grammar" - and the problems with that hypothesis are all too obvious.

    Well, anyway, I learnt to program without the benefit of theory. Sure, I did junky spaghetti code at first (what do you expect with a language like Mallard BASIC, whose designers thought that GOSUB was the epitome of structured programming! ;) but by now I've matured and I can engineer reasonably efficient object-oriented solutions. And all around me I see other CS students fed theory (it's quite a heavily theoretical course, unlike many others in the UK) which they have great difficulty understanding, and thus they write large programs consisting of one main method (!!!) with a great deal of duplication.

    The trouble is, theory by itself doesn't help - unless you've got a genius on your hands, in which case it probably doesn't matter how you teach them. People aren't empty vessels that you can just pour facts and theories into - education is an art too, and IMHO a *much* harder art to do well than programming.

  • You don't have to read it, or read it all, if it doesn't interest you. And for most coding, a deep knowledge of nonobvious algorithms is not really useful. But I think it is pretty rewarding to know about MIX and the timing of code. And every concept is explained carefully before you learn it, so you only really need a knowledge of high school algebra.
    Take a closer look at the foreword; it answers your questions about what you need to know.

  • Why use TeX when MS Word is so wonderful? Why use gcc when Visual Basic is such a joy?

    TeX rocks. If you're a coder, you should be using TeX. Where else can you produce a document with source code? Using reusable include files? Using \def statements and conditional logic (\ifthenelse)? Comments in the source? Plus, nothing (and I mean NOTHING) approaches the beauty of mathematical equations laid out in TeX. Your subscripts, Greek letters and limits-of-integration never looked better!
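
    [For anyone who hasn't seen TeX source: a tiny LaTeX sketch of the kind of thing described above. The macro name and text are made up; \ifthenelse comes from the standard ifthen package.]

      \documentclass{article}
      \usepackage{ifthen}            % provides \ifthenelse
      \newboolean{draft}
      \setboolean{draft}{true}
      \def\product{WidgetPro}        % a reusable macro, like a named constant
      \begin{document}
      % conditional logic right in the document source
      \ifthenelse{\boolean{draft}}%
        {DRAFT: \product{} manual, not for release.}%
        {\product{} manual.}
      And the math just works: $\int_0^1 e^{x^2}\,dx \approx 1.4627$.
      \end{document}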

    As an added bonus, I have TeX input files that I wrote in 1988 that still compile today. That's older than some /. readers! Word 97 can't even read Word 6.0 files.

    When I write a document, I want it to last. TeX is for the ages.

  • For those readers who haven't checked out the link yet, several people below have posted large chunks of the interview. It's not complete yet but I asshume it will be soon.
  • I'm required to take two calculus classes to get my degree here... which is OK by me with one exception.. we're led to believe that the calculator and Maple mysteriously find these anti-derivatives to functions which "have no antiderivative" when you use our hand methods of finding them. Of course anybody knows that ANY function has an anti-derivative.. but to find it takes too many calculations to do by hand.

    Whoa, be careful! Yes, it's true that "any function has an anti-derivative" (well, at least any function that isn't too weird). But it's *not* true that any function has an antiderivative that can be expressed in a sensible way in terms of elementary functions. Your teachers aren't withholding some kind of secret antiderivative-finding method from you! You really do need to resort to computational methods to find most antiderivatives.

    Why can't we have a class on calculus for CS majors alone? Teach us how to "correctly" find the answers to these things... with code... not goofy rules that apply to a small number of practical applications.

    That's all antiderivative rules (or integration techniques) are. They're "goofy rules that apply to a small number of practical applications". The silver bullet that you're asking for doesn't exist. There are, of course, lots of algorithms that estimate values of antiderivatives numerically--maybe that's what you're looking for. Those too have their limitations, and if you really want to learn about them that's a course in itself. But most calculus courses cover at least a few of those (Simpson's rule, etc.), and understanding the fundamentals of what derivatives and antiderivatives are and how they work is really more important for a first introduction to calculus.
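
    [Since numerical methods come up here: a minimal C sketch of composite Simpson's rule, one of the techniques such courses cover. The function names are invented, and a real implementation would add error control.]

      #include <stdio.h>
      #include <math.h>

      /* Composite Simpson's rule: approximate the integral of f over
         [a,b] with n subintervals (n must be even). */
      double simpson(double (*f)(double), double a, double b, int n)
      {
          double h = (b - a) / n, sum = f(a) + f(b);
          int i;
          for (i = 1; i < n; i++)
              sum += f(a + i * h) * (i % 2 ? 4.0 : 2.0);
          return sum * h / 3.0;
      }

      /* e^(x^2) has no elementary antiderivative, but definite
         integrals of it are easy to estimate numerically. */
      double g(double x) { return exp(x * x); }

      int main(void)
      {
          printf("integral of e^(x^2) on [0,1] ~= %.6f\n",
                 simpson(g, 0.0, 1.0, 100));
          return 0;
      }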

    But I'd like to second the recommendation for "Concrete Mathematics"; it's a wonderful book, and worth your time, even if you're not required to read it. And it's fun to read too--lots of jokes and interesting tidbits--just take it slowly and don't expect to be able to read the whole thing at once. The math it covers is particularly useful stuff that tends to fall between the cracks in most introductory mathematics courses, so it would complement your calculus courses well.

  • I have been using Latex for a long time and have found this system to be useful for complex typesetting tasks.

    It is particularly well suited for typesetting articles, by performing tasks such as cross referencing of bibliography entries, equation numbers, figure numbers, and the like. In addition, I feel that the Computer Modern fonts appear more professional than the fonts from the leading brand word processor.

  • Oh, but they look so lovely on the book shelf. Also they really, really intimidate project managers. Finally, when all else fails: if you throw one at someone, they stay thrown!

    Well you know you are intimidated when you can't even understand the symbols that are in the books.

  • The part on hypergeometric series has been mostly obsoleted by later work. See the book A=B (Amazon or FREE download).

    Thanks for the link I am checking it out now.
  • I organize my books in a special order with each slot denoting a different rank

    I keep my Knuth with my CD's. No particular reason... I just ran out of room in my bookcase.

  • When is The Art of Computer Programming going to be finished?
  • Good is all that's necessary then.
  • I'll second Concrete Mathematics. It's a rad book. As a person who does math for shits and giggles on Saturday morning (nothing like bong-hits, a cup of coffee and number theory) this book is the shit. Super fun.
  • Wow, thanks for the link. This book (A=B) looks awesome. Any book that gets props from Knuth has gotta be good.
  • The point of the books isn't "Dummy's guide to programming", it's "Wizard's guide to smart programming". 98% of all programmers don't ever need to know what's in these books. It's for that 2% of us who for one reason or another can't use the standard sort libraries, and need to know (intimately) what the different sort strategies and implementations are, and when and WHY to use one over another.

    For instance, do you know when to use a bubble sort instead of some variant of a binary tree sort? Didn't think so. And why, in certain cases, you would RATHER use a bubble sort than anything else? What's a stable vs an unstable sort? Most programmers don't care. But the use of the proper sort routine can speed up your program so that it runs in 1-5% of the time it might with a different sort routine.

    And why yes, I am a sort geek.
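
    [For the curious, a minimal C sketch of the point above: bubble sort with the early-exit test is stable, and runs in O(n) on already-sorted input, which is why it can beat cleverer sorts on nearly-sorted data.]

      #include <stdio.h>

      /* Bubble sort with early exit. Stable (equal neighbors are never
         swapped) and O(n) when the input is already sorted. */
      void bubble_sort(int *a, int n)
      {
          int pass, i, t, swapped;
          for (pass = 0; pass < n - 1; pass++) {
              swapped = 0;
              for (i = 0; i < n - 1 - pass; i++) {
                  if (a[i] > a[i + 1]) {
                      t = a[i]; a[i] = a[i + 1]; a[i + 1] = t;
                      swapped = 1;
                  }
              }
              if (!swapped) break;   /* no swaps: already sorted, stop */
          }
      }

      int main(void)
      {
          int i, a[] = { 1, 2, 4, 3, 5, 6 };   /* nearly sorted */
          bubble_sort(a, 6);
          for (i = 0; i < 6; i++) printf("%d ", a[i]);
          printf("\n");
          return 0;
      }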

  • I guess they did add it; the answer, however, was totally unsatisfying. WTF? Slashdot is cached on google [google.com]. And it could be in a box next to the story, not in the story, which would not mess up a site's statistics any more than going down would.
  • This reminds me of a thought I had the last time I got a page 1/2-loaded when it got totally slashdot'ed: that is, why doesn't slashdot cache these things? It's got the resources/bandwidth to do it, it would ensure that you could actually get the value out of stories, and it would be very very nice to the poor webmasters to whose sites /. links. Just cache the links posted in the stories, and keep the cache around for a couple days. It'd be especially useful for those stories that google doesn't have in its cache :)
  • They still use the Hennessy & Patterson book for the computer architecture class I took a year ago. Frankly, I didn't even buy the book when I was taking it. We had to design a MIPS processor in that class. Pretty cool.
  • That stuff about sorting? I did all this sophomore year in college. We used the Cormen, Leiserson, and Rivest book, Introduction to Algorithms. It's pretty practical, but has a lot of theory too. It's got all the algorithms you will ever need (why, yes, bubble sort too!).
  • Doesn't Crusoe have 128 bit (VLIW) instructions? Means the registers are 128 bit each. I think they combine several x86 instructions in one Crusoe 128 bit "instruction" (when they emulate x86 that is). The Transmeta CEO said something like that on that webcast Stanford EE380 lecture. URL is http://stanford-online.stanford.edu/courses/ee380/main.html if you want a streaming version of the webcast.
  • I'd post about the Knuth article, but of course I can't get it ;) Anyone with a mirror?

    GeorgieBoy posted a link to an ftp dir of Dr. Fun comics, but here is an HTML front end for them: Dr. Fun [unc.edu]

    S.

  • Hmm, I actually got a text transfer that didn't go instantly! This is kinda weird for me, it used to be that way back when I used a modem (although even that would have had to be 2400 baud or less to make this happen)

    Still, it's kind of interesting that it happened, don't most web servers hold people in a queue and then process their entire requests all at once per person? What kind of server does advogato use?

    Interesting...

  • I wish somebody would inform my university's CS department of this. I'm required to take two calculus classes to get my degree here... which is OK by me with one exception.. we're led to believe that the calculator and Maple mysteriously find these anti-derivatives to functions which "have no antiderivative" when you use our hand methods of finding them. Of course anybody knows that ANY function has an anti-derivative.. but to find it takes too many calculations to do by hand.

    Why can't we have a class on calculus for CS majors alone? Teach us how to "correctly" find the answers to these things... with code... not goofy rules that apply to a small number of practical applications.

    Or, for Heaven's sake, drop the calculus requirements altogether and just teach us REAL math when it comes to CS and nothing else... we need far more of it... I've heard many many people say that Knuth's books are wonderful and should probably be required reading for CS majors.

    Justin Buist
  • Exactly how many years has he been working on these things?

    Knuth started working on TAoCP as a doctoral student at Caltech in 1962. He was contracted by his publisher to write a book on compilers, and after four years of work he had a 3,000 page first draft written in pencil. With great foresight, the publisher didn't hassle him, but instead set up a schedule of release for seven volumes (Volume 4 is due in 2004, Volume 5 in 2009). The release of new volumes was of course slowed down since Knuth devoted 9 years to developing TeX. The "big university" where he is a professor is Stanford.

  • It would be a good thing if the next person who manages to download the entire page (whenever that happens) sees if s/he can mirror it somewhere faster than this. I'd do it but I don't think I have time to wait for the rest of it (I can't stay logged in here when I leave), and I don't have a good place to mirror it (if I put it up here the sysadmins would get very upset if our server gets slashdotted too..).

  • What the heck is REAL math?

    And how do you "Correctly" find the answer to these things?

    The algorithms Maple and Mathematica use to find anti-derivatives are based on fairly complex series approximations (Taylor series? Laplace transforms?). These are complex and there is no way to understand them without first understanding the simple stuff like integral calculus.

    There are plenty of "calculus for CS majors" classes. Typically they are called things like numerical methods. But again you can't really understand these things without first understanding the theory they are based on (differential equations and linear algebra).

    If you are happy with the solution Maple gives you, great. Don't take any more math. But don't assume that you can just read something like Knuth and get as much out of it as you could w/o doing the pre-requisite 3+ years of college math.
  • I'd like to take this opportunity to praise TeX. I learned TeX when I was writing my dissertation (nothing else does math worth a damn), and now I can't go back. Whenever I am forced to use something like MS Word (i.e. a WYSIWYG editor), I'm always cursing and fighting the interface. And it usually looks like crap. TeX is simply a superior typesetting solution.

  • I don't think the intent of that statement was that calculus is useless for computer science, just that other math is important too.

    Really, though, I think your argument speaks to the usual education debate between "depth" and "breadth"; i.e. how much you know in a specific subject vs. how much you know in a variety of fields. One is important to be skilled in your field; the other is important to be a well-rounded educated person.

    I'm at Harvey Mudd College, a science/engineering school. Even though I'm a probable math or CS major, I still have to take such "irrelevant" classes as chemistry, biology, and quite a lot of humanities and social science. Studying acid-base equilibria will probably not help me in any job I'm likely to hold in the future, but it will help make me a well-rounded scientist. And such classes as music theory, literature or economics will spread the learning around even more. Let's face it -- it's good to know stuff, all kinds of stuff. Who wants to be a famous computer scientist but have no idea what a sonata is, or what For Whom The Bell Tolls is about, or why the Federal Reserve is raising interest rates?

    On a specific note, your comment about antiderivatives calls for some response. Of course every function has an antiderivative, but not necessarily one that can be expressed in closed form. A classic example is e^(x^2). Try to integrate it by elementary methods and you will find that you cannot. But plug it into Maple and you get -1/2*I*sqrt(Pi)*erf(I*x), which is all very well until you look up the definition of erf, and find that it's defined in part as the antiderivative of e^(x^2)! So it's really somewhat of a circular argument.
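
    [For reference, the substitution behind Maple's answer, as a sketch in LaTeX notation with the constant of integration dropped. erf is itself defined by an integral, which is where the circularity enters:]

      \operatorname{erf}(z) = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2}\,dt
      % substitute t = iu, dt = i\,du, and note e^{-(iu)^2} = e^{u^2}:
      \operatorname{erf}(ix) = \frac{2i}{\sqrt{\pi}} \int_0^x e^{u^2}\,du
      \quad\Longrightarrow\quad
      \int_0^x e^{u^2}\,du = -\frac{i\sqrt{\pi}}{2}\,\operatorname{erf}(ix)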

    As for your desire for a class to "correctly" find the answers to these things... with code.. -- it sounds like you're talking about a course in computer algebra systems, like Maple. I think this would normally be a graduate-level course, and would certainly require a lot more of the calculus and "goofy rules" that you despise so much. Like it or not, math is the foundation of quite a lot of science, and if your math background is poor, you won't be much of a scientist.

  • TAOCP homepage [stanford.edu]

    Put it this way, volume 5 isn't estimated to be done until 2009, when work on vol 6 and 7 will start:)

  • Changing the subject a little bit, I never understood why for serious documents, available electronically, DVI hasn't become more popular. I see plenty of PDF documents, but they're big and the viewers are very slow.

    I don't know much about PDF, so forgive my ignorance, but it seems impossible to compose PDF documents in an editor. So, for web documents, the markup style of TeX/LaTeX fits with the markup style of HTML/XML. In fact, add hyperlinks and DVI seems so natural to put in the browser window. I understand Professor Knuth's remarks about browser sizes and so on, but let TeX adjust things for onscreen viewing. It certainly would be faster/smarter/better than the current implementations of redraws and table layout in HTML browsers.

    Am I not understanding something? Why do we even need HTML, MathML, or stuff like that? Why not add hyperlinks to TeX? Ok, maybe it's too late, but why wasn't it a good idea five years ago?

  • Knuth[123]>Bible
  • Thanks for that mirror...pray that it doesn't go to /. frontpage... :P

    While I was reading this article, I felt the sudden urge to copy it into my /usr/share/docs . Stuff like this should be preserved for eternity. If you leave it to obscure little websites (as in "I don't know it") to archive and keep it accessible to the public, lots of words of wisdom will be forever lost. IMHO that'd be a real pity. And a nightmare to historians of future ages. It's like nothing of importance is preserved from our time. Aside from commercials, of course (ever watched Demolition Man?).

    Now here's my question. Does anybody know of a central archive to keep things like these? Or should I store this thing locally and probably forget about it? Wouldn't it be k3wl to link to a service, perform a search on 'Don Knuth' and get a shipload of snippets and interviews like these?

  • ...didn't have the computing resources available that today's college graduates have. So I feel the field had a barrier to growth then that is starting to lift now (I mean academically,...
    I always thought that CS was hardware neutral, but somehow I feel you are proving both of us wrong.

    BTW: How long would you spend trying to solve an NP-complete problem?

  • They are among the greatest of books; they hold the highest position on my computer book shelf (yes, I organize my books in a special order with each slot denoting a different rank, you got a problem with that?)

    They are hard to read; read them twice, memorize where different things are. Learn enough MIX to read the code and then rest assured that you have the tools to solve 99+% of the problems you'll ever encounter in the programming world. You have to have a love for the bits and the bytes to really and truly enjoy these masterful works of art.

    The big thing is knowing how to read those books, knowing where to look in them, and then keeping them close by. As with all things in computers, it's more important to know where to look to find the solution than to know it. These books have most of the solutions you'll ever need and if you can hang with them then the rest of the solutions will be trivial in comparison.

  • The thing that we need to know is whether or not Transmeta is going to implement an emulator for MMIX.

    This would doubtless be crucial for Volumes 4-6 of TAOCP, and the availability of real hardware for this would strike fear into the hearts of graduate students in computer science the whole world over...

  • Advogato did a quick move to Berkeley's xcf, thanks to the help of Manish Singh. It ought to be able to handle the load the next time it gets slashdotted. Now we just have to come up with some more cool content :)
  • Doesn't Crusoe have 128 bit (VLIW) instructions?

    Yes, as far as I recall the Crusoe is a VLIW, with 128bit "instructions" (I think Transmeta calls them "molecules", but almost every other VLIW calls them instructions -- the IA64/Merced/Itanium calls them "bundles", sort of -- it isn't really a VLIW)

    Means the registers are 128 bit each.

    No. It doesn't say anything about the register width at all. Nor do the Crusoe white papers. I expect that the integer registers are 32 bits, because they have to be to efficiently emulate a 32bit x86 CPU. I don't expect they are any bigger because that would consume more power, and more transistors that could better be doing something else if your goal is to emulate an x86. I expect the FP registers to be 80 bits, but they could be a somewhat different size.

    There are many different bit widths in most CPUs, so make sure you know which one is being talked about. For example, the PPro has a 128bit interface from the L1 cache to the L2 cache, but only has 32bit GP registers. The IA64 has 128bit wide "instruction bundles", but only 64 bit wide GP registers.

    I think they combine several x86 instructions in one crusoe 128 bit "instruction" (when they emulate x86 that is).

    Pretty close. A traditional VLIW (like, say, the MultiFlow) has several "functional units", and each "instruction" tells every functional unit what to do. Sometimes you have to tell some of them to do nothing (nop - no-operation).

    For example, imagine a VLIW that has one load/store unit, two math units, and a branch unit... If you want it to do a load, an add, a multiply, and a branch you stick that all in one VLIW instruction. If you want to do two loads, a store, an add and a multiply you need three VLIW instructions: load/nop/nop/nop, load/add/multiply/nop, store/nop/nop/nop (you don't get to stick it in two just because you have nops -- you are out of the right flavor of instruction slots).

    I think the Crusoe has one branch unit, and either two ALUs and one LOAD/STORE unit or two LOAD/STORE units and one ALU. I forget exactly which.

    Many x86 instructions can be made into one or two slots of a VLIW instruction. Some others will take more than one VLIW instruction, but leave some slots open for parts of other instructions. For example the x86 'ADD m32,imm32' instruction (add 32bit constant to a location in memory) will take a LOAD slot (to get the memory value), then in another bundle (so the LOAD has time to complete) the ALU slot (to do the add), and then in another bundle the STORE slot (to write the results back) -- leaving 3 open branch slots, some open LOAD/STORE slots, and a few ALU slots as well. The code morpher may be able to fill the other slots. It may not be able to fill them all. Some x86 instructions may take multiple full VLIW instructions to carry out (like FSINCOS).

    The Transmeta white papers have some really good examples of what their code morpher can do. But remember it isn't magic. The PPro, P-II, P-III, K6, and K7 can all execute two x86 instructions per cycle. The K7 can execute three. Actually in rare cases many of them can execute more than two, but other than the K7 they can only decode two per cycle, so only the K7 has a hope of keeping it going. The Crusoe can't get ahead of the "normal" x86 CPUs just by doing two instructions per cycle sometimes, or three, or even rarely four, if it spends too much time doing one or less. Or even if it spends lots of time doing two on code that the P-II could also do two!

    VLIW isn't magic. There is a reason every single VLIW to date has been a commercial failure. Transmeta may have hit on a way to make it not flop. This definitely addresses several shortcomings VLIW has had in the past.

    The Transmeta CEO said something like that on that webcast Stanford EE380 lecture. URL is http://stanford-online.stanford.edu/courses/ee380/main.html if you want a streaming version of the webcast.

    I'm sure he said the Crusoe was a 128bit VLIW. I can't believe the Crusoe has 128 bit GP registers though. Sorry. Remember there are lots of different bit widths in CPUs. Size of the integer/address registers is only one of them.

    My info comes from Transmeta's white papers (on their web site), Hennessy and Patterson's Computer Architecture textbook (I got mine from the university bookstore in '92; there are probably better books to read about VLIW from, but I don't know any), and last, but not least, common sense.

  • The thing that we need to know is whether or not Transmeta is going to implement an emulator for MMIX.

    I can't imagine that the current Transmeta chips would be good at it. While they are not x86's, they are designed to be good at emulating x86 code, not MMIX code. The underlying needs are pretty different. Even though the Transmeta chips are designed to be fairly efficient emulators of x86 code, and I doubt they have been designed to be poor emulators of other things, they will have a hard time emulating things sufficiently different from the x86. For example:

    • The Crusoe emulates the 32bit x86, so it almost certainly has only a 32bit ALU, so the 64bit adds on the MMIX (or Alpha, or V9 SPARC, or...) will require a 32bit add, a carry check, and another 32bit add (see the sketch at the end of this comment). Multiplies are much worse.
    • The Crusoe emulates a machine with almost no registers. Internally it has about 40 (from the white paper) so it can do static renaming. It won't emulate the MMIX (64, or 256 registers, I forget which) very efficiently that way. Nor the SPARC (~32 visible registers, but easy access to many more, plus modern implementations do renaming with many more than 40 registers internally), nor the Alpha (32 int registers, plus 32 FP, and via renaming way more than 40).
    • The Crusoe does the Intel style pseudo-IEEE (which Intel chose because (I think) the IEEE spec was incomplete), MMIX (I think) uses real IEEE, and SPARC and Alpha definitely both do.
    • The Crusoe's MMU page table is exactly like the x86 page table (I don't know if MMIX actually has one, but the Alpha and SPARC both do)
    • The "intresting" MMIX instructions that treat registers as a vector of 64 bit values and do linear algrbra style dot and cross product type things won't have an underliening implmentation, so they will need to be emulates the S-L-O-W way.

    That said, the Crusoe could probably emulate the MMIX better than most other 32bit CPUs, but an Alpha, or V9 SPARC, would do better. Maybe even the IA64.
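
    [A minimal C sketch of the first point in the list above: the add, carry-check, add pattern a 32-bit ALU is stuck with when emulating one 64-bit add. The helper name is invented.]

      #include <stdio.h>

      typedef unsigned int u32;   /* assumes a 32-bit unsigned int */

      /* 64-bit add built from 32-bit pieces: low add, carry check,
         high add -- three operations where a 64-bit ALU needs one. */
      void add64(u32 ah, u32 al, u32 bh, u32 bl, u32 *rh, u32 *rl)
      {
          *rl = al + bl;                /* low word, wraps mod 2^32 */
          *rh = ah + bh + (*rl < al);   /* carry out of the low word */
      }

      int main(void)
      {
          u32 rh, rl;
          add64(0x0, 0xFFFFFFFF, 0x0, 0x1, &rh, &rl);
          printf("0x%08X%08X\n", rh, rl);   /* prints 0x0000000100000000 */
          return 0;
      }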

  • Problem is that you do need calculus to understand the book, at least some of the stuff I was seeing.

    You hardly need to be a calculus expert to understand what an integral sign means.

    Most of the interesting things (unfortunately) have a shit load of math behind them.

    I did calc in high school. It hardly constitutes a "shitload" of math. You're right that a lot of things need basic math. It's got nothing to do with "elitism" or "keeping the rabble out", or "keeping it expensive" -- I don't know about you but my high school education was free. It's got to do with the fact that the moment you start doing anything quantitative or analytical, you inevitably need some math. In other words, real scientists need to know some math. The fact that you can throw together some code does not make you a computer scientist.

  • Interestingly, this "budding theory" predates any computer language. Before a language may be conceived, we have to understand things like Turing Completeness, "regular languages", context/context free, lexicons, protocol, computational complexity, space/time constraints, discrete computation theory, etc.

    Unlike engineering, computer science was _born_ in theory.

    In the past it has not been "See, here's how you program in C/C++/Java. And this is what a compiler and OS do. And here's 4 other things. Now go practice coding. Soon you'll be a scientist/engineer." That is, unfortunately, our current trend.

    In the past we have seen great teachers such as Turing, Dijkstra, and Knuth.

    If anything, we should _return_ to the theoretical roots of computer science to provide understanding for our current practice.

  • Except that when Knuth started work on TAOCP, there was no such thing as C yet.
  • I remember poring over vols. 1 and 3 in college and grad school. Sometimes, I could grok the analysis but not the algorithm. I spent many hours untangling Knuth's unique style of presenting algorithms to come up with structured code that I could really understand. These are probably the densest books I've ever read. You can spend hours trying to work through a problem and understanding the five lines of solution in the back of the book.

    Those are the only books from that time of my life (over twenty years ago) that I still find a need to consult every so often.

  • Bah, it would make it easier for them. Having to build your own (M)MIX emulator/assembler/monitor should be part of the fun.
  • I'm heading up a technical group evaluating proposals for a new, hopefully complete, set of scientific/mathematical fonts [ams.org] that we plan to make freely available - obviously Knuth's well-reasoned opinions are highly relevant. What he suggests, that somebody should be out there sponsoring font designers, is exactly what we're trying to do! But it sometimes seems hard to persuade publishers to part with their money for something they won't fully control. Even those who make tens of millions in profits seem reluctant to spend more than a few tens of thousands on something that will be freely distributed - despite the fact that it will likely save a lot in licensing and other proprietary-based costs. Is there some strange psychological problem here?

    Anyway, we're trying to work with both the Microsoft side of things and the Mozilla/MathML people, plus support TeX of course. As an advertising plug - if you would like to contribute your thoughts or experience (or cash) towards the effort, send me a note at apsmith@aps.org [mailto].

    And many thanks to /. for highlighting this wonderful interview with Knuth.
  • Aw... Poor Raph! I have enjoyed the low-traffic / high-quality nature of advogato.org since the end of last year, and I was hoping that nobody would post a link to your site on Slashdot. Well, not only do you get a link to it on Slashdot, but even better than that: you make it to the front page. Ouch!

    That being said, I read Knuth's interview when it was published and I liked it very much.

  • I am all for information about computers but the trend I am seeing is that before you ever get a slight chance to learn anything cool you end up spending 20+ years studying the most dry uninteresting stuff imaginable.

    Meaning that Computer Science is getting enough theory and history behind it that it resembles a real science (or engineering discipline). In the past, it has been more of: "See, here's how you program in C/C++/Java. And this is what a compiler and OS do. And here's 4 other things. Now go practice coding. Soon you'll be a scientist/engineer."

    Besides that, it's all cool. :)
  • One of the things that struck me when I was reading "Digital Typography" is the intensive study that you did, especially in the area of math typesetting. When I was writing papers, using math formulas in TeX, I just typed in the commands and out came the math and it looked pretty good to me. It shouldn't have been surprising, but it definitely struck me how much attention you paid to the best mathematics typesetting of past centuries.

    I do strongly think that when people start throwing computers at something, they think it's a whole new ballgame, so why should they study the past. I think that is a terrible mistake. But also, I love to read historical source materials, so I couldn't resist. I had a good excuse to study these things, and the more I looked at it, the more interesting it was. But I don't think responsible computer scientists should be unaware of hundreds of years of history that went before us. So it was just natural for me to approach it that way.

    I noticed, for example, that in the proprietary software market for publishing, that systems are only today acquiring features that have existed in TeX for a long time, for example whole-paragraph optimization. There's a big to-do about Adobe InDesign, which finally...

    They finally implemented the TeX algorithm.

    Did they implement the TeX algorithm?

    Yeah, that's what they said.

    Did you talk to the people?

    I met three or four of them at the ATypI meeting in Boston in October, but that was after I had heard about it, that some friends had found this in the documentation.

    Another similar issue is TrueType fonts. TrueType fonts have this property of including instructions, computer programs effectively, in the font, to do hinting.

    Well, I never met Elias or whatever.

    Sampo Kaasila?

    I don't know. I know enough about TrueType to know that it's a very intelligent design, that is similar to Metafont except that it strips out everything that's slow. So the way the hinting is done is by program, certainly. Of course, it came out maybe ten years after Metafont, so probably something got through somehow.

    There was the F3 font that Folio was making, if I can remember the name, what the people in industry called it. Some of the people that I had worked with on Metafont went into making font designs that were similar to TrueType, but have not been successful.

    There's a fairly major controversy with TrueType right now, in that there are a number of patents that are owned now by Apple. It's kind of interesting to me that that is the case even though it's for the most part derivative work of what was in Metafont.

    I've been very unhappy with the way patents are handled. But the more I look at it, the more I decide that it's a waste of time. I mean, my life is too short to fight with that, so I've just been staying away. But I know that the ideas for rendering... The main thing is that TrueType uses only quadratic splines, and that Type1 fonts use cubic splines, which allow you to get by with a lot fewer points where you have to specify things.

    The quadratic has the great advantage that there's a real cheap way to render them. You can make hardware to draw a quadratic spline lickety-split. It's all Greek mathematics, the conic sections. You can describe a quadratic spline by a quadratic equation f(x, y) so that the value of f(x, y) is positive on one side of the curve and negative on the other side. And then you can just follow along pixel by pixel, and when x changes by one and y changes by one, you can see which way to move to draw the curve in the optimal way. And the mathematics is really simple for a quadratic. The corresponding thing for a cubic is six times as complicated, and it has extra very strange effects in it because cubic curves can have cusps in them that are hidden. They can have places where the function will be plus on both sides of the cubic, instead of plus on one side and minus on the other.

    With the algorithm that's like the quadratic one, but for cubics, it turns out that you can be in something that looks like a very innocuous curve, but mathematically you're passing a singular point. That's sort of like dividing by zero even though it doesn't look like there's any reason to do so. The bottom line is that the quadratic curves that TrueType uses allow extremely fast hardware implementations, in parallel.
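
    [To illustrate the kind of arithmetic being described: a minimal C sketch of the classic midpoint algorithm for one octant of a circle, the simplest conic. It follows the curve pixel by pixel using only the sign of f(x, y) = x^2 + y^2 - r^2, with small integer updates and no multiplies in the loop. This illustrates the general technique, not TrueType's actual code.]

      #include <stdio.h>

      /* Trace one octant of the circle x^2 + y^2 = r^2. f is a scaled
         decision value: its sign says whether the midpoint between the
         two candidate pixels lies inside or outside the curve. */
      void circle_octant(int r)
      {
          int x = 0, y = r;
          int f = 1 - r;                    /* ~f at the first midpoint */
          while (x <= y) {
              printf("(%d,%d)\n", x, y);    /* stand-in for "set pixel" */
              x++;
              if (f < 0)
                  f += 2 * x + 1;           /* midpoint inside: step east */
              else {
                  y--;
                  f += 2 * (x - y) + 1;     /* midpoint outside: step southeast */
              }
          }
      }

      int main(void) { circle_octant(10); return 0; }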

    The question is whether that matters, of course, now that CPUs are a zillion times faster.

    But for rendering, Metafont was very very slow by comparison, although I'm amazed at how fast it goes now. Still, it has to be an order of magnitude better, and certainly that was a factor in getting TrueType adopted at the time that it was, because machines weren't that fast then. So TrueType was an intelligently chosen subset, but certainly all the ideas I've ever heard of about TrueType were, I believe, well known in the early '60s.

    Back to this issue of preserving the past. I was reading some papers of Edsger Dijkstra. For a while, he used handwritten manuscripts and then a typewriter to actually distribute the work. And his notation became much more typewriter-like, in that he would use an underlined A or a boldfaced A instead of the traditional \forall symbol.

    I've gotten some of his handwritten notes, but I don't remember the typewritten ones.

    I was looking at the proceedings of the Marktoberdorf summer school in '90, where there were a couple of papers by him and his group. In any case, it occurred to me that TeX has made the traditional mathematical notations so accessible to practicing computer scientists, students, researchers, etc. It's very likely that if there hadn't been something like TeX, in other words if mathematical typesetting had remained strictly in the domain of book publishers, and people who did publishing as their profession, it's likely that the standard notations in computer science would have become much more typewriter-like, kind of ASCII-ized.

    That's interesting.
  • Actually, as a current student of "the art," I can say that there is quite a bit more to CS than one may imagine. Analysis of discrete algorithms, combinatorics, and number theory play quite a large role in CS (we use Cormen, Leiserson, and Rivest's Introduction to Algorithms, which I find to be an excellent book); however, there is also the formal logic side. My formal logic class used Manna and Waldinger's The Deductive Foundations of Computer Programming. Honestly, I didn't care for the class and hated the book, but if you truly wish to follow "The Art," your best path would probably be to forget learning computer programming and languages for a while.

    Learning a new language is a process that should take at most 2-3 weeks (maybe more if you are moving from a structured (C/C++/Pascal) to a functional (Lisp) paradigm), but the thrust of computer science is hardly the syntactical usage of one particular programming language, nor has it ever been. When I write programs (I'm currently working on an operating system, and have done 3D games, paint programs, and others in the past), my time division is about 35% design, 15% programming, and 50% realizing that I typed >= rather than >, or + rather than -.

    Design is entirely language- and platform-neutral. I create pseudocode as I go to explain what my algorithm, data structure, or function should do. That is what CS as "the art" teaches: design independent of all bounds. Programming is then the implementation of that design; it will teach you things like platform-specific memory alignment, cache optimizations, etc., and how to take your elegant design and make it scream on the targeted hardware. It's much easier to learn the facts for a hardware platform and implement a canned algorithm than it is to learn the philosophy behind algorithm design. Computer science will teach you design, computer programming will teach implementation, and experience teaches debugging.

    If you just want to write cool programs, then Knuth's (or Manna's, or CLR's) books are entirely unnecessary, and you would be better served by a reference manual such as the Red Book. However, if you do find the art interesting, then be prepared for a lot of seemingly useless math, esoteric concepts, and trips to the mental ward. I find most of it very interesting, but there is a reason my friends expect me to end up in a rubber room before I graduate.
  • Can somebody post an example of literate programming? It sounds interesting. Is it anything like Sun's JavaDoc standard? It sounds much more verbose and functional than JavaDoc, though.

    Jazilla.org - the Java Mozilla [sourceforge.net]
  • Headline: Dr. Knuth and /. are wanted by the FBI for perpetrating a DDoS attack against www.advogato.org. The wicked combination of Knuth's name and /.'s front page all but splattered the funny-tasting fruit ...
  • Most of the interesting things (unfortunately) have a shit load of math behind them. That's how they "protect" their profession from the "rabble" and keep it expensive and elite.
    Bull. Unlike politics or management (or post-modernism), there is no point in obfuscating the details of science and engineering. If there is mathematics involved, it is because you can't properly describe the field without it. Do you think you can describe the trajectory of an artillery shell without differential equations? Even closer to home, can you effectively talk about code without mentioning "structures" that have nothing to do with buildings, "functions" that have nothing to do with the purpose of something, and "objects" which have no concrete existence?

    The language of calculus, of engineering, and of computer science is obscure to the uninitiated. This is not because it is obfuscated; it is because it is specialized and terse. It has to be specialized and terse to be precise enough to convey concepts correctly. If it were meant to "protect" things, it would be constructed to blur the concepts and keep others (even initiates) from understanding them.

    The world isn't a simple place. Some things require specialized knowledge, and the specialists to go with it. Specialists give rise to jargon; it's unavoidable. Stop being so bitter, stop whining that it's not written at the level of Go, Dog. Go!, and stretch your brain to accommodate it.
  • These books are not meant for script-kiddie hedge wizards. They were written only for serious practitioners of The Art.

    Very technically, I have been programming for at least 3 years now. I spent 2 years learning that dying language called Pascal (ever seen any apps with source that used Pascal as a major effort?). I have been learning C++ for about a year. I genuinely wish to practice "The Art," as you call it.

    However, I cannot easily gain the equivalent of a PhD just to read one book (that is essentially unacceptable).

    I will check out the other books that have been recommended to me in the replies and see what they can do.

    I just get really ticked off when people imply that I am a script kiddie just because I can't program my own version of the Linux kernel in Lisp or something.
  • After fighting through the /. denial-of-service effect, I finally made it to the end of the interview. Honestly, I had a hard time just keeping up with the interview. I haven't read any of his books, but if this is any indication, I don't think I will. I know he has a brilliant mind, but the abstractness of his thinking is tough to keep up with and made the interview rather dry.


    I am about 17 hops away from the actual web server. How close are you exactly? Could you post the article in text form?
  • Oh, for Heaven's sake, drop the calculus requirements altogether and just teach us REAL math when it comes to CS, and nothing else... we need far more of it... I've heard many, many people say that Knuth's books are wonderful and should probably be required reading for CS majors.


    The problem is that you do need calculus to understand the book, at least some of the stuff I was seeing. Those funny little curly S things and those little ' marks sure don't look like calculus, do they?

    Most of the interesting things (unfortunately) have a shit load of math behind them. That's how they "protect" their profession from the "rabble" and keep it expensive and elite.
    I remember poring over vols. 1 and 3 in college and grad school. Sometimes I could grok the analysis but not the algorithm. I spent many hours untangling Knuth's unique style of presenting algorithms to come up with structured code that I could really understand. These are probably the densest books I've ever read. You can spend hours working through a problem and then understanding the five lines of solution in the back of the book.


    They have updated versions of the books, apparently using a RISC-type machine. I was looking over the volumes, and I thought he had released the others; I guess I was wrong. Exactly how many years has he been working on these things?

    I also understand that he was a professor of CS for a while at some big university. Has anyone ever taken a class from him?

  • Oh, but they look so lovely on the bookshelf. Also, they really, really intimidate project managers. Finally, when all else fails: if you throw one at someone, they stay thrown!
  • Incidentally, what level of math expertise are they assuming? I have taken up through differential calculus and still hardly know a damn thing contained in them.
    The math is mostly explained in volume one. There is also a very good book, Concrete Mathematics, by Graham, Knuth, and Patashnik, that goes into much more detail.

    Here's the Amazon [amazon.com] link for reference (buy it wherever you want).

    The part on hypergeometric series has been mostly superseded by later work. See the book A=B (Amazon [amazon.com] or FREE download [upenn.edu]).

  • The definitive examples of Literate Programming are two of Knuth's own works: "TeX: The Program" and "METAFONT: The Program." Knuth wrote these using his original LP tool, WEB. That means he wrote the code part in Pascal, which in itself amazes me. =) These are available as books, and of course in the original source code format tex.web and (I'm guessing) metafont.web. You can get tex.web off any of the CTAN sites (http://www.ctan.org/). The output is really quite beautiful.

    Knuth now uses CWEB, an LP tool for writing programs in C and C++ (though from everything I've read, it isn't all that hot at C++). You can get a copy of the program, which is itself an example of literate programming, at

    ftp://labrea.stanford.edu:/pub/cweb

    Look at common.w, ctangle.w, and cweave.w. When you run 'make doc' you get dvi files for the programs, which you can read with xdvi or kdvi.

    Many people who post to the LP Usenet group comp.programming.literate seem to use an LP tool called Noweb. It was developed as a language-independent tool, and it does not do the pretty-printing that WEB and CWEB do. However, it uses LaTeX as its default typesetting language (which is simpler to use than raw TeX), and it can output indexed HTML pages.

    http://www.eecs.harvard.edu/~nr/noweb/

    There are examples of Noweb code on that page. Noweb is what I use for my Perl and Java programming.

    The concept is not quite the same as Javadoc. I actually embed Javadoc comments in my own Java code even though I'm using noweb. Javadoc is excellent at documenting the API of the black box that is a method, or giving an overview of a class, but it isn't as good for detailed explanations. And the best thing about LP is that you can take a section of code and abstract it into a sentence that you later expand into real code. At times, it is nice to be able to use that instead of a function.

    This is an example of some bits and pieces from an Apache module I wrote:

    [ ...Intro deleted... ]
    \section{Module Structure}

    A module is made up of the following parts

    <<*>>=
    <<include system header files>>
    <<include apache module header files>>
    <<preprocessor definitions>>
    <<declare functions>>
    <<define module hook-in>>
    <<define module functions>>
    @
    [...]
    \section{The [[check_uptime]] function}

    Now that we're hooked into the \Apache\ module list, we define
    [[check_uptime]]. This function takes a [[request_rec *]] and returns a
    status code declaring whether or not the request is allowed to proceed.

    <<define module functions>>=
    /*
     * returns HTTP_INTERNAL_SERVER_ERROR if the time elapsed between
     * this request and the server start-up time is not greater than
     * warmup_time; otherwise return DECLINED.
     */
    int check_uptime(request_rec * r)
    {
        <<declare [[check_uptime]] variables>>
        <<set [[warmup_time]] from env variable [[MOD_WARMUP_SEC]]>>
        <<return DECLINED if [[warmup_time]] == 0 or if this is not a /cgi/ request>>
        <<set up shared memory segment [[shm_id]]>>
        <<set [[startup_time]] from [[shm_id]] or [[r->request_time]]>>

        /*
         * if warmup_time seconds have elapsed, decline; else deny
         */
        sec_left = ( warmup_time - ( r->request_time - startup_time ) );
        if ( sec_left <= 0 )
        {
            return DECLINED;
        }
        else
        {
            <<write log entry>>

            /*
             * This code returns a ``Temporarily Unavailable''
             * page on HW servers
             */
            return HTTP_INTERNAL_SERVER_ERROR;
        }
    }
    @
    [...]

    And the other parts are expanded in their own sections. Noweb will auto-index this because it's a C program, and I end up with an indexed program that has a table of contents and nicely formatted explanations with each unit of code.

    I can take each unit of code and write, in however much detail I want, an explanation of the what and the why (and even include a proof if it's a complex algorithm). TeX/LaTeX also lets you put in real book references if you are implementing someone else's idea out of a book or paper.

    Many times, just having the ability to use a real sentence is a help. Using "return DECLINED if [[warmup_time]] == 0 or if this is not a /cgi/ request" is much nicer than using "check_early_decline()" or something. It's clear to the reader what is going on. They don't have to jump down to a function or section just to read the details of what that chunk does. Each unit of code is very readable by itself and concentrates on one or two specific ideas.

    Jim
  • by raph ( 3148 ) on Thursday February 17, 2000 @08:05AM (#1265434) Homepage
    The site is back up now, but slow. There are two problems: one expected, one not. The expected problem is that Advogato has only a 128 kbit/s upload (on an ADSL). Thus, it's gotta be really slow to the outside world.

    Second, the network has gone down twice. I have no idea why. The NIC is a Tulip something-or-other. In both cases, /etc/rc.d/init.d/network restart brought it back up.

    The Advogato server code (mod_virgule) is handling the load fairly nicely. The load average is hovering around 0.65, and memory usage is quite reasonable. This is in spite of the fact that all pages are being rendered dynamically from XML.

    It would be interesting to try to host the site on a really high bandwidth line.
  • by raph ( 3148 ) on Thursday February 17, 2000 @06:52AM (#1265435) Homepage
    Advogato is on the slow end of a DSL. I was hoping that when it got slashdotted, somebody would put up a mirror. Oh well. I'll see what I can do to nurse the machine through the day.
  • by Chris Siegler ( 3170 ) on Thursday February 17, 2000 @07:24AM (#1265436)

    Incidentally, what level of math expertise are they assuming? I have taken up through differential calculus and still hardly know a damn thing contained in them.
    The problem is that calculus doesn't help you in analyzing algorithms, which is what CS is all about. So what to do? The book Concrete Math [fatbrain.com] is an expansion of the first part of volume one, and is a much, much easier read. After reading it, you'll be set to tackle all the math in Knuth.

    Of course, it's not a small book, and it's hard to get motivated to learn something when you don't know WHY you need it. So I'd suggest just skimming the math in Knuth and working on MIX and the programming stuff, then going back later.

    In other words, eat your cake and ice cream first, and then tackle the broccoli.

  • I took the time and looked at a couple of his books. However, I noticed really quickly that I could hardly understand a fraction of what was contained therein.

    You definitely need to approach these books with the right attitude. They are not light reading by any stretch of the imagination. Each section requires careful thought for complete understanding.

    You also have to get used to Knuth's writing style. I was reading a section on floating-point representation and came across a sentence that read:

    The radix can be interpreted as being on the extreme left (liberally) or on the extreme right (conservatively).
    I don't know how long I stared at that sentence trying to figure out what he was talking about. Finally, I realized he was making a joke! (For those who don't get it: in the US, political liberals are described as being "on the left" and conservatives "on the right." Why, I have no idea...)

    -y

  • When is The Art of Computer Programming going to be finished?

    I took the time and looked at a couple of his books. However, I noticed really quickly that I could hardly understand a fraction of what was contained therein.

    Incidentally, what level of math expertise are they assuming? I have taken up through differential calculus and still hardly know a damn thing contained in them.

    I am all for information about computers, but the trend I am seeing is that before you ever get a slight chance to learn anything cool, you end up spending 20+ years studying the most dry, uninteresting stuff imaginable.
  • by mko ( 117690 ) on Thursday February 17, 2000 @11:20AM (#1265439) Homepage
    Since nobody has posted the whole interview yet, I've put the page up here [rwth-aachen.de].
  • by Nate Eldredge ( 133418 ) on Thursday February 17, 2000 @11:29AM (#1265440)
    I've got the file now; here [hmc.edu] is a mirror. I suspect my college's network can handle the load better than an ADSL.

    Enjoy.
