News

Claude E. Shannon Dead at 85

Multics writes "It is with considerable sadness I report the death of the parent of Information Theory -- Claude Shannon. Here is his obituary over at Bell Labs. He was 85."
This discussion has been archived. No new comments can be posted.

  • 85 isn't a bad innings (ref. cricket score). Sad to see visionaries go, but it's not like he had a short life. In that regard I would have felt more affected by Alan Turing's death.
    --
  • by gowen ( 141411 ) <gwowen@gmail.com> on Tuesday February 27, 2001 @02:52AM (#399172) Homepage Journal
    Why the US Patent Office has granted patents [gailly.net] that violate his "so-called" theory, and they never make mistakes
  • by Uri ( 51845 ) on Tuesday February 27, 2001 @02:56AM (#399173)
    ...check out Shannon's classic 1948 paper "A Mathematical Theory of Communication". It is available in postscript [bell-labs.com] (460 Kb), gzipped postscript [bell-labs.com] (146 Kb), and PDF [bell-labs.com] (358 Kb) formats, here [bell-labs.com] on the Bell-Labs [bell-labs.com] site. Warning, though: it's 55 pages, including 7 appendices.
  • Claude Elwood Shannon was born in Petoskey, Michigan on April 30 [1916]. One of his childhood heroes was the inventor, Thomas Edison, whom he later learned was a distant cousin. Both were descendants of Colonial leader John Ogden.
  • by Alien54 ( 180860 ) on Tuesday February 27, 2001 @03:00AM (#399175) Journal
    Shannon was renowned for his eclectic interests and capabilities. A favorite story describes him juggling while riding a unicycle down the halls of Bell Labs.

    And we wonder where people get the ideas to do things like this.

    Obviously this sort of thing has a long standing tradition.

  • One quote from the obituary interested me:

    "Another example is Shannon's 1949 paper entitled Communication Theory of Secrecy Systems. This work is now generally credited with transforming cryptography from an art to a science."

    By providing a firm mathematical basis, Shannon managed to turn cryptography from a messy discipline without any real means of proof into a real science where mathematical methods could be used to ensure the security of cryptographic systems. Today we wouldn't dream of thinking of cryptography as an art; it is well established as a branch of applied mathematics.

    But despite similar techniques being available for programming, coders still stick to their "tried and tested" methods of programming which owe less to science and more to guesswork. Rather than using formal techniques to ensure that code is 100% correct, they would rather just sit down and knock something out that sort of works. And this has led to the proliferation of buggy and insecure software we see today.

    Why is it that programmers feel that they are somehow above the need for these techniques? It seems to me to be pretty arrogant to avoid using tools that would make their code better, almost as if they would rather not have perfect code. They would sneer at a cryptographic system that hadn't been rigorously proven, but be happy with any old hack that another coder knocks out whilst eating cold pizza and drinking Mountain Dew.

    Why is there such a reluctance to produce properly proven code, when they demand it for other algorithmic systems?

  • Programming is Logic
    Cryptography is based on mathematics, so it only makes sense that it relies on mathematical methods.

    Human logic and trains of thought can't be reduced to mathematical equations.
    Do you even program at all?

    NB. When you get round to writing the formula that will knock out an OS in perfect time, first off with no bugs, I will send you a check for everything I own. Until then shove your head up your troll ass

  • by thycotic ( 143142 ) on Tuesday February 27, 2001 @03:13AM (#399178)
    This guy truly was quite revolutionary in his thinking regarding chess playing! I am currently developing some chess playing software ... during my research I was astounded by the fact that his theories in this field are still the cornerstone of the game playing behind most modern chess programs!

    The IBM Deep Blue site mentions him ...
    http://www.research.ibm.com/deepblue/meet/html/d.3.3a.html [ibm.com]

  • Why is it that programmers feel that they are somehow above the need for these techniques? It seems to me to be pretty arrogant to avoid using tools that would make their code better, almost as if they would rather not have perfect code. They would sneer at a cryptographic system that hadn't been rigorously proven, but be happy with any old hack that another coder knocks out whilst eating cold pizza and drinking Mountain Dew.

    I agree! Why do stupid humans try to program the computer? Furthermore, I also think hardware engineers should give up - after all, all this hardware could be designed perfectly and automatically by machines.

    Er, wait a minute...
  • Why the US Patent Office has granted patents that violate his "so-called" theory, and they never make mistakes

    NB - Related Pages here [gailly.net] and here [gailly.net]

    Well, the major difficulty I see here is that they will have subtle problems trying to make it commercially viable, since they are robbing Peter to pay Paul, so to speak. They would never really get off the ground. They would have major hassles licensing it to a major player, for example.

    Moral/legal considerations aside.

  • by Anonymous Coward
    As an engineering student I only passed the class in information theory by the skin of my teeth. As a network engineer I use (or misuse) his insights (bandwidth, data rate, ...) on a daily basis. Truly a giant has passed.
  • Back in '91 (through emails), I spoke to Mr. Shannon a few times while researching cryptography and the applied mathematics involved for my thesis. I truly respect him for the fact that he took time out of his schedule to actually respond to me. Through his emails, I found him to be sharp-witted, humorous, and charismatic. I'll miss him.
  • The main problem of software development is the messy area of knowing what the program is supposed to do in the first place. Once you know that with 100% certainty, writing a program to do it is (infeasible/intractable problems aside) relative child's play - formal methods or not. You can't prove a program's correctness if you don't have a definition of 'correct'...

    --
  • by localroger ( 258128 ) on Tuesday February 27, 2001 @03:32AM (#399184) Homepage
    Why is it that programmers feel that they are somehow above the need for these techniques?

    Because these techniques don't exist. This was proven by Alan Turing in his paper Computable Numbers. The only way to "prove" a piece of software is to run it.

    Oh, you're probably thinking of things like OO and top-down and all those gimmicks they teach in CS courses. Well, sometimes those techniques help you write better code (at a cost) and sometimes they make you write worse code (because of the costs).

    Top-down, for example, makes sense when you have 100 programmers doing the software for a bank. It makes no sense at all and results in inferior code and user interfaces when you have 1 or 2 guys writing code for a PLC.

    A line pops into my head from Greg Bear's book Eon. A character from a civilization 1,200 years advanced beyond ours is explaining to a contemporary person why they still have conflict. To paraphrase, there are limited resources and even very wise and educated people will inevitably come to have different ideas about how to solve problems. Each will want to use the limited resources in his way, because he believes it is the best way. Thus there is conflict.

    This is why the language, OS, and technique wars will always be with us. What works in a palm does not work on a desktop, neither works in a PLC, and none of those things works for an enterprise server. And human beings are still subtle and complex enough compared to machines that attacking a problem of the right scale as an "art" will produce superior results.

  • when are we going to establish a computing hall of fame? might he be a member of the inaugural class?
  • by gattaca ( 27954 ) on Tuesday February 27, 2001 @03:33AM (#399186)
    ...they just descend into a state of increasing disorder.
  • Everyone here knows the programming references in this troll are nonsense, but just a note on the crypto side.

    For the most part, mathematical methods do not *ensure* the security of our systems. We have no proof that, say, AES is secure; all we know is that the world's greatest cryptanalysts have attacked it as hard as they can, and come to the conclusion that there won't be a practical break for a long time to come. In theory, it could be heinously broken tomorrow. We don't have a proof that it won't be, just decades of experience in attacking cryptosystems and finding out what makes them weak in practice.

    Cipher design is a mixture of a science and an art. There's certainly artistry in the design of AES!
    --
  • that Shannon did a lot of work on entropy.

    Does this mean that research into perpetual motion would be a way of cheating death for the scientists involved?

  • Ah but you missed an important detail, grasshopper. The algorithm can compress any file by one byte, it is true, but any possible implementation is of infinite size.
  • "But despite similar techniques being available for programming, coders still stick to their "tried and tested" methods of programming which owe less to science and more to guesswork. Rather than using formal techniques to ensure that code is 100% correct, they would rather just sit down and knock something out that sort of works. And this has led to the proliferation of buggy and insecure software we see today. "

    You don't program, do you?
    Programming is usually an iterative process - write a bit, test it, change the spec a little, write some more etc etc. This is before the user gets to see it. Then it goes through another set of the above.

  • when are we going to establish a computing hall of fame?

    http://www.computerhalloffame.org [computerhalloffame.org].

    Beware! It's very flash-intensive.
    ---

  • by Anonymous Coward
    ((((((You(obviously)don't((know)much)about((LISP)or)))you))wouldn't)make(such((a)naive))remark)
  • Turing did not prove that you can't prove any software. Turing did prove that you can't prove all software.

    There's a huge difference. For many problems you can build a solution that is provably right. On the other hand, given a "random" program you can't generally prove it; i.e., there is no systematic method to "prove" arbitrary software, but there are methods to turn certain provable models and descriptions into programs that preserve their properties. That's what formal languages are about.

  • by ch-chuck ( 9622 ) on Tuesday February 27, 2001 @03:58AM (#399194) Homepage
    Another important idea is his application of Boolean algebra to switching circuits - as in this [mit.edu] paper - "A Symbolic Analysis of Relay and Switching Circuits".
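
    The thesis's core observation is easy to restate in modern terms: contacts in series behave like AND, contacts in parallel like OR, and a normally-closed contact like NOT, so any relay network computes a Boolean function. A minimal sketch of that idea in Python (my own illustration, not code from the paper):

    def series(a, b):
        # Two contacts in series conduct only if both are closed.
        return a and b

    def parallel(a, b):
        # Two contacts in parallel conduct if either is closed.
        return a or b

    def normally_closed(a):
        # A normally-closed contact conducts when its controlling coil is off.
        return not a

    # Example circuit: the lamp lights if x is closed AND (y OR NOT z) conducts.
    def lamp(x, y, z):
        return series(x, parallel(y, normally_closed(z)))

    # Enumerating the truth table shows the circuit is just a Boolean function.
    for x in (False, True):
        for y in (False, True):
            for z in (False, True):
                print(x, y, z, lamp(x, y, z))
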
  • by localroger ( 258128 ) on Tuesday February 27, 2001 @04:01AM (#399195) Homepage
    As the man who single-handedly turned cryptography into a science, Claude E. Shannon is in many ways responsible for the paranoid nature of today's society.

    While Shannon's contribution to crypto was large (like his contributions in other fields), this is an overstatement which greatly distorts Shannon's own motivations.

    Shannon's article was the capstone on a body of work which was mostly created during World War II by people like Alan Turing and John von Neumann, as well as dozens of equally important but less famous people who worked at places like Bletchley Park.

    If you want a "sinister figure" why not pick on John von Neumann, who instead of riding his unicycle in the corridors was working hard to make the H-bomb a reality (alongside the original Dr. Strangelove himself, the truly sinister Edward Teller).

  • ...for Lots of Irritating Single Parentheses.
  • for that alone, RIP
  • There is a large field in computer science dedicated to research about proving properties of programs, it's called formal methods. However, in practice this is still only used for hardware and very low level software.
  • ... The NY Times [nytimes.com] gives his birthdate as April 30, 1916. That'd only make him 84. Not that this is really all that important.
  • Talking of cricket, I am reminded of the death of the most respected cricketer of all time: Sir Don Bradman.

    Story [sify.com]

  • The New York Times obit can be found here [nytimes.com]. Of course this will expire way too quickly because the NYT is only the newspaper of record for the current week.
  • by localroger ( 258128 ) on Tuesday February 27, 2001 @04:23AM (#399202) Homepage
    ...and that word is trivial. Real-world applications quickly achieve levels of complexity that make them unprovable either in a practical sense or theoretically, depending on the application.

    Formal methods allow you to reduce problems larger than you can hold in your head to "trivial" status, or to distribute them across multiple programmers without affecting their triviality. When the scale of the problem and the resources available to solve it are of the right size, this makes such methods appropriate.

    Formal methods have two major costs. The first is that you can't easily back up and punt; when you realize half-way through that the formal design which looked good on paper is eating resources or presents a poor user interface in practice, it is very hard to go back and improve things. The second is that they increase resource requirements across the board, because you cannot apply creative solutions which become apparent midstream to streamline the project.

    Sometimes the need for a formal method is due to the use of another formal method. For example, if you use top-down (appropriate for a large project) you are forced into all-up testing, which means you need strongly typed languages, etc. or you will be debugging forever. You can program bottom-up and prove each module as a trivial exercise as you build to higher and higher complexity, but your final project (while efficient and bug-free) may not resemble the formal spec much. This may not be a problem for a game; it may be a big problem for a bank.

    Strongly OO languages like C++ which dereference everything eat processor cycles. It may not be apparent why this is a problem until you try to implement a 6-level interrupt system on an 80186 (as one company I know of did).

    So yes, it is possible to prove some software, but this is not true of most software on a useful scale. When dealing with interrupts or a network or multithreaded or multiuser anything, there are so many sources of potential error that the formal system will bog down. You simply cannot check everything. You end up with either your intuition or no project.

    Turing was interested in the problem of duplicating human intelligence. Thus, he aimed his theories at a high level of general abstraction, an "umbrella" which would cover his goal no matter what its eventual form might turn out to be. One result of this is that Computable Numbers is sometimes cited to support the concept of free will. Another is, as computers get more and more complex, Turing's take on the situation becomes more accurate and useful. His point was that even a fully deterministic machine can be beyond deterministic prediction. It's a shame he didn't live to see his vindication by Mandelbrot.

  • I pulled the age from the Bell Labs obit, which obviously computed it incorrectly.

    --Multics
  • Or compress any two bytes by one byte, for that matter, on average. Or one bit.

    Or one petabyte by one bit, on average, for that matter.
  • > Rather than using formal techniques to ensure
    > that code is 100% correct, they would rather
    > just sit down and knock something out that sort
    > of works. And this has led to the proliferation
    > of buggy and insecure software...

    ...and multibillion-dollar corporations. I wish I had knocked out some buggy, insecure code that made me a billionaire in an overnight IPO.
  • Rather than using formal techniques to ensure that code is 100% correct, they would rather just sit down and knock something out that sort of works. And this has led to the proliferation of buggy and insecure software we see today.

    Correctness proofs were really big when I was starting to get seriously interested in comp sci in the early 80s. Back then one of the motivating factors (at least in funding) was the perceived need to design systems of unprecedented size and complexity. There was considerable talk about space-based ballistic missile defense systems. People realized early on that it was going to rely on software systems that had to be more complex and better performing than anything we'd ever built.

    I have limited faith in the ability of proofs to show that entire systems such as SDI will work, since the proofs are themselves likely to be incomprehensibly bulky and the requirements documents unreliable guides to real-world needs. I do think proof has some merit in improving the quality of system components. But in realistic development projects you put the effort where the greatest benefit is derived, for example in better understanding user requirements and input characteristics. I'd guess that energetic maintainability reviews would catch the vast majority of mistakes that mathematical proof would. True, this is reliance on art -- but mathematics itself is an art.

    My personal take-home lesson as a programmer from the mathematical proof movement was to look at control flow within a function differently. Instead of looking at a section of a function, such as a loop, in minute detail, I tend to think now in terms of how the situation is changed when the loop exits from when the loop was first entered. I view code in larger chunks than individual statements. In a way, I think about sections of a function the way I'd analyze a recursive function. This is not at all the way people are taught to program, which is to parallel the operation of the computer in their minds.

    Getting back on topic, I was surprised to see this article on Slashdot, because even though I've been programming for a long time, I had supposed Shannon to be long departed because of his incredible papers of the 40s -- I tend to put him up with Bohr and Turing and other giants of early 20th century science. I don't work with communications theory, so my work doesn't explicitly and technically rely upon his. But over the years some of Shannon's insights have been like spiritual guides for me, for example the need for a common frame of reference when communicating with somebody. The Shannon coding theorem taught me about the marginal effort required for perfection vs. practically sufficient performance.

    Which, in a way, brings me back to my attitude on the off topic topic of correctness proofs.

  • Problem is, in general for any software complex enough to be useful, proving it correct is just as error-prone as writing code (if it is even possible at all). Maybe more so - you have to prove that the code matches the mathematical model, then demonstrate correctness within the model, so there are two very distinct areas for error.

    To ensure that the proof is correct, you have to use many of the same techniques that you could just apply to the code.

    Tom Swiss | the infamous tms | http://www.infamous.net/

  • You not so smart...

    True, but irrelevant. Intelligence has limited practical application. Sometimes you have to be ignorant of what cannot be done so that you can accomplish the impossible. I don't know much, but everything I do know I learned from reading the motivational posters put up by my employer.

    How do you compress a file of size 1 byte by 1 byte

    Ah, that is part of my proprietary, patent-pending method. Did I mention that my algorithm is not guaranteed to halt? (Note that by the law of the excluded middle, anything that cannot be proved false must be true.)

    By the way, grasshopper -- a sequence doesn't have to be one bit long to be irreducible (by means other than my patent-pending non-halting algorithm of infinite size, of course). Consider this riddle -- what is the definition of a random sequence?

  • I learned to rollerblade between the (softly-padded) cube walls of Lucent
    8-)

  • ack! then its no hall of fame of mine! :)
  • You can prove that a particular program implements a specification correctly, VDM and Z people have been doing this for a long time.

    You then run into the problem of proving the specification is correct...
  • Yes, but you're failing to take into account the extra 3 months in his life "missing" from our time scale, from when he was taken aboard an alien spacecraft by special arrangement with the DoD.

    --
  • Relax and click around a bit at the site. You'll find a non-Flash version if you look for it, and it's worth seeing. Annie.
  • You have marked yourself too stupid to use a computer.

    Shannon's work provided the groundwork for programs you use all the time, whether you know it or not!

    Data compression
    Huffman's compression algorithm reads like an appositive to Shannon's work.

    Error Correction
    Like your data to arrive in one piece? No corruption?? Thank Shannon for providing the groundwork for efficient error correction.

    Encryption
    Without Shannon's take on information theory, encryption would be the same hunt-and-peck game that was played during WW2.

    Have a little respect for something you couldn't do.
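
    As a concrete example of the first point, Huffman's algorithm builds a prefix code whose average length approaches Shannon's entropy bound. A minimal Python sketch (the standard textbook construction, not any particular poster's code):

    import heapq
    from collections import Counter

    def huffman_code(text):
        # Min-heap of (weight, tiebreak, subtree); a subtree is a symbol or a pair.
        heap = [(w, i, ch) for i, (ch, w) in enumerate(Counter(text).items())]
        heapq.heapify(heap)
        i = len(heap)
        while len(heap) > 1:
            w1, _, left = heapq.heappop(heap)
            w2, _, right = heapq.heappop(heap)
            heapq.heappush(heap, (w1 + w2, i, (left, right)))
            i += 1
        codes = {}
        def walk(node, prefix):
            # Assign 0 to the left branch and 1 to the right branch.
            if isinstance(node, str):
                codes[node] = prefix or "0"   # degenerate single-symbol input
            else:
                walk(node[0], prefix + "0")
                walk(node[1], prefix + "1")
        walk(heap[0][2], "")
        return codes

    print(huffman_code("abracadabra"))   # frequent symbols get the short codewords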

  • you have to prove that the code matches the mathematical model

    Not. The idea is to translate automatically from model to code using robust, time-proven translators (in the same way that one can more or less trust a non-Red Hat gcc -O2 to translate C code correctly, at least on an x86; or in the way you trust the CPU to accurately execute the machine-language instructions you feed it). Interesting properties are proven on the model.

    Perhaps you thought of, say, UML when you read "formal methods"? That's where the disagreement comes from. UML is not a proving tool, it's a design helper, roughly as provable as C (that is, in the real world, not provable at all.)

    Automatic translation of proven models is used in embedded software for critical stuff, e.g. antilock brakes, in protocol design and implementation, and such stuff.

  • There is another obit (a little longer) in the NY Times (here [nytimes.com]). It says that he was 84. In order to help forward the cause of hairsplitting: apparently he was born in April 1916. Even though 1916 is 85 years ago, he wasn't born until April, so he was still only 84.

    I am really sorry that I never got to meet him; he is certainly one of my heroes. His picture and Richard Feynman's are above my desk. I hope he makes it onto a postage stamp some day. I don't think a lot of CS people realize that without Shannon, they wouldn't know about Boolean algebra (well, this is a bit of an exaggeration), but he was the first one to really make the link between Boolean algebra and its use in circuits and then computers. He was actually in the MIT electrical engineering department, I believe, even though he was really a mathematician.
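
    For the hairsplitters, the arithmetic is quick to check. A small Python sketch, taking the April 30, 1916 birthdate from the obituaries and assuming a death date a few days before this story ran (late February 2001):

    from datetime import date

    born = date(1916, 4, 30)
    died = date(2001, 2, 24)   # assumed date, shortly before this story ran
    # Subtract one year if the birthday had not yet come around that year.
    age = died.year - born.year - ((died.month, died.day) < (born.month, born.day))
    print(age)   # 84 -- the 85th birthday would only have arrived in April 2001
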
  • The first is that you can't easily back up and punt; when you realize half-way through that the formal design which looked good on paper is eating resources or presents a poor user interface in practice, it is very hard to go back and improve things.

    Indeed! Proving user interfaces is unreasonable in the first place (what exactly are you proving, then?)

  • The idea is to translate automatically from model to code using robust, time-proven translators...Interesting properties are proven on the model.

    Sounds like you're talking about code generation, not verification. That's a different kettle of fish - essentially, you're picking a programming language that better matches your problem domain, so that the code becomes tractable enough to prove. A neat idea, but it requires that a model/language and code generator that are useful for your problem domain already exist.

    Perhaps you thought of, say, UML when you read "formal methods"?

    No, I was thinking of the proofs of "while" loops I sweated over back in school. B-)

    Tom Swiss | the infamous tms | http://www.infamous.net/

    Digital signals and Boolean calculus seem "obvious" to us now, but not so in the 1930s, when a binary switching tube cost dollars instead of today's micro-cents. Was digital a "pregnant idea" about to be discovered by anyone halfway smart, or would it take a really smart person to accelerate the discovery?

  • by Azog ( 20907 ) on Tuesday February 27, 2001 @06:27AM (#399220) Homepage
    Actually, LISP's funny interpretation is "Lots of Irritating Superfluous Parentheses".

    Check it out at the Jargon File:LISP [tuxedo.org]

    Somewhat more on topic, Shannon is mentioned directly in the Jargon file, where we find that [tuxedo.org]
    Claude Shannon first used the word "bit" in print in the computer science sense.

    Also, I just read the book "Crypto" by Steven Levy and was surprised to find out how critical Shannon's work on information theory was to early non-government crypto research. If it wasn't for his papers, it is likely that the US government would still have the only really strong cryptography in the world.

    He will be remembered.


    Torrey Hoffman (Azog)
  • The last paragraph is fantastic.
    Turing was interested in the problem of duplicating human intelligence. Thus, he aimed his theories at a high level of general abstraction, an "umbrella" which would cover his goal no matter what its eventual form might turn out to be. One result of this is that Computable Numbers is sometimes cited to support the concept of free will. Another is, as computers get more and more complex, Turing's take on the situation becomes more accurate and useful. His point was that even a fully deterministic machine can be beyond deterministic prediction. It's a shame he didn't live to see his vindication by Mandelbrot.
    So many buzzwords. Relating the halting problem, artificial intelligence and fractals. Too bad it doesn't mention chaos theory, non-linearity and P=NP... :-) It's a nice way to sum up an argument that one has absolutely no idea about.

    Remember that Turing's result works only for deterministic machines with infinite memory. In practice the halting problem is computable. Of course it doesn't mean it's practical to do so.

  • A character from a civilization 1,200 years advanced beyond ours is explaining to a contemporary person why they still have conflict. To paraphrase...

    ... it's because Greg Bear is a tedious hack who can't think of anything better. Eon has to be the worst SF book of the 80's. I did have some fun reading the worst bits out to friends but that was it.

    TWW

  • I was thinking of the proofs of "while" loops I sweated over back in school. B-)

    Did that too.. worthless as a tool but a nice introduction to more interesting stuff; it's a bit like the ubiquitous toy compiler project in CS studies: everyone does that someday and everyone buries the result and uses gcc instead past the deadline. :)

  • And it's actually readable if you have a really solid math background. (This isn't intended to be a vacuous comment: a lot of mathematical papers are only readable by specialists and not even by other mathematicians working in other areas.)

    IMO the really amazing gem in this paper is the noisy coding theorem. Shannon proved that if you have a communication channel that introduces errors (i.e. a noisy channel) then that channel always has a channel capacity. If you send data at a rate lower than the channel capacity then you can prove that there are coding schemes that result in an average probability of a bit error as small as you like (however, smaller error rates mean more complicated codes). Unfortunately Shannon's proof was nonconstructive. He essentially showed that if you transmit the data in large blocks and encode those blocks randomly you get a probability of a bit error that goes to zero as the block size increases. These codes do not require ever-increasing redundancy to achieve this.

    Unfortunately decoding a random encoding applied to a huge block of data is not computationally practical. A major thread in information and coding theory has been trying to find tractable codes that behave somewhat like Shannon's codes. It's a hard problem, but your modem works better because of it.

    Personally I like this result because it was so counter-intuitive: who else would have thought it might be possible, even theoretically, to construct codes that make the error rate approach zero without introducing ever more redundancy into the code?

    Shannon was a giant in engineering, computer science and mathematics.
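
    To make "channel capacity" concrete: for the simplest noisy channel, the binary symmetric channel that flips each bit independently with probability p, the capacity works out to C = 1 - H(p), where H is the binary entropy function. A small Python sketch of that formula (a standard result, offered here only as illustration):

    from math import log2

    def binary_entropy(p):
        # H(p) in bits; H(0) = H(1) = 0 by convention.
        if p in (0.0, 1.0):
            return 0.0
        return -p * log2(p) - (1 - p) * log2(1 - p)

    def bsc_capacity(p):
        # Capacity of a binary symmetric channel with crossover probability p.
        return 1.0 - binary_entropy(p)

    # Shannon's theorem: at any rate below this many bits per channel use,
    # codes exist whose error probability can be made as small as you like.
    for p in (0.01, 0.05, 0.11, 0.5):
        print(f"p = {p:0.2f}  capacity = {bsc_capacity(p):0.3f} bits per use")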

  • One thing the Times obit did not mention was Shannon's interest in juggling late in his life. He learned to ride a unicycle and juggle balls at 70+. I remember going to an international juggling convention in the '80s and seeing film of some of his juggling simulator machines -- very clever. Some of my juggler/computer jock friends were astonished that he was still alive and kicking back then, having only seen his seminal work from relatively early in computer history.
  • Just remember, whenever you see anti-homosexual, anti-geek, anti-(you name it) attitudes, remind yourself that it was that type of thinking which killed Alan Turing.

    People should be judged on their positive accomplishments alone.
  • LISP is NOT the lambda-calculus. LISP does not use complete normal order evaluation. LISP is just a programming language, which later started moving more towards advancements already made in computational theory. The original intent of LISP wasn't to make a purely functional programming language (read normal order evaluation language based on the lambda-calculus).

    This will probably upset people, but it is more fact than opinion: LISP was and is still a hack. That is why ML and Haskell were made. LISP is not the end-all-be-all functional programming language. It has flaws.

    While I see the historical value of LISP, I do believe that it has become obsolete and convoluted over the years. Any sophisticated programmer will see this within a week of using one of the latest and greatest FP languages, such as Haskell or Clean.
  • The moderators are obviously ignorant of the theory of computation, as set forth by Alan Turing. The parent post is so blatantly wrong that it makes me believe that the original poster didn't even read Turing's original work.

    Ok, moderators, if you don't understand something, DO NOT MOD THE POST!!!
  • From the article:

    These activities bear out Shannon's claim that he was more motivated by curiosity than usefulness. In his words "I just wondered how things were put together."

    Nowadays his curiosity could easily land him in trouble. Figuring out how things are put together seems to be a big no-no in the US today. Gee, you might discover all sorts of things big companies don't want you to know - like how crappy their encryption is, how stupid their blocking software, or how lame their security.

  • The rule is that they will not issue a patent for a perpetual motion machine without a working model.

    So everyone get out their hammer and saw and start building a machine that runs forever. You need more than just an "idea"... You actually have to be able to produce one and show it to them.

    They shouldn't grant a patent that guarantees compression by at least 1 bit without a working program, which should be run on itself once for each bit it has so that they can prove to us that every piece of data in the world can be reduced to 1 very random bit.
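
    The counting argument behind that skepticism fits in a few lines of Python: there are 2^n distinct files of n bits but only 2^n - 1 distinct files that are strictly shorter, so no lossless (invertible) compressor can shrink every input, let alone shrink inputs repeatedly down to one bit. A generic illustration, not aimed at any particular patent:

    def count_files(bits):
        # Number of distinct bit strings of exactly `bits` bits.
        return 2 ** bits

    def count_shorter(bits):
        # Number of distinct bit strings of length 0 through bits - 1.
        return sum(2 ** k for k in range(bits))   # equals 2**bits - 1

    n = 16
    print(count_files(n), "possible inputs of", n, "bits, but only",
          count_shorter(n), "possible shorter outputs")
    # By the pigeonhole principle at least two inputs would have to share an
    # output, so the "compressor" could not be losslessly reversed.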

  • As the man who single-handedly turned cryptography into a science, Claude E. Shannon is in many ways responsible for the paranoid nature of today's society.

    [further rantings munched]

    (I know this is a troll, but denigrating the memory of a well-respected scientist is cowardly and inexcusable.)

    Your post is unadulterated nonsense. Americans have had a paranoid culture all the way back to before the Founding. Like it or not, a lot of our core national makeup stems from Puritan attitudes. One of those attitudes was paranoia (Salem witch trials, anyone?).

    Your assertion that people were open and honest before the 20th century is ludicrous on the face of it. History is chock full of betrayal, double-dealing, and all the other fun stuff that wackos and trolls such as yourself claim sets us apart from the march of ages. Besides, it's not as though Dr. Shannon single-handedly invented cryptography. Or have you never heard of rot13? It's the canonical example of what's known as a Caesar cipher (yes, that Caesar - as in Julius Caesar, of ancient Rome). People have been trying to hide information for centuries, and as technology and knowledge have expanded, so have methods for information hiding and information discovery.


  • The work of Turing and Church proved that there does not exist a general procedure to determine if an arbitrary piece of software terminates (and by extension, that many other properties are similarly undecidable). Did you ever study computer science?

    They did NOT show that one cannot "prove" software. In addition, the notion of "proving" software is nonsense unless you say what you are proving about the software. Can I prove that every well-formed C program has a main? Of course, it's in the definition. Can I prove that an arbitrary well-typed Standard ML program has a well-defined evaluation sequence (that it is "safe")? Yes. Can I prove that an arbitrary program in the simply typed lambda calculus always terminates? Yes. There are many extremely useful properties of programs that actually can be proved mechanically.

    Furthermore, just because a procedure can't do it in general doesn't mean that a human can't prove specific programs, perhaps with the help of a computer. Look at C. A. R. Hoare's logic for proving iterative program correctness, for instance.
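
    As a small illustration of that Hoare-style reasoning (a textbook example in Python, not Hoare's notation): annotate a loop with an invariant that, combined with the exit condition, yields the postcondition.

    def sum_to(n):
        # Precondition: n >= 0.
        assert n >= 0
        i, total = 0, 0
        while i < n:
            i += 1
            total += i
            # Loop invariant, checked at run time: total == 1 + 2 + ... + i.
            assert total == i * (i + 1) // 2
        # Exit condition (i == n) plus the invariant give the postcondition.
        assert total == n * (n + 1) // 2
        return total

    print(sum_to(10))   # 55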

  • Because it links information transmission and retrieval to the noise, which is a result of temperature, you can ultimately prove that a material link is always needed to transmit information. Transmission of information without a material channel, i.e. by "spiritual" means, would violate the second law of thermodynamics. The outcome: we would perceive time as a bi-directional dimension. There are some "Maxwell's daemons" whose impossibility can only be proved by the use of information theory.
  • Further, his work on information entropy is fundamental. Entropy is like a dual to the Gaussian (normal, bell-curved) distribution, which I hope everyone has at least heard of. Entropy measures are used in many machine learning algorithms, which makes them useful in AI, data mining, etc.

    --
    Marc A. Lepage (aka SEGV)
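
    A small sketch of how that entropy measure turns up in machine learning, e.g. the information gain used to choose decision-tree splits (the labels and the candidate split below are made up for illustration):

    from math import log2
    from collections import Counter

    def entropy(labels):
        # Shannon entropy, in bits, of the empirical label distribution.
        n = len(labels)
        return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

    def information_gain(parent, left, right):
        # Reduction in entropy from splitting `parent` into `left` and `right`.
        n = len(parent)
        weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
        return entropy(parent) - weighted

    parent = ["spam"] * 5 + ["ham"] * 5
    left = ["spam"] * 4 + ["ham"]          # one side of a candidate split
    right = ["spam"] + ["ham"] * 4         # the other side
    print(entropy(parent), information_gain(parent, left, right))
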
  • He once almost beat top Soviet player Botvinnik (IIRC; it might have been another famous Soviet); he was up the exchange with a win in sight, but then made some silly blunder. The interview I read this from was an old OMNI, with Shannon & his wife.

    Of course, I get the impression that at that point, Botvinnik used tricky smoke & mirrors tactics against Shannon, but I still imagine that Shannon was honest enough that he wasn't embellishing his advantage so much.
  • This notion that people should be judged by their "social way", whatever that may be, always pops up when scientists are mentioned. Also, obituaries on Slashdot always have a couple of "we should not be sad" trolls. Funny once, man, not funny twice.
  • What did Shannon do for humanity?
    Shannon worked on crypto during WWII on the American side. He spent time with Alan Turing exchanging ideas. Just from that he made a significant contribution to the world.

    He was a great scientist and an interesting man, let people be sad about the loss.
  • I must put together a brief crypto FAQ for slashdotters one day. Summary: QC is interesting but way overhyped.
    --
  • >Come on, he's just a computer figure. Being a computer engineering student myself,
    >and holding several figures in the computer world in very high regard,
    >I have to say I will feel sad for the death of people who have advanced
    >humanity in a more social way, perhaps in charity or peace work

    So you're saying his death shouldn't be a sad event because "he's just a computer figure". Maybe you're just jealous that your own death won't even be 1% as sad as his.

    No. The trickle-down effects aren't BS. Were it not for his theories the Nazis might have won WWII. Figure that.
  • No. The trickle-down effects aren't BS. Were it not for his theories the Nazis might have won WWII. Figure that.

    Wow... Talk about far-fetched...

  • As I understand it (though I could be off on this), the patent office doesn't strictly require that a device or process do what it is claimed to do to give you an exclusive on it. They stopped requiring that long ago - about the time they stopped requiring you to deliver a working model (for lack of storage space).

    If you want to pay a bunch of money and effort to get a 17-year exclusive on building something new that doesn't work, fine with them.

    So a patent is not an endorsement by the US government of your scientific or engineering claims.

    An exception is perpetual motion machines. There were SO many applications wasting SO much of their time that they still require you to deliver a working model. THAT put a big spike in the fad, though people still try.

    One fellow had trouble patenting a very efficient still. It was very tall to use gravity to create a decent vacuum at the top (like the original water version of a mercury barometer), and acted as a counter-current heat exchanger, scavenging heat from the condensate and brine to preheat the raw water feed. You still had to add the heat of solution and some more to deal with losses, but that's a LOT less than boiling off the whole output stream at local atmospheric pressure.

    They initially refused his application, thinking it was a perpetual motion machine, and he had to argue long and hard to get the patent.
  • The work of Turing and Church proved that there does not exist a general procedure to determine if an arbitrary piece of software terminates (and by extension, that many other properties are similarly undecidable). Did you ever study computer science?

    Kee-rect, and "yes."

    They did NOT show that one cannot "prove" software.

    Actually, what they showed is that software exists that cannot be proven. The fact that software exists which can be proven is irrelevant if the software you need isn't in that set.

    In addition, the notion of "proving" software is nonsense unless you say what you are proving about the software.

    Well, the original post that started this mentioned an operating system, so I'd say the notion of "proof" being bandied about here involves not crashing and behaving predictably. Ultimately IMHO it is an exact analog of Turing's stopping problem, which is why I brought it up. Many algorithms cannot be predicted in any sense by anything simpler than themselves. The only way to "prove" them is to run them and observe their behavior. Of course this proof is inductive rather than deductive; but Turing's side observation was that this unprovability is a characteristic of living things, and the fact that a deterministic system could exhibit such behavior meant it might be possible to emulate life.

    As hardware becomes faster and cheaper and software more complex and abstract, it will be more unprovable. There is a reason computers crash more nowadays than they did Back When.

    There may be

    many extremely useful properties of programs that actually can be proved mechanically

    but is this class growing? I'd say not. Back in the 80's the DOD ran a project to build a fully proven multitasking CPU core; even with all their resources they failed. The performance issue was too much even for them to ignore.

    Today when we talk about "useful" software we are often talking about threaded, interrupt-driven stuff riding on top of a very highly unproven operating system. Do you think that one day you will be able to buy a "fully proven" equiv for your PC of choice, and find "fully proven" software to run on it? Of course not. The economics and interest are not there because the reality is that proving software is expensive and futile.

    In Turing's day, it was commonly thought that any complex system could be emulated, modeled, and predicted. Turing, Church, Godel, and Mandelbrot have all contributed to putting that idea 6 feet under. People who do real work with real tools in the real world now know that many real applications simply cannot be treated this way. People who think otherwise are living in fantasyland.

  • dude how is it off topic? one article said 84, and the other said 85. i mean, that shouldn't be too hard to relay. apply some thought next time.
  • by Ungrounded Lightning ( 62228 ) on Tuesday February 27, 2001 @09:45AM (#399245) Journal
    ... it is possible to prove some software, but this is not true of most software on a useful scale.

    Actually I claim that this isn't true. It is possible to prove that two very different expressions of a problem are EQUIVALENT, but that's a very different thing from proving either of them is the CORRECT solution for a given problem.

    This is not as bad as it sounds: The most effective way known to notice bugs is to express the problem in two very different ways and compare them. Primary example: The spec versus the code. (That's why the spec is IMPORTANT. "Self-documenting code" is a myth. When you go to verify it and the only spec is the code itself, the only thing you can check is that the compiler works right.)

    Strongly OO languages like C++ which dereference everything eat processor cycles. It may not be apparent why this is a problem until you try to implement a 6-level interrupt system on an 80186 (as one company I know of did)

    For most OO languages this is true. But for C++ it is not. You can get away without the dereferencing if you need to. (You COULD even fall back into a style that amounts to compiling essentially ANSI C via a C++ compiler.)

    But to get away without the dereferences you also have to abandon the things that need them - like polymorphism. This doesn't mean you have to abandon all the features. For instance, you can define a class hierarchy so you don't get polymorphic support unless you need it, and specify that you don't use it on particular function calls - at the risk of violating encapsulation checking. (Overloaded operators are very handy and often don't need to dereference anything.) But you're starting to get into the "why bother" area.

    The big problem is that to do this efficiently you need somebody who:
    A) Knows how to program well in C, including how to get efficient compiler output in the tight spots.
    B) Knows how to program well in OOP style.
    C) Knows HOW the C++ compiler achieves its effects, so he knows how to get it to do things efficiently.

    And now you're talking about a VERY rare and expensive person. A is years of training. Adding B to someone who has A is typically 6 months minimum of learning curve just to start being productive again. (Starting with B makes it hard to get really good at A, while starting with A puts up mental roadblocks when you're relearning how to do things B style.) Most procedural programmers who learn OOP stop there.

    But adding C is much like repeating the learning techniques of the pioneering days of computer education. Everybody learned assembly and how pre-optimizer compilers worked, because they were expected to WRITE their own compiler some day. In the process they learned how to write source code that would trick a non-optimizing compiler into putting out good assembly. Technique C consists of doing much the same thing to a C++ compiler to get the equivalent of tight C out of it.

    So if you happen to have a small team of people who already HAVE all this knowledge, they might make a C++ interrupt handler that is as efficient, or nearly so, as one coded in C or even assembler. But why not just do it in C, assembler, or some hybrid in the first place? Then people with only skill set A can understand it and work on it. B-)
  • by Fjord ( 99230 ) on Tuesday February 27, 2001 @09:47AM (#399246) Homepage Journal
    The real problem is that in order to test if a program does what it is supposed to do, you must express what the program is supposed to do. A correct program is an isomorphism to the expression of what it is supposed to do. You can therefore look at it two ways: to express the requirements of the program is as much work as writing the program, or (a better way of looking at it) writing the program is how you express what the program is supposed to do.
  • How about this answer: Every subset of the sequence is independent of every other disjoint subset of the sequence. (Or, knowing any particular subset of the sequence gives you no information about the rest of the sequence, whether from the past, the future, or both).


    Excellent. And what can we conclude about the length of a program that is capable of producing a sequence of a given length n?
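
    One practical way to see where that riddle is headed: a sequence is "random" in this sense when no program much shorter than the sequence itself can produce it, so a general-purpose compressor gains essentially nothing on it. A rough Python illustration (zlib is standing in for "short description" here, which is suggestive rather than a proof):

    import os
    import zlib

    structured = b"ab" * 50000        # a two-line program generates this
    random_ish = os.urandom(100000)   # no structure a compressor can exploit

    for name, data in (("structured", structured), ("random", random_ish)):
        packed = zlib.compress(data, 9)
        print(name, len(data), "->", len(packed), "bytes")
    # The structured sequence shrinks to a few hundred bytes; the random one
    # barely shrinks at all (it may even grow slightly) -- its shortest
    # description is roughly as long as the sequence itself.
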
  • What Shannon did for science helped convince me to stick with a lot of my theoretical work. He was a truly inspiring and committed person and his death greatly saddens me.

    Thank you Claude Shannon for all you did.

  • I saw this, and was reminded of a Knuth quote (I believe from TAOCP):

    Beware of bugs in the above code; I have only proved it correct, not tried it.

    I think that says it all. ;-)
  • Yes, it has been a sad week. First, the cricket world lost "The Don" [cricket.org], and now computer science has lost Claude E. Shannon. Both were greats in their respective fields and will be missed.
    --
  • If you follow the link to Shannon's 'A Mathematical Theory of Communication' posted previously in this article, you will see that on the front page Shannon writes:

    The choice of a logarithmic base corresponds to the choice of a unit for measuring information. If the base 2 is used the resulting unit may be called binary digits, or more briefly bits, a word suggested by J. W. Tukey.
    Dunno who the hell he was tho'...

    .flip.

  • (alongside the original Dr. Strangelove himself, the truly sinister Edward Teller).
    Not necessarily true; Wernher von Braun and Herman Kahn of the Rand Corporation are the more likely inspirations for the character. It's particularly unfair to compare Strangelove (a German and ex-Nazi) to Teller, who is Hungarian, and who fled the Nazi invasion of his homeland.

    More on this subject here [visual-memory.co.uk].

  • I told the sad news to my Info/Comm engineering prof this morning. He's been working in communications for about three decades now himself, and was saddened to hear it.

    He said that Shannon had shown up at a conference just a few years ago. Word quickly spread around the conference floor that Shannon was there. Finally someone was brave enough to break the ice and talk to him, and instantly a whole crowd was surrounding Dr. Shannon, just to meet him or say hi. "Like a rock star," my professor said.

    Claude Shannon may well be the most influential mathematician no one has heard of.

  • i first came across shannon in a basic semiconductor class when designing my first digital circuit. then i came across him in a bioinformatics class when talking about information entropy in genomic DNAs. and then again while reading recreational computer books, and then in statistical field theory, and then again... and again... i always thought he was one of the giants of modern science, whose work, though seemingly confined to the field of information theory, actually reached across disciplines and influenced them all in subtle but notable ways. i pay tribute here.
  • The man was one of the real guiding geniuses of the whole bloody century. My analogy is a gent named Enders. Nobody outside medicine and biology has heard of him. He showed how to grow the polio virus; without that, no vaccines.
  • Geez, bummer.

    Since I work in telecom, Shannon's Law is close to my work, and I'm saddened by his death.

    -RSH

  • J. W. Cooley and J. W. Tukey and the Fast Fourier Transform - ring a bell yet?

  • How is entropy the dual to the Gaussian distribution? Entropy is defined for any probability distribution, not just Gaussian.
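
    One standard sense in which the two are linked (my reading, not necessarily what the parent poster meant): among all distributions with a given variance, the Gaussian has the maximum differential entropy. A quick numerical spot-check in Python against a uniform distribution of the same variance:

    from math import e, log, pi, sqrt

    sigma = 2.0   # common standard deviation for both distributions

    # Differential entropy (in nats) of a Gaussian with variance sigma**2.
    h_gauss = 0.5 * log(2 * pi * e * sigma ** 2)

    # A uniform distribution of width w has variance w**2 / 12, so matching
    # the variance means w = sigma * sqrt(12); its entropy is log(w).
    h_uniform = log(sigma * sqrt(12))

    print(h_gauss, h_uniform, h_gauss > h_uniform)   # the Gaussian comes out higher
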
  • Given that this is an article about the death of a famous mathematician, I find it ironic that the headline reads "Shannon dead at 85" but the linked article reads "Shannon dead at 84". We could at least get the number of years in his age correct!!!!
  • the math/c.s./engineering professor
    unicycle juggling tradition migrated westward
    where cal berkeley's elwyn r. berlekamp
    was doing four hatchets riding a unicycle
    in cory hall.

    berlekamp co-authored with shannon
    as well as with ron graham of bell labs,
    an inveterate 7-ball artiste also with
    low erdos/shannon number.

    to connect these triple-threat propeller-heads
    back to open source, it is of minor note that
    e.r.b. was unix co-inventor ken thompson's
    master's project adviser.

    further background at http://www.math.berkeley.edu/~berlek/
  • There may be "many extremely useful properties of programs that actually can be proved mechanically " but is this class growing? I'd say not. Back in the 80's the DOD ran a project to build a fully proven multitasking CPU core; even with all their resources they failed. The performance issue was too much even for them to ignore.

    I worked on part of that project in the early 80's (the Euclid provable programming language). One of the first things we discovered was that proving algorithmic code was fairly straightforward within a function because the semantics of while, if-then-else, etc. are fairly well understood. Value semantics was harder (that assignments between variables preserved invariants). But it was really difficult to prove call/return semantics, especially if there was call-by-reference. One had to assume that the subroutine preserved value semantics and that any routines it called did the same down to machine-level instructions. Doing that for every API element meant that the complexity of proofs was overwhelming. And this was in a programming language designed to aid in program proofs. Imagine doing it in C++.

    But not having call-by-reference was the killer. It is rather disastrous for an OS implementation to never have pointers but have to copy the whole data structure to every subroutine:-).

  • You must be ignorant of: strongly normal evaluation strategies such as lazy evaluation, pure FP, monadic programming, simplicity, and all of the other things that standard Haskell has over standard LISP (yeah, there's no such thing as standard LISP, but the commonly used variants of LISP).

    Haskell is a purely functional programming language, while LISP is not. Haskell code can be far more terse than LISP (because of pattern matching and algebraic data types and lambda notation), so your Pascal-to-C analogy fails. Haskell is, without a doubt, superior to LISP in the area of functional programming.
  • Ugggggghhhhh - HORRIBLE web site. I use IE5.5 (I need Windows for dev work) and it has some sort of Javascript to maximise the window - full screen, no taskbar, no control buttons, etc. - so you either have to wait for the site to load or kill the browser. Yuk.

"Gotcha, you snot-necked weenies!" -- Post Bros. Comics

Working...