Claude E. Shannon Dead at 85
Multics writes "It is with considerable sadness I report the death of the parent of Information Theory -- Claude Shannon. Here is his obituary over at Bell Labs. He was 85."
"Gotcha, you snot-necked weenies!" -- Post Bros. Comics
Hmm (Score:2)
--
Shannon not so smart... (Score:3)
For the mathematically minded... (Score:4)
Talking of patents... (Score:1)
Geek Traditions (Score:3)
And we wonder where people get the ideas to do things like this.
Obviously this sort of thing has a long standing tradition.
Changing an "art" into a science (Score:2)
One quote from the obituary interested me:
"Another example is Shannon's 1949 paper entitled Communication Theory of Secrecy Systems. This work is now generally credited with transforming cryptography from an art to a science."
By providing a firm mathematical basis Shannon managed to turn cryptography from a messy discipline without any real means of proof into a real science where mathematical methods could be used to ensure the security of cryptographic systems. Today, we wouldn't dream of thinking of cryptography as an art, it is well-established as a branch of applied mathematics.
But despite similar techniques being available for programming, coders still stick to their "tried and tested" methods of programming which owe less to science and more to guesswork. Rather than using formal techniques to ensure that code is 100% correct, they would rather just sit down and knock something out that sort of works. And this has led to the proliferation of buggy and insecure software we see today.
Why is it that programmers feel that they are somehow above the need for these techniques? It seems to me to be pretty arrogant to avoid using tools that would make their code better, almost as if they would rather not have perfect code. They would sneer at a cryptographic system that hadn't been rigorously proven, but be happy with any old hack that another coder knocks out whilst eating cold pizza and drinking Mountain Dew.
Why is there such a reluctance to produce properly proven code, when they demand it for other algorithmic systems?
Re:Changing an "art" into a science (Score:2)
Cryptography is based on mathematics, so it only makes sense that it relies on mathematical methods.
Human logic and trains of thought can't be reduced to mathematical equations.
Do you even program at all?
NB. When you get round to writing the formula that will knock out an OS in perfect time, first off with no bugs, I will send you a check for everything I own. Until then shove your head up your troll ass
Chess (Score:4)
The IBM Deep Blue site mentions him: http://www.research.ibm.com/deepblue/meet/html/d.3.3a.html [ibm.com]
Re:Changing an "art" into a science (Score:2)
I agree! Why do stupid humans try to program the computer? Furthermore, I also think hardware engineers should give up - after all, all this hardware could be designed perfectly and automatically by machines.
Er, wait a minute...
Re:Shannon not so smart... (Score:2)
NB - Related Pages here [gailly.net] and here [gailly.net]
Well, the major difficulty I see here is that they will have subtle problems trying to make it commercially viable, since they are robbing Peter to pay Paul, so to speak. They would never really get off the ground. They would have major hassles licensing it to a major player, for example.
Moral/legal considerations aside.
Re:For the mathematically minded... (Score:1)
Claude (Score:2)
Re:Changing an "art" into a science (Score:1)
--
You cannot "prove" software (Score:4)
Because these techniques don't exist. This was proven by Alan Turing in his paper On Computable Numbers. The only way to "prove" a piece of software is to run it.
Oh, you're probably thinking of things like OO and top-down and all those gimmicks they teach in CS courses. Well, sometimes those techniques help you write better code (at a cost) and sometimes they make you write worse code (because of the costs).
Top-down, for example, makes sense when you have 100 programmers doing the software for a bank. It makes no sense at all and results in inferior code and user interfaces when you have 1 or 2 guys writing code for a PLC.
A line pops into my head from Greg Bear's book Eon. A character from a civilization 1,200 years advanced beyond ours is explaining to a contemporary person why they still have conflict. To paraphrase, there are limited resources and even very wise and educated people will inevitably come to have different ideas about how to solve problems. Each will want to use the limited resources in his way, because he believes it is the best way. Thus there is conflict.
This is why the language, OS, and technique wars will always be with us. What works in a palm does not work on a desktop, neither works in a PLC, and none of those things works for an enterprise server. And human beings are still subtle and complex enough compared to machines that attacking a problem of the right scale as an "art" will produce superior results.
hall of famer? (Score:1)
Old scientists never die... (Score:5)
Crypto isn't that way at all! (Score:2)
For the most part, mathematical methods do not *ensure* the security of our systems. We have no proof that, say, AES is secure; all we know is that the world's greatest cryptanalysts have attacked it as hard as they can, and come to the conclusion that there won't be a practical break for a long time to come. In theory, it could be heinously broken tomorrow. We don't have a proof that it won't be, just decades of experience in attacking cryptosystems and finding out what makes them weak in practice.
Cipher design is a mixture of a science and an art. There's certainly artistry in the design of AES!
--
Someone noticed (Score:1)
Does this mean that research into perpetual motion would be a way of cheating death for the scientists involved?
Re:Shannon not so smart... (Score:2)
Re:Changing an "art" into a science (Score:1)
You don't program, do you?
Programming is usually an iterative process - write a bit, test it, change the spec a little, write some more, and so on. This is before the user even gets to see it. Then it goes through another round of the above.
Re:hall of famer? (Score:2)
http://www.computerhalloffame.org [computerhalloffame.org].
Beware! It's very flash-intensive.
---
(L) to the (I) to the (S) to the (P) (Score:1)
Re:You cannot "prove" software (Score:2)
There's a huge difference. For many problems you can build a solution that is provably right. On the other hand, given a "random" program you generally can't prove it; i.e., there is no systematic method to "prove" arbitrary software, but there are methods to turn provable models and descriptions into programs that preserve the same properties. That's what formal languages are about.
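To give one trivial flavour of what "a property the program preserves" can mean (my own toy, in Haskell, and not necessarily what the parent had in mind): a non-empty list type makes "take the first element" impossible to get wrong, because the property is carried by the type rather than re-checked at run time.

    -- A NonEmpty list cannot be empty by construction, so safeHead can
    -- never fail; the "proof" lives in the type, not in a run-time check.
    data NonEmpty a = a :| [a]

    safeHead :: NonEmpty a -> a
    safeHead (x :| _) = x

    main :: IO ()
    main = print (safeHead (1 :| [2, 3 :: Int]))   -- prints 1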
Re:For the mathematically minded... (Score:4)
Overstatement of the Year award (Score:3)
While Shannon's contribution to crypto was large (like his contributions in other fields), this is an overstatement which greatly distorts Shannon's own motivations.
Shannon's article was the capstone on a body of work which was mostly created during World War II by people like Alan Turing and John von Neumann, as well as dozens of equally important but less famous people who worked at places like Bletchley Park.
If you want a "sinister figure" why not pick on John von Neumann, who instead of riding his unicycle in the corridors was working hard to make the H-bomb a reality (alongside the original Dr. Strangelove himself, the truly sinister Edward Teller).
LISP is an acronym... (Score:2)
Re:For the mathematically minded... (Score:1)
Actually, you can (Score:1)
He was only 84, actually (Score:2)
Don Bradman (Score:1)
Story [sify.com]
New York Times Obit (Score:2)
There is a word for "provable" software... (Score:3)
Formal methods allow you to reduce problems larger than you can hold in your head to "trivial" status, or to distribute them across multiple programmers without affecting their triviality. When the scale of the problem and the resources available to solve it are of the right size, this makes such methods appropriate.
Formal methods have two major costs. The first is that you can't easily back up and punt; when you realize half-way through that the formal design which looked good on paper is eating resources or presents a poor user interface in practice, it is very hard to go back and improve things. The second is that they increase resource requirements across the board, because you cannot apply creative solutions which become apparent midstream to streamline the project.
Sometimes the need for a formal method is due to the use of another formal method. For example, if you use top-down (appropriate for a large project) you are forced into all-up testing, which means you need strongly typed languages, etc. or you will be debugging forever. You can program bottom-up and prove each module as a trivial exercise as you build to higher and higher complexity, but your final project (while efficient and bug-free) may not resemble the formal spec much. This may not be a problem for a game; it may be a big problem for a bank.
Strongly OO languages like C++ which dereference everything eat processor cycles. It may not be apparent why this is a problem until you try to implement a 6-level interrupt system on an 80186 (as one company I know of did).
So yes, it is possible to prove some software, but this is not true of most software on a useful scale. When dealing with interrupts or a network or multithreaded or multiuser anything, there are so many sources of potential error that the formal system will bog down. You simply cannot check everything. You end up with either your intuition or no project.
Turing was interested in the problem of duplicating human intelligence. Thus, he aimed his theories at a high level of general abstraction, an "umbrella" which would cover his goal no matter what its eventual form might turn out to be. One result of this is that Computable Numbers is sometimes cited to support the concept of free will. Another is that, as computers get more and more complex, Turing's take on the situation becomes more accurate and useful. His point was that even a fully deterministic machine can be beyond deterministic prediction. It's a shame he didn't live to see his vindication by Mandelbrot.
Re:He was only 84, actually (Score:1)
--Multics
Re:You not so smart... (Score:1)
Or one petabyte by one bit, on average, for that matter.
Re:Changing an "art" into a science (Score:1)
> that code is 100% correct, they would rather
> just sit down and knock something out that sort
> of works. And this has led to the proliferation
> of buggy and insecure software...
...and multibillion-dollar corporations. I wish I had knocked out some buggy, insecure code that made me a billionaire in an overnight IPO.
Re:Changing an "art" into a science (Score:2)
Correctness proofs were really big when I was starting to get seriously interested in comp sci in the early 80s. Back then one of the motivating factors (at least in funding) was the perceived need to design systems of unprecedented size and complexity. There was considerable talk about space-based ballistic missile defense systems. People realized early on that this was going to rely on software systems that had to be more complex and better-performing than anything we'd ever built yet.
I have limited faith in the ability of proofs to show that entire systems such as SDI will work, since the proofs are themselves likely to be incomprehensibly bulky and the requirements documents unreliable guides to real-world needs. I do think proof has some merit in improving the quality of system components. But in realistic development projects you put the effort where the greatest benefit is derived, for example in better understanding user requirements and input characteristics. I'd guess that energetic maintainability reviews would catch the vast majority of mistakes that mathematical proof would. True, this is reliance on art -- but mathematics itself is an art.
My personal take-home lesson as a programmer from the mathematical proof movement was to look at control flow within a function differently. Instead of looking at a section of a function, such as a loop, in minute detail, I tend to think now in terms of how the situation has changed when the loop exits from when the loop was first entered. I view code in larger chunks than individual statements. In a way, I think about sections of a function the way I'd analyze a recursive function. This is not at all the way people are taught to program, which is to parallel the operation of the computer in their minds.
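For what it's worth, here is a tiny Haskell sketch of that way of thinking (my own example, not from the proof literature): instead of tracing each iteration, you state what must hold when the recursion has finished and check that each equation preserves it.

    -- Reason about the whole "loop" at once: the comment on the second
    -- equation states what is true on exit, not what each step does.
    sumTo :: Int -> Int
    sumTo 0 = 0                     -- exit fact: the sum of nothing is 0
    sumTo n = n + sumTo (n - 1)     -- exit fact: sumTo n == n*(n+1) `div` 2

    main :: IO ()
    main = print (sumTo 10)         -- prints 55, i.e. 10*11 `div` 2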
Getting back on topic, I was surprised to see this article on Slashdot, because even though I've been programming for a long time, I had supposed Shannon to be long departed because of his incredible papers of the 40s -- I tend to put him up with Bohr and Turing and other giants of early 20th century science. I don't work with communications theory, so my work doesn't explicitly and technically rely upon his. But over the years some of Shannon's insights have been like spiritual guides for me, for example the need for a common frame of reference when communicating with somebody. The Shannon coding theorem taught me about the marginal effort required for perfection vs. practically sufficient performance.
Which, in a way, brings me back to my attitude on the off topic topic of correctness proofs.
Re:You cannot "prove" software (Score:2)
Problem is, in general, for any software complex enough to be useful, proving it correct is just as error-prone as writing code (if it is even possible at all). Maybe more so - you have to prove that the code matches the mathematical model, then demonstrate correctness within the model, so there are two very distinct areas for error.
To ensure that the proof is correct, you have to use many of the same techniques that you could just apply to the code.
Tom Swiss | the infamous tms | http://www.infamous.net/
Re:You not so smart... (Score:2)
True, but irrelevant. Intelligence has limited practical application. Sometimes you have to be ignorant of what cannot be done so that you can accomplish the impossible. I don't know much, but everything I do know I learned from reading the motivational posters put up by my employer.
How do you compress a file of size 1 byte by 1 byte?
Ah, that is part of my proprietary, patent-pending method. Did I mention that my algorithm is not guaranteed to halt? (Note that by the law of the excluded middle, anything that cannot be proved false must be true.)
By the way grasshopper -- a sequence doesn't have to be one bit long to be irreducible (by means other than my patent pending non halting algorithm of infinite size of course). Consider this riddle -- what is the definition of a random sequence?
Re:Geek Traditions (Score:1)
I learned to rollerblade between the (softly-padded) cube walls of Lucent
8-)
Re:hall of famer? (Score:1)
Re:You cannot "prove" software (Score:1)
You then run into the problem of proving the specification is correct...
Re:He was only 84, actually (Score:1)
--
Re:hall of famer? (Score:1)
Re:So what? (Score:2)
Shannon's work provided the groundwork for programs you use all the time, whether you know it or not!
Data compression
Huffman's compression algorithm reads like a direct corollary of Shannon's work (a quick sketch follows at the end of this comment).
Error Correction
Like your data to arrive in one piece? No corruption? Thank Shannon for providing the groundwork for efficient error correction.
Encryption
Without Shannon's take on information theory, encryption would be the same hunt-and-peck game that was played during WW2.
Have a little respect for something you couldn't do.
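Since Huffman came up: here is a rough Haskell sketch (mine, not Huffman's own presentation) of the idea -- repeatedly merge the two least frequent symbols, and the frequent symbols end up with the shortest codes, which is exactly the behaviour Shannon's entropy bound says an efficient code must have.

    import Data.List (group, sort, sortBy)
    import Data.Ord (comparing)

    -- A Huffman tree: leaves hold a symbol and its count, nodes hold
    -- the combined weight of their subtrees.
    data HTree = Leaf Char Int | Node HTree HTree Int

    weight :: HTree -> Int
    weight (Leaf _ w)   = w
    weight (Node _ _ w) = w

    -- Repeatedly merge the two lightest trees until one remains.
    build :: [HTree] -> HTree
    build [t] = t
    build ts  = build (Node a b (weight a + weight b) : rest)
      where (a:b:rest) = sortBy (comparing weight) ts

    -- Walking left appends a '0', walking right a '1'.
    codes :: HTree -> [(Char, String)]
    codes (Leaf c _)   = [(c, "")]
    codes (Node l r _) = [(c, '0':s) | (c, s) <- codes l]
                      ++ [(c, '1':s) | (c, s) <- codes r]

    main :: IO ()
    main = mapM_ print (codes tree)
      where
        freqs = [(head g, length g) | g <- group (sort "abracadabra")]
        tree  = build [Leaf c w | (c, w) <- freqs]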
Re:You cannot "prove" software (Score:1)
Not. The idea is to translate automatically from model to code using robust, time-proven translators (in the same way that one can more or less trust a non-Red Hat gcc -O2 to translate C code correctly, at least on x86; or in the way you trust the CPU to accurately execute the machine-language instructions you feed it). Interesting properties are proven on the model.
Perhaps you thought of, say, UML when you read "formal methods"? That's where the disagreement comes from. UML is not a proving tool, it's a design helper, roughly as provable as C (that is, in the real world, not provable at all).
Automatic translation of proven models is used in embedded software for critical stuff, e.g. antilock brakes, in protocol design and implementation, and such stuff.
Another obit says died at 84 (Score:2)
I am really sorry that I never got to meet him; he is certainly one of my heroes. His picture and Richard Feynman's are above my desk. I hope he makes it onto a postage stamp some day. I don't think a lot of CS people realize that without Shannon, they wouldn't know about Boolean algebra (well, this is a bit of an exaggeration), but he was the first one to really make the link between Boolean algebra and its use in circuits and then computers. He was actually in the MIT electrical engineering department, I believe, even though he was really a mathematician.
Re:There is a word for "provable" software... (Score:1)
Indeed! Proving user interfaces is unreasonable in the first place (what exactly are you proving then?)
Re:You cannot "prove" software (Score:2)
Sounds like you're talking about code generation, not verification. That's a different kettle of fish - essentially, you're picking a programming language that better matches your problem domain, so that the code becomes tractable enough to prove. A neat idea, but it requires that a model/language and code generator that are useful for your problem domain already exist.
No, I was thinking of the proofs of "while" loops I sweated over back in school. B-)
Tom Swiss | the infamous tms | http://www.infamous.net/
pregnant ideas or smart men? (Score:2)
Digital seems obvious to us now, but not so in the 1930s, when a binary switching tube cost dollars instead of today's micro-cents.
Was digital a "pregnant idea" about to be discovered by anyone halfway smart, or would it take a really smart person to accelerate the discovery?
Re:LISP is an acronym... (Score:3)
Check it out at the Jargon File:LISP [tuxedo.org]
Somewhat more on topic, Shannon is mentioned directly in the Jargon file, where we find that [tuxedo.org]
Claude Shannon first used the word "bit" in print in the computer science sense.
Also, I just read the book "Crypto" by Steven Levy and was surprised to find out how critical Shannon's work on information theory was to early non-government crypto research. If it wasn't for his papers, it is likely that the US government would still have the only really strong cryptography in the world.
He will be remembered.
Torrey Hoffman (Azog)
New York Times obit ... (Score:2)
Re:There is a word for "provable" software... (Score:1)
Remember that Turing's result works only for deterministic machines with infinite memory. In practice the halting problem is computable. Of course, that doesn't mean it's practical.
Re:You cannot "prove" software OT (Score:1)
A character from a civilization 1,200 years advanced beyond ours is explaining to a contemporary person why they still have conflict. To paraphrase...
... it's because Greg Bear is a tedious hack who can't think of anything better. Eon has to be the worst SF book of the 80's. I did have some fun reading the worst bits out to friends but that was it.
TWW
Re:You cannot "prove" software (Score:1)
Did that too... worthless as a tool, but a nice introduction to more interesting stuff; it's a bit like the ubiquitous toy compiler project in CS studies: everyone does it someday, and everyone buries the result and uses gcc instead once the deadline passes. :)
Re:For the mathematically minded... (Score:2)
And it's actually readable if you have a really solid math background. (This isn't intended to be a vacuous comment: a lot of mathematical papers are only readable by specialists and not even by other mathematicians working in other areas.)
IMO the really amazing gem in this paper is the noisy coding theorem. Shannon proved that if you have a communication channel that introduces errors (i.e. a noisy channel) then that channel always has a channel capacity. If you send data at a rate lower than the channel capacity, then you can prove that there are coding schemes that make the average probability of a bit error as small as you like (however, smaller error rates mean more complicated codes). Unfortunately Shannon's proof was nonconstructive. He essentially showed that if you transmit the data in large blocks and encode those blocks randomly, you get a probability of a bit error that goes to zero as the block size increases. These codes do not require ever-increasing redundancy to achieve this.
Unfortunately decoding a random encoding applied to a huge block of data is not computationally practical. A major thread in information and coding theory has been trying to find tractable codes that behave somewhat like Shannon's codes. It's a hard problem, but your modem works better because of it.
Personally I like this result because it was so counter-intuitive: who else would have thought it might be possible, even theoretically, to construct codes that make the error rate approach zero without introducing ever more redundancy into the code?
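To make the capacity idea concrete, here is a small Haskell calculation (my own illustration, not anything from the paper) for the simplest case, the binary symmetric channel: with crossover probability p the capacity is 1 - H(p) bits per use, where H is the binary entropy function, and below that rate the theorem says arbitrarily low error rates are achievable.

    -- Binary entropy H(p) in bits, with the convention H(0) = H(1) = 0.
    h :: Double -> Double
    h p | p <= 0 || p >= 1 = 0
        | otherwise        = -p * logBase 2 p - q * logBase 2 q
      where q = 1 - p

    -- Capacity of a binary symmetric channel with crossover probability p.
    capacity :: Double -> Double
    capacity p = 1 - h p

    main :: IO ()
    main = mapM_ (\p -> putStrLn (show p ++ " -> " ++ show (capacity p)))
                 [0.0, 0.01, 0.1, 0.5]   -- roughly 1, 0.92, 0.53, 0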
Shannon was a giant in engineering, computer science and mathematics.
Shannon and juggling (Score:2)
There's film of some of his juggling simulator machines -- very clever. Some of my juggler/computer jock friends were astonished that he was still alive and kicking back then, having only seen his seminal work from relatively early in computer history.
Re:Hmm (Score:1)
People should be judged on their positive accomplishments alone.
Re:Changing an "art" into a science (Score:2)
This will probably upset people, but it is more fact than opinion: LISP was and is still a hack. That is why ML and Haskell were made. LISP is not the end-all-be-all functional programming language. It has flaws.
While I see the historical value of LISP, I do believe that it has become obsolete and convoluted over the years. Any sophisticated programmer will see this within a week of using one of the latest and greatest FP languages, such as Haskell or Clean.
Re:You cannot "prove" software (Score:1)
Ok, moderators, if you don't understand something, DO NOT MOD THE POST!!!
We should be grateful he lived when he did... (Score:2)
These activities bear out Shannon's claim that he was more motivated by curiosity than usefulness. In his words "I just wondered how things were put together."
Nowadays his curiosity could easily land him in trouble. Figuring out how things are put together seems to be a big no-no in the US today. Gee, you might discover all sorts of things big companies don't want you to know - like how crappy their encryption is, how stupid their blocking software, or how lame their security.
You can get a patent for a Perpetual Motion Machine (Score:1)
The rule is that they will not issue a patent for a perpetual motion machine without a working model.
So everyone get out their hammer and saw and start building a machine that runs forever. You need more than just an "idea"... You actually have to be able to produce one and show it to them.
They shouldn't grant a patent that guarantees compression by at least 1 bit without a working program, which should be run on itself once for each bit it has, so that they can prove to us that every piece of data in the world can be reduced to 1 very random bit.
Re:A Sinister Legacy (Score:1)
[further rantings munched]
(I know this is a troll, but denigrating the memory of a well-respected scientist is cowardly and inexcusable.)
Your post is unadulterated nonsense. Americans have had a paranoid culture all the way back to before the Founding. Like it or not, a lot of our core national makeup stems from Puritan attitudes. One of those attitudes was paranoia (Salem witch trials, anyone?).
Your assertion that people were open and honest before the 20th century is ludicrous on the face of it. History is chock full of betrayal, double-dealing, and all the other fun stuff that wackos and trolls such as yourself claim sets us apart from the march of ages. Besides, it's not as though Dr. Shannon single-handedly invented cryptography. Or have you never heard of rot13? It's the canonical example of what's known as a Caesar cipher (yes, that Caesar - as in Julius Caesar of ancient Rome). People have been trying to hide information for centuries, and as technology and knowledge have expanded, so have methods for information hiding and information discovery.
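For the curious, rot13 really is that simple -- the whole cipher fits in a few lines of Haskell (a throwaway illustration, obviously not a way to actually hide anything):

    import Data.Char (chr, ord, isAsciiUpper, isAsciiLower)

    -- Shift every letter 13 places around the alphabet; applying the
    -- function twice returns the original text, since 13 + 13 = 26.
    rot13 :: String -> String
    rot13 = map shift
      where
        shift c
          | isAsciiUpper c = rotate 'A' c
          | isAsciiLower c = rotate 'a' c
          | otherwise      = c
        rotate base c = chr (ord base + (ord c - ord base + 13) `mod` 26)

    main :: IO ()
    main = putStrLn (rot13 "Claude Shannon")   -- prints "Pynhqr Funaaba"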
You are so absolutely wrong...! (Score:2)
The work of Turing and Church proved that there does not exist a general procedure to determine if an arbitrary piece of software terminates (and by extension, that many other properties are similarly undecidable). Did you ever study computer science?
They did NOT show that one cannot "prove" software. In addition, the notion of "proving" software is nonsense unless you say what you are proving about the software. Can I prove that every well-formed C program has a main? Of course, it's in the definition. Can I prove that an arbitrary well-typed Standard ML program has a well defined evaluation sequence (that it is "safe")? Yes. Can I prove that an arbitrary program in the simply typed lambda calculus always terminates? Yes. There are many extremely useful properties of programs that actually can be proved mechanically.
Furthermore, just because a procedure can't do it in general doesn't mean that a human can't prove specific programs, perhaps with the help of a computer. Look at C.A.R. Hoare's logic for proving iterative program correctness, for instance.
There's more to information theory (Score:2)
Re:So what? (Score:1)
--
Marc A. Lepage (aka SEGV)
No kidding... (Score:1)
Of course, I get the impression that at that point, Botvinnik used tricky smoke & mirrors tactics against Shannon, but I still imagine that Shannon was honest enough that he wasn't embellishing his advantage so much.
A sadly redundant troll (Score:1)
Re:What is so sad about his death??? (Score:1)
Shannon worked on crypto during WWII from the American side. He spent time with Alan Turing exchanging ideas. Just from that he made a significant contribution to the world.
He was a great scientist and an interesting man; let people be sad about the loss.
*sigh* no. (Score:2)
--
Re:What is so sad about his death??? (Score:2)
>and holding several figures in the computer world in very high regard,
>I have to say I will feel sad for the death of people who have advanced
>humanity in a more social way, perhaps in charity or peace work
So you're saying his death shouldn't be a sad event because "he's just a computer figure". Maybe you're just jealous that your own death won't even be 1% as sad as his.
No. The trickle-down effects aren't BS. Were it not for his theories, the Nazis might have won WWII. Figure that.
Re:What is so sad about his death??? (Score:1)
Wow... Talking about far-fetched...
Doesn't REALLY have to work to be patented (Score:2)
If you want to pay a bunch of money and effort to get a 17-year exclusive on building something new that doesn't work, fine with them.
So a patent is not an endorsement by the US government of your scientific or engineering claims.
An exception is perpetual motion machines. There were SO many applications wasting SO much of their time that they still require you to deliver a working model. THAT put a big spike in the fad, though people still try.
One fellow had trouble patenting a very efficient still. It was very tall to use gravity to create a decent vacuum at the top (like the original water version of a mercury barometer), and acted as a counter-current heat exchanger, scavenging heat from the condensate and brine to preheat the raw water feed. You still had to add the heat of solution and some more to deal with losses, but that's a LOT less than boiling off the whole output stream at local atmospheric pressure.
They initially refused his application, thinking it was a perpetual motion machine, and he had to argue long and hard to get the patent.
Re:You are so absolutely wrong...! (Score:2)
Kee-rect, and "yes."
They did NOT show that one cannot "prove" software.
Actually, what they showed is that software exists that cannot be proven. The fact that software exists which can be proven is irrelevant if the software you need isn't in that set.
In addition, the notion of "proving" software is nonsense unless you say what you are proving about the software.
Well, the original post that started this mentioned an operating system, so I'd say the notion of "proof" being bandied about here involves not crashing and behaving predictably. Ultimately IMHO it is an exact analog of Turing's stopping problem, which is why I brought it up. Many algorithms cannot be predicted in any sense by anything simpler than themselves. The only way to "prove" them is to run them and observe their behavior. Of course this proof is inductive rather than deductive; but Turing's side observation was that this unprovability is a characteristic of living things, and the fact that a deterministic system could exhibit such behavior meant it might be possible to emulate life.
As hardware becomes faster and cheaper and software more complex and abstract, it will be more unprovable. There is a reason computers crash more nowadays than they did Back When.
There may be
many extremely useful properties of programs that actually can be proved mechanically
but is this class growing? I'd say not. Back in the 80's the DOD ran a project to build a fully proven multitasking CPU core; even with all their resources they failed. The performance issue was too much even for them to ignore.
Today when we talk about "useful" software we are often talking about threaded, interrupt-driven stuff riding on top of a very highly unproven operating system. Do you think that one day you will be able to buy a "fully proven" equivalent of your PC of choice, and find "fully proven" software to run on it? Of course not. The economics and interest are not there, because the reality is that proving software is expensive and futile.
In Turing's day, it was commonly thought that any complex system could be emulated, modeled, and predicted. Turing, Church, Gödel, and Mandelbrot have all contributed to putting that idea 6 feet under. People who do real work with real tools in the real world now know that many real applications simply cannot be treated this way. People who think otherwise are living in fantasyland.
Re:84 or 85? (Score:1)
Great post but two quibbles: (Score:3)
Actually I claim that this isn't true. It is possible to prove that two very different expressions of a problem are EQUIVALENT, but that's a very different thing from proving either of them is the CORRECT solution for a given problem.
This is not as bad as it sounds: The most effective way known to notice bugs is to express the problem in two very different ways and compare them. Primary example: The spec versus the code. (That's why the spec is IMPORTANT. "Self-documenting code" is a myth. When you go to verify it and the only spec is the code itself, the only thing you can check is that the compiler works right.)
Strongly OO languages like C++ which dereference everything eat processor cycles. It may not be apparent why this is a problem until you try to implement a 6-level interrupt system on an 80186 (as one company I know of did)
For most OO languages this is true. But for C++ it is not. You can get away without the dereferencing if you need to. (You COULD even fall back into a style that amounts to compiling essentially ANSI C via a C++ compiler.)
But to get away without the dereferences you also have to abandon the things that need them - like polymorphism. This doesn't mean you have to abandon all the features. For instance, you can define a class hierarchy so you don't get polymorphic support unless you need it, and specify that you don't use it on particular function calls - at the risk of violating encapsulation checking. (Overloaded operators are very handy and often don't need to dereference anything.) But you're starting to get into the "why bother" area.
The big problem is that to do this efficiently you need somebody who:
A) Knows how to program well in C, including how to get efficient compiler output in the tight spots.
B) Knows how to program well in OOP style.
C) Knows HOW the C++ compiler achieves its effects, so he knows how to get it to do things efficiently.
And now you're talking about a VERY rare and expensive person. A is years of training. Adding B to someone who has A is typically 6 months minimum of learning curve just to start being productive again. (Starting with B makes it hard to get really good at A, while starting with A puts up mental roadblocks when you're relearning how to do things B style.) Most procedural programmers who learn OOP stop there.
But adding C is much like repeating the learning techniques of the pioneering days of computer education. Everybody learned assembly and how pre-optimizer compilers worked, because they were expected to WRITE their own compiler some day. In the process they learned how to write source code that would trick a non-optimizing compiler into putting out good assembly. Technique C consists of doing much the same thing to a C++ compiler to get the equivalent of tight C out of it.
So if you happen to have a small team of people who already HAVE all this knowledge, they might make a C++ interrupt handler that is as efficient, or nearly so, as one coded in C or even assembler. But why not just do it in C, assembler, or some hybrid in the first place? Then people with only skill set A can understand it and work on it. B-)
Re:You cannot "prove" software (Score:3)
Re:You not so smart... (Score:2)
Excellent. And what can we conclude about the length of a program that is capable of producing a sequence of a given length n?
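Spelling out the counting argument behind that question: there are 2^n bit strings of length n but only 2^n - 1 strings strictly shorter than n, so no lossless scheme can shorten every input. A trivial Haskell illustration of the bookkeeping (mine):

    -- Number of possible outputs strictly shorter than n bits:
    -- 1 + 2 + 4 + ... + 2^(n-1) = 2^n - 1.
    shorter :: Integer -> Integer
    shorter n = 2 ^ n - 1

    main :: IO ()
    main = mapM_ report [1, 8, 16]
      where
        report n = putStrLn (show n ++ "-bit inputs: " ++ show (2 ^ n :: Integer)
                             ++ ", possible shorter outputs: " ++ show (shorter n))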
many thanks to a great scientist (Score:1)
What Shannon did for science helped convince me to stick with a lot of my theoretical work. He was a truly inspiring and committed person and his death greatly saddens me.
Thank you Claude Shannon for all you did.
Re:Changing an "art" into a science (Score:1)
I saw this, and was reminded of a Knuth quote (I believe from TAOCP):
Beware of bugs in the above code; I have only proved it correct, not tried it.
I think that says it all.
Re:Don Bradman (Score:1)
--
Re:LISP is an acronym... (Score:1)
If you follow the link to Shannon's 'A Mathematical Theory of Communication' posted previously in this article, you will see that on the front page Shannon writes:
Dunno who the hell he was tho'....flip.
Re:Overstatement of the Year award (Score:1)
More on this subject here [visual-memory.co.uk].
A true celebrity (Score:1)
He said that Shannon had shown up at a conference just a few years ago. Word quickly spread around the conference floor that Shannon was there. Finally someone was brave enough to break the ice and talk to him, and instantly a whole crowd was surrounding Dr. Shannon, just to meet him or say hi. "Like a rock star," my professor said.
Claude Shannon may well be the most influential mathematician no one has heard of.
theorems (Score:1)
Re:Claude (Score:1)
R.I.P. (Score:1)
Since I work in telecom, Shannon's Law is close to my work, and I'm saddened by his death.
-RSH
Cooley & Tukey (Score:1)
J. W. Cooley and J. W. Tukey and the Fast Fourier Transform - ring a bell yet?
Re:So what? (Score:1)
Correction: He was 84 (Score:1)
Re:Geek Traditions (Score:1)
unicycle juggling tradition migrated westward
where cal berkeley's elwyn r. berlekamp
was doing four hatchets riding a unicycle
in cory hall.
berlekamp co-authored with shannon
as well as with ron graham of bell labs,
an inveterate 7-ball artiste also with
low erdos/shannon number.
to connect these triple-threat propeller-heads
back to open source, it is of minor note that
e.r.b. was unix co-inventor ken thompson's
master's project adviser.
further background at http://www.math.berkeley.edu/~berlek/
Re:You are so absolutely wrong...! (Score:1)
I worked on part of that project in the early 80's (the Euclid provable programming language). One of the first things we discovered was that proving algorithmic code was fairly straightforward within a function, because the semantics of while, if-then-else, etc. are fairly well understood. Value semantics was harder (proving that assignments between variables preserved invariants). But it was really difficult to prove call/return semantics, especially if there was call-by-reference. One had to assume that the subroutine preserved value semantics and that any routines it called did the same, down to machine-level instructions. Doing that for every API element meant that the complexity of proofs was overwhelming. And this was in a programming language designed to aid in program proofs. Imagine doing it in C++.
But not having call-by-reference was the killer. It is rather disastrous for an OS implementation to never have pointers but to have to copy the whole data structure to every subroutine :-).
Re:Changing an "art" into a science (Score:1)
Haskell is a purely functional programming language, while LISP is not. Haskell code can be far more terse than LISP (because of pattern matching, algebraic data types, and lambda notation), so your Pascal-to-C analogy fails. Haskell is, without a doubt, superior to LISP in the area of functional programming.
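A small example of the pattern matching and algebraic data types the parent means (my own toy, nothing deep): the whole evaluator is three equations, where the equivalent LISP would need a cond with explicit accessors on each branch.

    -- A tiny expression type and its evaluator, all by pattern matching.
    data Expr = Num Int | Add Expr Expr | Mul Expr Expr

    eval :: Expr -> Int
    eval (Num n)   = n
    eval (Add a b) = eval a + eval b
    eval (Mul a b) = eval a * eval b

    main :: IO ()
    main = print (eval (Add (Num 1) (Mul (Num 2) (Num 3))))   -- prints 7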
Re:hall of famer? (Score:1)