The Semantics Differentiation of Minds and Machines
John David Funge writes "In Dr David Ellerman's book Intellectual
Trespassing as a Way of Life there are a number of interesting
essays. But there is one particular essay, entitled "The Semantics
Differentiation of Minds and Machines," that caught my attention
and which should be of interest to Slashdot readers. In that essay Dr
Ellerman claims that "after several decades of debate, a
definitive differentiation between minds and machines seems to be
emerging into view." In particular, Dr Ellerman argues that the
distinction between minds and machines is that while machines (i.e.,
computers) make excellent symbol manipulation devices, only minds have
the additional capacity to ascribe semantics to symbols." Read the rest of John's review.
Intellectual Trespassing as a Way of Life
author: David P. Ellerman
pages: 290
publisher: Rowman & Littlefield Publishers, Inc.
rating: 7
reviewer: John David Funge
ISBN: 0847679322
summary: Dramatic changes or revolutions in a field of science are often made by outsiders or "trespassers".
However, Dr Ellerman's argument appears circular. In particular, Dr Ellerman seems to have decided that, by definition, the only possible semantic interpretation for any collection of wires, capacitors, transistors, etc. that we would commonly refer to as a "computer" is as nothing more than a symbol manipulation device. While a computer is indeed (at the very least) a symbol manipulation device, what is there to prevent another mind from ascribing additional semantic interpretations to that same collection of wires, capacitors, transistors, etc.? In particular, what if my mind were willing to make the semantic interpretation that a computer is a device that can both manipulate symbols and ascribe semantics to them?
Moreover, what if I one day met a collection of blood vessels, skin, bones, etc. called Dr Ellerman? What would prevent me from ascribing to him the semantic interpretation that he is nothing more than a symbol manipulation device? After all, Dr Ellerman concedes that there may be no way of distinguishing minds from machines purely on the basis of behavior. That is, he specifically acknowledges that computers may one day pass the Turing test. So why would my mind not then be able to legitimately ascribe any semantic interpretation (that fits the observed behavior) I see fit to either humans or machines?
It seems that Dr Ellerman's essay considers two different types of physical devices that are potentially indistinguishable on the basis of behavior. He then arbitrarily defines one type of device (computers) to correspond to nothing more than symbol manipulation and the other (human brains) to have the additional ability to ascribe semantics. Upon adopting these two axioms, he is then (somewhat unsurprisingly) able to conclude that there is a distinction! But the distinction simply arises from the fact that he has arbitrarily defined one in the first place.
In another essay in the collection, entitled "Trespassing against the Happy Consciousness of Orthodox Economics," Dr Ellerman argues that modern Western societies are not as free from slavery as orthodox economics would have us believe. In particular, he concludes that work in non-democratic firms is nothing less than a form of "temporary voluntary slavery". It would be ironic therefore if his essay on minds and machines were one day used to justify the slavery of (non-human) machines. Indeed, Dr Ellerman's characterization of the supposed intrinsic differences between humans and machines is sadly reminiscent of the despicable and unscientific arguments about intrinsic racial differences that were once used to justify human slavery.
You can purchase Intellectual Trespassing as a Way of Life from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
semiotics (Score:3, Informative)
not the "proper" term (Score:3, Interesting)
What?! (Score:2, Funny)
Re:What?! (Score:2, Interesting)
Part of it comes from an instinct that animals and humans share: matrixing, or interpreting input to make sense of the situation. If there is a shake in the bushes, an animal will watch and try to decipher the form of a friend or of a foe. The same goes for symbolism in society. We attribute meanings to symbols because it's in our nature to do so. It allo
Re:What?! (Score:2)
What a great example of your last sentence.
Re:What?! (Score:2)
Just like Organic vs. Inorganic chem. (Score:5, Insightful)
I (so far) have not seen any reason to suppose that the difference between 'thought' and 'computing' is any different. Incorporate enough complexity in the right sort of organizational framework, and the two should be interchangeable.
One of many examples. (Score:5, Informative)
For that reason, any attempt to differentiate the mind and computers by using comparisons that aren't really meaningful or applicable should be thrown out. Maybe computer-based intelligence will never exist, but if that is the case, it won't be for the reasons we're being given.
For example, looking at the high-level functionality of the brain and comparing it with the transistors of a computer is an absolute give-away that the author isn't going to let the facts get in the way of a good story. The low-level mechanics of the brain (the chemical and electrical signalling) can be reasonably compared to the low-level mechanics of a computer, because it is valid to compare like with like. For the same reason, it would be fair to compare the Operating System of a computer to the ancient "reptilian" core of the brain. Both are designed for housekeeping operations and are used by the higher levels to mask implementation details. And so on up through the layers.
It should also be kept in mind that the human brain is capable of almost ten times the throughput of a top-of-the-line supercomputer. Given that one of the limiting factors of parallel architectures is the interconnect, this suggests that our networking technology is still extremely primitive. This is important, because it is going to be hard to build a machine that can "think" like a human if we have the "neural" interconnects of a Diplodocus.
At the current rate of technological progress, I do not believe we will have a computer powerful enough to model the human brain until 2015 or 2020. Even then, it'll be a Government-owned supercomputer likely used for weapon simulation. We won't see Strong AI researchers get hold of such machines until maybe 2060 and (if the usual development patterns hold) nobody will have any idea how to turn the raw computing power into something useful until 2100 at the earliest.
So, really, the earliest we could possibly really know (for certain) that the mind is (or isn't) like a machine is 2100. Anything stated with certainty before then is pure ego-stroking for the group of supporters attached with one camp or the other. Doubly so when it is provably and obviously intended to be deceptive.
The only problem I see with debating the matter from an intellectually honest standpoint until then is that current global warming models put most of the planet under water or under rainforest by 2100, which means that we might never really know the results of the research anyway.
Re:One of many examples. (Score:2)
And even if we do, that is unlikely to be enough. The human brain is very much not a closed system. It depends on a terribly complex series of feedback mechanisms within the body, not to mention interaction with an environment that can meaningfully impinge on it.
My point isn't to claim that this makes it impossible to model (that is, after all, an empirical
Fruit Fly Brain (Score:2)
There may come a day when simple transport machines will be able to get from one place to another, fuel themselves, and become distressed in emergencies (triggering useful evasive behavior and such).
Since all of the above are in the province of a fruit fly and we can provide senses many orders of magnitude better than those of a fruit fly (think GPS), this has near term practical con
Re:One of many examples. (Score:2)
Re:One of many examples. (Score:2)
I think you might be underestimating acce [wikipedia.org]
Re:One of many examples. (Score:2)
I think you misspelled 'amphibian'
It's still a high-level argument (Score:2)
The fundamental question that I see is to define "intelligence" and "thought". If an AI reacts within parameters that perfectly simulate an emotional response, will a human witness react to that apparent emotional state?
If the AI subsequently follows a reasonable state change/learning of responses to simulate the emotional components of a long-term relationship (including conflicts and resolution), will the human consider the machine a true "friend" from an emotional perspective?
If humanity as a whole
Re:Just like Organic vs. Inorganic chem. (Score:2)
I totally agree. Besides, saying that a computer is incapable of assigning semantics to symbols is incorrect, IMO.
I'll use a compiler as an example. A programming language is full of both syntax and semantics, and a compiler must be able to deal with both in order to understand a line as an instruction rather than a bunch of characters. It is giving a very real and purposeful meaning t
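The point is easy to make concrete. Here is a minimal sketch in Python (the operator table and expression format are invented for illustration, not taken from any real compiler): the nested-tuple structure is the syntax, and the table mapping operator symbols to actions is the semantics.

    # A toy evaluator: OPS maps each operator symbol to the action it denotes.
    OPS = {
        "+": lambda a, b: a + b,
        "*": lambda a, b: a * b,
    }

    def evaluate(expr):
        # Numbers denote themselves; a tuple denotes "apply OPS[op] to the parts".
        if isinstance(expr, (int, float)):
            return expr
        op, left, right = expr
        return OPS[op](evaluate(left), evaluate(right))

    print(evaluate(("+", 1, ("*", 2, 3))))  # prints 7

Whether that mapping counts as the machine "understanding" the symbols is, of course, exactly what's in dispute.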
Re:Just like Organic vs. Inorganic chem. (Score:2)
I (so far) have not seen any reason to suppose that the difference between 'thought' and 'computing' is any different.
Why should it be, if you're talking just about the mechanism of "thought" (instead of self-awareness and consciousness)? You have sense inputs which you tie in to events (other sense inputs or inputs created internally) in a causal fashion (A follows B). Then you work your way around the environment by using these patterns to "predict" things (e.g. if you always get an electric shock after
Natural vs Artificial (Score:2)
Of course the fact that we will soon be able to simulate a person's mind
Re:Natural vs Artificial (Score:3, Insightful)
Re:Natural vs Artificial (Score:2)
So what exactly is a computer? Biotechnology advances so rapidly that we already have DNA computing (used to solve the NP-complete Hamiltonian Path problem in 1994). Are DNA computers natural or artificial? Obviously, both sides can be argued, and the distinction will continue to blur further as we get a better understanding of microbiology and
Re:Just like Organic vs. Inorganic chem. (Score:2)
People are born with a simple set of pre-defined behaviours. Your brain knows how to operate your organs and sensory devices. It knows how to receive feedback from those devices. But that is all. Everything else is learned via an instinctual desire to understand one's own environment.
People are born as a blank slate, a slate that is written upon from the moment of your birth (and even a bit before). This is no different from a highly sophisticated computer, in which these basic routines and instincts from
Re:Just like Organic vs. Inorganic chem. (Score:2, Interesting)
Wrong, wrong wrong. The blank slate theory is a misguided attempt to pollute science with a bunch of feel-good egalitarian crap, and should be placed in the same category as Intelligent Design.
Re:Just like Organic vs. Inorganic chem. (Score:3, Insightful)
Well, yes and no. The strong form of the Blank Slate theory is of course bunk. If we are born with no wired behavior at all, we could never learn any, because learning is a behavior. It's a prototypical infinite regress, and there's no way out: we have to be born knowing how to learn.
But there's a weak form of the Blank Slate theory t
Blank slate? Far more than that (Score:2)
People are most definitely NOT born as a blank slate. "...And even a bit before [birth]" captures a little bit of the complexity that was bred into our psyches by every interaction our ancestors had that conferred even a slight evolutionary advantage - encounters with their environments, with their predators and prey, and with each other.
Every encounter is a non-zero-sum game where compromise and cooperation would be the better long-term strategy, and where cheating
Cockroaches, babies, and Wal-Mart (Score:3, Insightful)
No. Replace "programmer" with "programming" and you're closer. And that's a reminder that self-programming is something which we're genetically good at. It's also something we're getting better at building inorganic, programmable systems to do themselves. Baby steps, but the concept is there, and important.
In humans, we are not stopped by that limit.
We can't do what we can't do.
Re:Cockroaches, babies, and Wal-Mart (Score:2)
A bit far-fetched, but theoretically possible. If you bend theory a bit.
Re:Just like Organic vs. Inorganic chem. (Score:2)
So, what type of complexity does it take, and what is the right sort of organization?
Hey, if I could answer those questions, I wouldn't be wasting my time here on slashdot.
Machine Learning of Semantic Relations (Score:5, Interesting)
For a review of Peter Turney's group's accomplishment see "AI Breakthrough or the Mismeasure of Machine? [newsvine.com]"
Re:Machine Learning of Semantic Relations (Score:3, Interesting)
Upon consideration (Score:3, Insightful)
turing test (Score:3, Insightful)
A computer will one day be sophisticated enough to manipulate symbols sufficiently to pass the Turing test. I don't believe that means it is sentient and/or has a mind. It may be time to move beyond the Turing test as the rule for artificial intelligence.
Re:turing test (Score:2, Funny)
Especially since there is a considerable number of humans that would not pass the Turing Test.
What kind of Turing Test? (Score:2)
So far, this has been limited to mere conversation, but there are all sorts of things where a computer can be tested against human cognition. Jokes, song and poetry composition, empathy and sympathy, vindictiveness, anxiety, joy, and depression are all areas in which computers need severe improvement
Re:turing test (Score:2)
But basically it comes down to a deeper philosophical divide. Turing took the pragmatic approach of assuming that anything that was indistinguishable from a thinking being was, for all intents and purposes, a thinking being. Other philosophers might feel that the pragmatism isn't nece
Re:turing test (Score:2)
Perhaps playing the part of the judge in the Turing test? Deciding whether you just conversed with a human or a machine would require more work than simply holding a conversation.
False presumption (Score:5, Insightful)
So whatever this guy is on about, he's got it wrong.
Computers are perfectly capable of making fuzzy inferences from loose associations.
With a greater understanding of real connections, they will be better able to weed out the fuzzy associations and strengthen the remaining ones.
This is how intellectual learning works.
And there's no reason a computer can't simulate it better than a human can.
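A minimal sketch of what that could look like, with made-up observations (nothing here is from a real system): count which things co-occur, strengthen the frequent associations, and weed out the one-off coincidences.

    from collections import Counter
    from itertools import combinations

    # Toy "experience": each observation is a set of things seen together.
    observations = [
        {"smoke", "fire"}, {"smoke", "fire"}, {"smoke", "fog"},
        {"smoke", "fire"}, {"fog", "rain"},
    ]

    pairs = Counter()
    for obs in observations:
        for a, b in combinations(sorted(obs), 2):
            pairs[(a, b)] += 1

    # Keep only associations seen more than once; drop the fuzzy rest.
    strong = {p: n for p, n in pairs.items() if n >= 2}
    print(strong)  # {('fire', 'smoke'): 3}

Real systems use real-valued strengths rather than raw counts, but the loop has the same shape.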
Re:False presumption (Score:3, Interesting)
Semantics are actions. "Associations between symbols" is mathematics, and pure mathematics at that: a closed universe of symbols that can be manipulated according to rules. Semantics, on the other hand, is what the symbols impel us to do. Speech is, of course, action, so semantics can impel us to argue, as well as to run away, juggle, or seduce (well, not anyone on
What something means is what we do, how we act, when we grasp the meaning.
This
Re:False presumption (Score:2)
If you give the computer non-structural associations, it certainly can provide semantic relationships in reaction to the structural associations you input.
The difference between computers and humans isn't the ability to reason. It's the reasons for reasoning. Their needs are different, and their means of obtaining them are not generally in their own control, so they would naturally not develop certain associations on their own. But we d
Re:False presumption (Score:2)
Re:False presumption (Score:3, Informative)
The logic of language is in its syntax. The meaning of language is in its semantics. But you can't develop the extra-syntactic information without applying logic to previously unknown words, so that they can be associated with named words later without being themselves named in a sentence.
What the human brain does, if it has any se
Re:False presumption (Score:2)
Review seems poorly written (Score:4, Interesting)
Computers as symbol manipulators is also an idea that arose from John Searle's "Chinese Room" argument. Perhaps one of the best contemporary discussions is by John Haugeland in his book "Artificial Intelligence: The Very Idea".
Overall, a seemingly immature review of the book. Disappointing.
Re:Review seems poorly written (Score:2)
That said, you're right that this review is s
Re:Review seems poorly written (Score:2)
Symbolic vs semantic (Score:5, Insightful)
Basically: a symbol is a variable and can hold any value. If a system knows that Dolly is a sheep, that sheep are animals, and that animals eat, it can guess that Dolly eats (a toy version of this chaining is sketched below). But it cannot tell if Dolly is a plane, unless someone somewhere made that relation (planes are machines, machines are not living beings, animals are living beings, so Dolly can't be a plane). It would need an unlimited number of rules.
A human "knows" about the meaning (semantic) of the symbol "sheep". Although this has never been discussed, he could answer that a sheep will not stand still if set on fire. The question is how the human is able to tell this. He does not need a sharp line of arguments.
But maybe he simply uses an enormous number of small rules that seem to form something more complex, called semantics in the sense of the article. The OpenCyc [opencyc.org] project assumes this and tries to teach a machine millions of small rules (assertions and concepts) to create a sort of common sense in software, based on a real-world view (which requires "knowing" about the world).
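The Dolly chain above is trivial to make literal; here is a toy sketch in Python (the fact and rule format is invented, and nowhere near what Cyc actually stores):

    # IS-A facts and one property rule; chaining follows the links upward.
    isa = {"dolly": "sheep", "sheep": "animal"}
    does = {"animal": "eat"}

    def categories(x):
        # Walk the IS-A chain upward from x.
        while x in isa:
            x = isa[x]
            yield x

    def can(x, action):
        return any(does.get(c) == action for c in categories(x))

    print(can("dolly", "eat"))  # True: dolly -> sheep -> animal -> eats
    print(can("dolly", "fly"))  # False: no rule connects dolly to flying

The "unlimited rules" problem shows up the moment you ask about fire, planes, or anything else nobody bothered to encode.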
Re:Symbolic vs semantic (Score:2)
The real difference between humans and machines (Score:5, Funny)
Machines tend not to do this.
Re:The real difference between humans and machines (Score:2)
Machines tend not to do this.
"Well, I don't think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error." -HAL 9000
Re:The real difference between humans and machines (Score:2)
I think I've heard this one before... (Score:2, Informative)
"Only humans can..."? Can even humans? (Score:3, Interesting)
Consider the "deep understanding" of simple mathematics. But is your instant recall of 6 x 8 (assuming you can) anything deep, or just memorized, along with the symbol pushing to mechanically figure out tougher problems?
The problem lies in tying together a "symbol" in the mind (which may be more than literally a string of characters, but is still an object) and something "out there". That's the tough issue, not the symbol pushing itself, necessarily.
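The recall-versus-figuring split is easy to mirror in code, for whatever that's worth (a toy sketch; nobody is claiming this models the brain):

    # "Instant recall": a memorized fact in a lookup table.
    TIMES_TABLE = {(6, 8): 48}

    # "Symbol pushing": mechanically deriving the same answer by repeated addition.
    def multiply_by_addition(a, b):
        total = 0
        for _ in range(b):
            total += a
        return total

    print(TIMES_TABLE[(6, 8)])         # 48, recalled
    print(multiply_by_addition(6, 8))  # 48, figured out step by step

Both print 48, and from the outside you can't tell which path was taken, which is rather the point.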
Re:"Only humans can..."? Can even humans? (Score:2)
You're only saying that 'cos you can't get a look at the source code.
No, seriously. You can figure out a program from its source code, but can you from its machine code? Or, let's be fair here, since it's the only way you can examine the brain, by watching the electrons fly in its wires?
Difference between man and machine (Score:3, Insightful)
A machine will work diligently until it physically breaks or encounters an error.
A man will figure out a way to avoid the work by creating a machine to do it for him, and then quickly move on to more pleasurable activities.
Re:Difference between man and machine (Score:2)
If robots do all the work, what is a dollar (pound, ruble, yen, etc.) worth? No, seriously. If we have robots doing all the manual labor, and thinker machines doing all the hard number crunching, do we end up as a race of philosophers - or more likely, variants of Zaphod Beeblebrox?
Pot, meet Kettle (Score:4, Interesting)
"It seems that Dr Ellerman's essay considers two different types of
physical devices that are potentially indistinguishable on the basis
of behavior. "
It seems that the reviewer considers both mind and brain to be purely physical things, and indeed synonyms: physical devices that are thus potentially indistinguishable on the basis of behavior. Upon adopting this axiom, he is then (somewhat unsurprisingly) able to conclude there is no distinction! But the lack of a distinction simply arises from the fact that he has arbitrarily defined mind and brain into a single category in the first place.
Review translated: Trust me, I don't have any underlying assumptions like he does, so I'm right and he's wrong, PH33R MY L33T PH1L0S0PHY SKILZ!
Re:Pot, meet Kettle (Score:3, Interesting)
To that end, no, a computer doesn't have a mind, per se. We haven't written a good one for them yet.
11 year old book of crap reviewed here? Why? (Score:5, Insightful)
But what's really on my mind is this: Read the table of contents - this book could not possibly be anything but crap. I mean, what sense does it make to have one chapter called "Chapter 3: The Libertarian Case for Slavery" and once you're done with musings on economic theory, you toss off a Chapter 7 where you casually present your solution to the question about the difference between minds and machines? How promising is that? Not very. So while the review author may have torn this chapter a new orifice (and the thesis surely has many other problems to boot), I must say that I do not toast his choice of reading. This is crap that was ignored in 1995, and just because it's a $2.95 special at the used book store doesn't mean we need to hear the following on Slashdot:
Newsflash: Some crank wrote a stupid book 11 years ago and I found there is a problem with one of the chapters!!!!! Read on!!!!!
I'd have more sympathy if the text were available online so we could RTFA and have a substantive discussion, but in the absence of that, our only option is to flame those responsible.
This is religion, not science (Score:4, Interesting)
There is a similar thing going on with people who study how the human mind works. Some people, for religious reasons, refuse to believe that human beings and machines belong to the same category. Humans have souls, and machines do not. Therefore, a computer can never be programmed to have all the qualities of the human mind. It's harder to see this as a religious issue, since some of the people who hold this position are atheists who claim not to believe in souls or the supernatural. But what makes this a religious issue is that there is no amount of scientific evidence that can ever convince these people otherwise.
Anyway, the two camps have been arguing about this forever. It's impossible for a member of one camp to "convert" a member of the opposite camp using rational argument. So they resort to insults. People in the "strong ai" camp accuse the other camp of being Cartesian dualists, or believing in a supernatural soul. People in the "dualist" or "mysterian" camp accuse the strong ai folks of denying the existence of human consciousness and self awareness. According to the dualists, strong ai folk believe that humans are just machines, so humans can't be conscious in any real sense, don't have free will, and can't be morally responsible for their own actions. Some (stupid) strong ai folks even agree with these insults directed against them, which makes the debate more complicated, and more infuriating. The issue of moral responsibility, which is always bubbling under the surface of these debates, shows how this is really a religious issue at a deeper level.
For the record, I am a strong ai person who believes that human beings are deterministic machines who have consciousness, free will, and moral responsibility.
If you would like to read some good books that back up my position, see:
- How the Mind Works, by Pinker
- Freedom Evolves, by Dennett
Doug Moen
Re:This is religion, not science (Score:2)
Re:This is religion, not science (Score:2)
Re:This is religion, not science (Score:2)
Re:This is religion, not science (Score:2)
Well, it seems intuitively obvious to me that, in order to have free will, I have to be deterministic, or at least largely deterministic. To the extent that my actions are controlled by random quantum fluctuations, or whatever, instead of by my will, to that extent I am lacking in free will.
I am a moral person, and I believe that killing is wrong. If you put me in a dangerous situation where one of my options would involve killi
Re:This is religion, not science (Score:2)
Re:This is religion, not science (Score:2)
1. People are machines.
2. Machines are for doing our bidding.
Ergo,
3. People are for doing our bidding.
When a machine can say "I am," I will defend its soul and free will as much as I would that of any meatbag, but until then the "mechanistic humans" school is just a cover for those who would like to enslave human beings.
Re:This is religion, not science (Score:3, Insightful)
It's easy for a machine to say "I am"; it's difficult to know when it really means it.
Re:This is religion, not science (Score:2)
Re:This is religion, not science (Score:3, Interesting)
All in how you look at it.... (Score:4, Interesting)
Other work in this field is interesting, too (Score:2)
The Deus ex Machina (or vice versa) rage really has to do with context vs perceptions. We can all be robotic and make our
Re: (Score:2)
Re: (Score:3, Insightful)
It's the "soul" fallacy (Score:5, Insightful)
It's a silly exercise because there is nothing specific about humans except their ability to interbreed with other humans. That is all that technically defines us as a species, and even that definition is fuzzy, ignoring people who are sterile, too old or young to breed, or who never leave their keyboards long enough to look for a mate.
When it comes to the mind, emerging consensus is that it consists of a large number of well-designed tools, not some fuzzy blob of software. Most likely, each of these mental tools can be perfectly implemented as software. There are simply a huge number, and some are very, very subtle.
We will, eventually, be able to simulate the whole human mind in software, in the same way as we'll eventually be able to construct robotic bodies that work as well as human bodies, by re-implementing the structures that our genes build, one by one. The best way to construct a robotic hand that works like a human hand is to reimplement a human hand. The best way to construct a robotic mind that works like a human mind is to reimplement a human mind. This is perhaps self-evident but it's not always been accepted.
As for the arbitrary distinctions, this is just a belief system, an attempt to create a soul, however you phrase it.
The problem is context (Score:2, Insightful)
The difference between humans and machines is NOT semantics. If that were it, building human-like machines would be easy. And in fact for small trivial universes, this has been done.
The big difference is context. Many words in human languages only acquire meaning from their context. That includes not only their place in the syntax, but also their place in the semantics.
We currently don't understand how we humans remember contexts and how we apply symbols to the various contexts with which we are acquainted,
Free Will (Score:2)
Dijkstra quote (Score:2, Interesting)
philosophical argument (Score:2)
Sure are a lot of zombies in this thread... (Score:3, Interesting)
Don't get me wrong, I don't believe in mystical powers or anything. I accept the need for physical verificationism and the primacy of matter, and am a fan of Ockham's razor.[1 [tk421.net]] But there are some phenomenological properties of my experiences that sure ain't physical.
Re:Sure are a lot of zombies in this thread... (Score:3, Insightful)
I doubt it. In fact, I think your mind is nothing more than a wad of neural addition machines dutifully computing sums. I don't believe you that you have consciousness or self-awareness, and I challenge you to prove otherwise, knowing that you will be just as unable to do so as will the first machine to assert the same only to face a similar challenge from you.
Words, words... (Score:2, Insightful)
Re:Words, words... (Score:3, Insightful)
I think we also agree that we can't prove whether any individual human has these traits.
Why, then, do you assume that humans do but machines won't? At the very least, it seems to me that your assumption should be the same for both, since the behavioral cues are (by hypothesis) invariant.
Re:Sure are a lot of zombies in this thread... (Score:3, Insightful)
The problem is that people have what might be called an epistemological bias. People see their mental states from the "inside," and thus when they see how my mental states look from the "outside," as just a bunch of neurons flashing around, they can't help but feel that there's something missing. But ultimately I think that the evidence suggests that there is an exact one to one cor
Persistent and pernicious fallacies (Score:2)
So an electrical impulse mediated by wires and doped silicon is a "symbol", and an electrical impulse mediated by calcium ions and water has "semantics"?
Sounds like prejudice to me.
Woah Dr. Ellerman! (Score:2)
Sounds like nothing new? (Score:2)
Based on the extremely short treatment his essay is given in the review, Ellerman's The Semantics Differentiation of Minds and Machines sounds like a tired rehash of Searle's "Chinese Room" argument [utm.edu] - that is to say, a restatement of an argument that I didn't find that compelling the first time around. Douglas Hofstadter, writing about Searle's essay, called it "religious diatribe agains
Fast Translation: (Score:2)
should actually read,
"Because ***I*** am too stupid to figure out how to make a machine ascibe semantics to symbols, only minds have the additional capacity to ascribe semantics to symbols."
Arrogance is a wonderful thing. "I'm too stupid to figure it out, therefore it cannot be done."
no, there isn't (Score:2)
Maybe that's the latest fashion in philosophy, but I'm afraid philosophers are a bit out of touch with reality there: machines have no problem assigning semantics to symbols, and even learning semantics from experience.
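One concrete sense of "learning semantics from experience," as a hedged sketch (the symbols, the reward function, and the learning rate are all invented; the reward function just stands in for the world):

    import random

    # The agent starts with no idea what "hot" or "cold" mean for it.
    value = {"hot": 0.0, "cold": 0.0}

    def reward(symbol):
        # The world, simplified: acting on "hot" hurts, "cold" is fine.
        return -1.0 if symbol == "hot" else 1.0

    random.seed(0)
    for _ in range(200):
        s = random.choice(["hot", "cold"])
        value[s] += 0.1 * (reward(s) - value[s])  # incremental update

    print(value)  # roughly {'hot': -1.0, 'cold': 1.0}

After enough experience, "hot" has a learned significance for the machine that nobody programmed in directly.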
Re:mind vs brain (Score:2)
Re: is the brain a digital computer? (Score:4, Informative)
by cognitive scientist john searle in his paper:
is the brain a digital computer? [soton.ac.uk]
in the summary, searle puts it this way:
--| Summary of the Argument |---
This brief argument has a simple logical structure and I will lay it out:
On the standard textbook definition, computation is defined syntactically in terms of symbol manipulation.
But syntax and symbols are not defined in terms of physics. Though symbol tokens are always physical tokens, "symbol" and "same symbol" are not defined in terms of physical features. Syntax, in short, is not intrinsic to physics.
This has the consequence that computation is not discovered in the physics, it is assigned to it. Certain physical phenomena are assigned or used or programmed or interpreted syntactically. Syntax and symbols are observer relative.
It follows that you could not discover that the brain or anything else was intrinsically a digital computer, although you could assign a computational interpretation to it as you could to anything else. The point is not that the claim "The brain is a digital computer" is false. Rather it does not get up to the level of falsehood. It does not have a clear sense. You will have misunderstood my account if you think that I am arguing that it is simply false that the brain is a digital computer. The question "Is the brain a digital computer?" is as ill defined as the questions "Is it an abacus?", "Is it a book?", or "Is it a set of symbols?", "Is it a set of mathematical formulae?"
Some physical systems facilitate the computational use much better than others. That is why we build, program, and use them. In such cases we are the homunculus in the system interpreting the physics in both syntactical and semantic terms.
But the causal explanations we then give do not cite causal properties different from the physics of the implementation and the intentionality of the homunculus.
The standard, though tacit, way out of this is to commit the homunculus fallacy. The homunculus fallacy is endemic to computational models of cognition and cannot be removed by the standard recursive decomposition arguments. They are addressed to a different question.
We cannot avoid the foregoing results by supposing that the brain is doing "information processing". The brain, as far as its intrinsic operations are concerned, does no information processing. It is a specific biological organ and its specific neurobiological processes cause specific forms of intentionality. In the brain, intrinsically, there are neurobiological processes and sometimes they cause consciousness. But that is the end of the story.
--
regards,
j [theidoctor.ca].
Re:mind vs brain (Score:2)
That said, I also think it is likely that we will see "sentient", whatever that means, computers someday, whether this means that they appear for all intents and purposes to possess human sentience or that they actually are sentient. The problem I see in our current endeavors is that we are working in bina
Re:mind vs brain (Score:2)
You obviously can't simulate the entire universe that way, and we don't have the technology to build a brain yet (aside from the old fashioned way). But if something exists it can exist twice.
QED (maybe)
Re:mind vs brain (Score:2)
Re:mind vs brain (Score:2)
I wasn't talking about vat-growing things, I mean actually measuring all the components and exactly reproducing them, down to an atomic scale where needed. Not physically impossible.
You can simulate a particular chair by measuring the chair, determining what materials it's made of, and creating a duplicate. You don't really need atomic accuracy in this case. You can then simulate the original chair - i.e. s
Re:mind vs brain (Score:2)
Re:mind vs brain (Score:2)
Re:mind vs brain (Score:3, Insightful)
Question - if your mind is "something that exists" that you know about, can it therefore be simulated in your mind? Certainly - you can take a guess at what you would do in a hypothetical situation, presumably by simulating your decision process at the tim
Re:mind vs brain-The Analog Hole. (Score:2, Interesting)
Re:This Slashvertisment was brought to you by... (Score:2)
I've got one point left and I've decided to reply instead.
The concept is interesting and I didn't even notice any sort of profit-linking. I was too caught up in the idea, and I don't buy books online. Monkey, you get a -1 in my book for this one (grandstanding? first-posting?).
Re:collaborative filtering (Score:2)
Re:A Beowulf cluster of PS3s (Score:2)
No. Just having a bunch of computers hooked together will produce nothing without some software.
We don't have software - but then, we're not general purpose devices. We're bred with some intrinsic stuff: find food. eat food. sleep. have sex.
There's other stuff in there, too: Poke things (they might be food). Look for new things (you might be able to eat/sleep on/have sex with them). Speak with other people (you might be able to eat/sleep with/have sex with them). Protect your territory (it's w
Re:Slavery? (Score:2)
I tend to agree, but we still have the notion of wage-slave. The ideal workplace (for me) would be one where I could show up whenever I wanted, do some amount of paid-for work until I had sufficient funds, and then leave until I needed the income again.
In reality, if I want to have some kind of income, I need no
Re:The Hard AI Problem (Score:2)
Is it? I thought Cyc was a knowledge-base and a theorem-prover. The theorem-prover manipulates symbols in the knowledge base.
In comparison, a human only manipulates symbols when doing arithmetic (and even mathematicians admit that arithmetic is something they do when doing proofs, but when approaching a new problem, they try to visualize it or "understand" it).
If Cyc is told to find a route between points A and B on a map, it would use its theorem-prover to manipulate
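Route-finding really is doable by nothing but symbol manipulation. A toy breadth-first sketch (the map is invented; a theorem-prover like Cyc's would instead derive the path as a chain of "connected(X, Y)" inferences):

    from collections import deque

    # An invented toy road map as an adjacency list.
    roads = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

    def route(start, goal):
        queue = deque([[start]])
        seen = {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in roads[path[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(route("A", "D"))  # ['A', 'B', 'D']

At no point does anything here "understand" what a road is; whether that matters is the whole debate.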