The Semantics Differentiation of Minds and Machines

John David Funge writes "In Dr David Ellerman's book Intellectual Trespassing as a Way of Life there are a number of interesting essays. But there is one particular essay, entitled "The Semantics Differentiation of Minds and Machines," that caught my attention and should be of interest to Slashdot readers. In that essay, Dr Ellerman claims that "after several decades of debate, a definitive differentiation between minds and machines seems to be emerging into view." In particular, Dr Ellerman argues that the distinction between minds and machines is that while machines (i.e., computers) make excellent symbol manipulation devices, only minds have the additional capacity to ascribe semantics to symbols." Read the rest of John's review.
Title: Intellectual Trespassing as a Way of Life
Author: David P. Ellerman
Pages: 290
Publisher: Rowman & Littlefield Publishers, Inc.
Rating: 7
Reviewer: John David Funge
ISBN: 0847679322
Summary: Dramatic changes or revolutions in a field of science are often made by outsiders or "trespassers".


However, Dr Ellerman's argument appears circular. In particular, Dr Ellerman seems to have decided that, by definition, the only possible semantic interpretation for any collection of wires, capacitors, transistors, etc. that we would commonly refer to as a "computer" is as nothing more than a symbol manipulation device. While a computer is indeed (at the very least) a symbol manipulation device, what is there to prevent another mind from ascribing additional semantic interpretations to the collection of wires, capacitors, transistors, etc. that we commonly refer to as a "computer"? In particular, what if my mind were willing to make the semantic interpretation that a computer is a device that can both manipulate symbols and ascribe semantics to them?

Moreover, what if I one day met a collection of blood vessels, skin, bones, etc. called Dr Ellerman? What would prevent me from ascribing to him the semantic interpretation that he is nothing more than a symbol manipulation device? After all, Dr Ellerman concedes that there may be no way of distinguishing minds from machines purely on the basis of behavior. That is, he specifically acknowledges that computers may one day pass the Turing test. So why would my mind not then be able to legitimately ascribe any semantic interpretation (that fits the observed behavior) I see fit to either humans or machines?

It seems that Dr Ellerman's essay considers two different types of physical devices that are potentially indistinguishable on the basis of behavior. It then arbitrarily defines one type of device (computers) to correspond to nothing more than symbolic manipulation and the other (human brains) to have the additional ability to ascribe semantics. Upon adopting these two axioms, he is then (somewhat unsurprisingly) able to conclude there is a distinction! But the distinction simply arises from the fact that he has arbitrarily defined a distinction in the first place.

In another essay in the collection, entitled "Trespassing against the Happy Consciousness of Orthodox Economics," Dr Ellerman argues that modern Western societies are not as free from slavery as orthodox economics would have us believe. In particular, he concludes that work in non-democratic firms is nothing less than a form of "temporary voluntary slavery". It would be ironic, therefore, if his essay on minds and machines were one day used to justify the slavery of (non-human) machines. Indeed, Dr Ellerman's characterization of the supposed intrinsic differences between humans and machines is sadly reminiscent of the despicable and unscientific arguments about intrinsic racial differences that were once used to justify human slavery.


You can purchase Intellectual Trespassing as a Way of Life from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
  • semiotics (Score:3, Informative)

    by gEvil (beta) ( 945888 ) on Friday January 20, 2006 @02:44PM (#14521245)
    I believe the proper term for this field is semiotics, [wikipedia.org] the study of the assignation of meaning to symbols and signs.
    • by Trepidity ( 597 )
      That is a term for it, and the distinction is more cultural and historical than scientific. European research into this collection of areas often is called "semiotics", and has a particular tradition. Anglosphere research into such areas has another tradition, and the term "semiotics" is rarely heard. Instead, various portions of such research take place under the aegis of "linguistics" (incl. semantics, and studying more than just traditional languages), "philosophy of language", "philosophy of mind", a
  • What?! (Score:2, Funny)

    What?! Does this mean no Sky-Net?!
    • Re:What?! (Score:2, Interesting)

      I believe that, while computers are a long way from it, artificial intelligence will eventually be able to properly attribute and understand symbols and symbolism.

      Part of it comes from an animal's and a human's instinct of matrixing, or interpreting input to formulate the situation. If there is a shake in the bushes, an animal will watch and try to decipher the form of a friend or of a foe. The same goes for symbolism in society. We attribute meanings to symbols because it's in our nature to do so. It allo

      • I would think that the animal's reaction would be different depending on if the shake was chocolate or vanilla or strawberry. I find chocolate tends to be quite friendly.

        What a great example of your last sentence.
      • I believe that, while computers are a long way from it, artificial intelligence will eventually be able to properly attribute and understand symbols and symbolism.
        Then again, cockroaches may one day evolve into superintelligent beings as well. While recognizing the eventual potential of computers, I think it's perfectly fair to distinguish between minds and computers based on their present capabilities.
  • by Vengeance ( 46019 ) on Friday January 20, 2006 @02:47PM (#14521274)
    While there was long believed to be some sort of mystical special quality to organic molecules, eventually we figured out that chemistry is chemistry, and that simply by using carbon we get interesting possibilities.

    I (so far) have not seen any reason to suppose that the difference between 'thought' and 'computing' is any different. Incorporate enough complexity in the right sort of organizational framework, and the two should be interchangeable.
    • by jd ( 1658 ) <imipakNO@SPAMyahoo.com> on Friday January 20, 2006 @03:20PM (#14521584) Homepage Journal
      Humans are excellent at differentiating between things that are really the same, or inventing totally new layers of reality because of flawed assumptions about the way the world works. Today, I think we've gone beyond needing to think of fire, earth, air and water as being the four elements from which all physical matter is constructed, and light does not need an aether to "travel through".


      For that reason, any attempt to differentiate the mind and computers by using comparisons that aren't really meaningful or applicable should be thrown out. Maybe computer-based intelligence will never exist, but if that is the case, it won't be for the reasons we're being given.


      For example, looking at the high-level functionality of the brain and comparing it with the transistors of a computer is an absolute give-away that the author isn't going to let the facts get in the way of a good story. The low-level mechanics of the brain (the chemical and electrical signalling) can be reasonably compared to the low-level mechanics of a computer, because it is valid to compare like with like. For the same reason, it would be fair to compare the Operating System of a computer to the ancient "reptilian" core of the brain. Both are designed for housekeeping operations and are used by the higher levels to mask implementation details. And so on up through the layers.


      It should also be kept in mind that the human brain is capable of almost ten times the throughput of a top-of-the-line supercomputer. Given that one of the limiting factors of parallel architectures is the interconnect, it does prove that our networking technology is still extremely primitive. This is important, because it is going to be hard to build a machine that can "think" like a human if we have the "neural" interconnects of a Diplodocus.


      At the current rate of technological progress, I do not believe we will have a computer powerful enough to model the human brain until 2015 or 2020. Even then, it'll be a Government-owned supercomputer likely used for weapon simulation. We won't see Strong AI researchers get hold of such machines until maybe 2060 and (if the usual development patterns hold) nobody will have any idea how to turn the raw computing power into something useful until 2100 at the earliest.


      So, really, the earliest we could possibly really know (for certain) that the mind is (or isn't) like a machine is 2100. Anything stated with certainty before then is pure ego-stroking for the group of supporters attached with one camp or the other. Doubly so when it is provably and obviously intended to be deceptive.


      The only problem I see with debating the matter from an intellectually honest standpoint until then is that current global warming models put most of the planet under water or under rainforest by 2100, which means that we might never really know the results of the research anyway.

      • At the current rate of technological progress, I do not believe we will have a computer powerful enough to model the human brain until 2015 or 2020.

        And even if we do, that is unlikely to be enough. The human brain is very much not a closed system. It depends on a terribly complex series of feedback mechanisms within the body, not to mention interaction with an environment that can meaningfully impinge on it.

        My point isn't to claim that this makes it impossible to model (that is, after all, an empirical

        • I would add that, in the right context, it would be very useful. Right now, vehicles and the like must be operated by humans.

          There may come a day when simple transport machines will be able to get from one place to another, fuel themselves, and become distressed in emergencies (triggering useful evasive behavior and such).

          Since all of the above are in the province of a fruit fly and we can provide senses many orders of magnitude better than those of a fruit fly (think GPS), this has near term practical con
      • While there are no estimated timescales here [bluebrainproject.epfl.ch], beyond simulating a neocortical column in two to three years, I would expect to knock a few decades off of your estimates if the project is a success. This is one of the most interesting research projects using the latest Blue Gene hardware. Another factor that will make these results occur at an earlier date than expected is the aspect of simulation speed. To get interesting results from a simulated brain would not require a 1:1 ratio between simulation
      • At the current rate of technological progress, I do not believe we will have a computer powerful enough to model the human brain until 2015 or 2020. Even then, it'll be a Government-owned supercomputer likely used for weapon simulation. We won't see Strong AI researchers get hold of such machines until maybe 2060 and (if the usual development patterns hold) nobody will have any idea how to turn the raw computing power into something useful until 2100 at the earliest.

        I think you might be underestimating acce [wikipedia.org]
      • At the current rate of technological progress, I do not believe we will have a computer powerful enough to model the human brain until 2015 or 2020.

        I think you misspelled 'amphibian'
      • The fundamental question that I see is to define "intelligence" and "thought". If an AI reacts within parameters that perfectly simulate an emotional response, will a human witness react to that apparent emotional state?

        If the AI subsequently follows a reasonable state change/learning of responses to simulate the emotional components of a long-term relationship (including conflicts and resolution), will the human consider the machine a true "friend" from an emotional perspective?

        If humanity as a whole

    • "I (so far) have not seen any reason to suppose that the difference between 'thought' and 'computing' is any different."

      I totally agree. Besides, saying that a computer is incapable of assigning semantics to symbols is incorrect, IMO.

      I'll use a compiler as an example. A programming language is full of both syntax and semantics, and a compiler must be able to deal with both in order to understand a line as an instruction rather than a bunch of characters. It is giving a very real and purposeful meaning t
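
      A rough sketch of that distinction in a toy setting (the mini-language and names below are invented purely for this illustration, not taken from any real compiler): a tokenizer handles the syntax, and an evaluator is where the semantics get attached.

        # Toy illustration: syntax is recognizing the form "NUM (op NUM)*";
        # semantics is mapping '+' and '*' onto actual arithmetic.
        import re

        def tokenize(src):
            return re.findall(r"\d+|[+*]", src)

        def evaluate(tokens):
            # Semantics: each operator symbol is given a meaning, applied left to right.
            result = int(tokens[0])
            for op, num in zip(tokens[1::2], tokens[2::2]):
                result = result + int(num) if op == "+" else result * int(num)
            return result

        print(evaluate(tokenize("3 + 4 * 2")))  # 14 (no operator precedence in this toy language)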

    • I (so far) have not seen any reason to suppose that the difference between 'thought' and 'computing' is any different.

      Why should it be, if you're talking just about the mechanism of "thought" (instead of self-awareness and consciousness)? You have sense inputs which you tie in to events (other sense inputs or inputs created internally) in a causal fashion (A follows B). Then you work your way around the environment by using these patterns to "predict" things (e.g. if you always get an electric shock after

    • The only important difference to me is between natural and artificial minds. Whether it runs on a computer or not just affects whether it is simulated or real. This solves the upcoming problem over basic human rights too... only natural minds have human rights, whether real or simulated. So if somebody gets brain-scanned into a computer and simulated then they should have all the normal rights that other computer-minds will not have.

      Of course the fact that we will soon be able to simulate a person's mind
      • Well, the problem arises when you have more than one copy of a brain. Does each copy have all the rights of the original? What if one copy breaks a law? Is that copy "in jail" or do all copies need to be punished? What happens if you make a copy of a mind and change a few neurons around and it very closely resembles a human mind, does this mind still get all the rights of a human? What if we design a "mind" that is far superior to human minds? Should this "mind" not get human rights too? I think you've over
      • The only important difference to me is between natural and artificial minds. Whether it runs on a computer or not just affects whether it is simulated or real.

        So what exactly is a computer? Biotechnology advances so rapidly that we already have DNA computing (used to solve an instance of the NP-complete Hamiltonian Path problem in 1994). Are DNA computers natural or artificial? Obviously, both sides can be argued, and the distinction will continue to blur further, as we get a better understanding of microbiology and

  • by Baldrson ( 78598 ) * on Friday January 20, 2006 @02:48PM (#14521288) Homepage Journal
    Peter Turney's Learning Analogies and Semantic Relations [cogprints.org] falsifies Ellerman's assertion that semantics is out of the reach of engineering. Turney's more recent Human-Level Performance on Word Analogy Questions by Latent Relational Analysis (Warning: PDF) [nrc-cnrc.gc.ca] shows an engine performing about as well as college-bound seniors taking the SAT verbal analogies test.

    For a review of Peter Turney's group's accomplishment see "AI Breakthrough or the Mismeasure of Machine? [newsvine.com]"
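
    A very rough sketch of the underlying idea (this is not Turney's actual LRA pipeline, and the word pairs and feature vectors below are invented toy numbers): represent the relation within a word pair as a feature vector, then pick the candidate pair whose vector is closest to the stem pair's.

      # Toy relational-similarity sketch: compare invented "relation feature"
      # vectors by cosine similarity to answer a mason:stone :: ?:? analogy.
      import math

      def cosine(a, b):
          dot = sum(x * y for x, y in zip(a, b))
          norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
          return dot / norm if norm else 0.0

      stem_vec = [0.9, 0.1, 0.4]  # made-up features for the stem pair ("mason", "stone")
      choices = {
          ("carpenter", "wood"): [0.8, 0.2, 0.5],
          ("teacher", "chalk"): [0.2, 0.7, 0.1],
      }
      best = max(choices, key=lambda pair: cosine(stem_vec, choices[pair]))
      print(best)  # ('carpenter', 'wood') under these toy numbers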

    • The term "semantics" seems to be misused to indicate some notion external to the machine's system in an attempt to ascribe special abilities to the human intellect, sort of like how the "soul" is used to connect humanity to the divine. Semantics are expressed simply as a system of conversion between one system and another. How this becomes mystified in relation to computers is that the second system is the natural world, about which computers have little knowledge, lacking natural senses and innate evalua
  • Upon consideration (Score:3, Insightful)

    by SpinyNorman ( 33776 ) on Friday January 20, 2006 @02:49PM (#14521297)
    I've evaluated this claim in light of the mind being the product of a neural machine, and have determined it to be a load of bollocks.
  • turing test (Score:3, Insightful)

    by dirvish ( 574948 ) <dirvish&foundnews,com> on Friday January 20, 2006 @02:50PM (#14521302) Homepage Journal
    That is, he specifically acknowledges that computers may one day pass the Turing test.

    A computer will one day be sophisticated enough to manipulate symbols sufficiently to pass the Turing test. I don't believe that means it is sentient and/or has a mind. It may be time to move beyond the Turing test as the rule for artificial intelligence.
    • by Doc Ri ( 900300 )
      It may be time to move beyond the Turing test as the rule for artificial intelligence.

      Especially since there is a considerable number of humans that would not pass the Turing Test.
    • The Turing Test is very simply expressed but has a large number of possible variations on the basic idea: A computer can be considered sentient when a human can't tell the difference between a computer and a human.

      So far, this has been limited to mere conversation, but there are all sorts of things where a computer can be tested against human cognition. Jokes, song and poetry composition, empathy and sympathy, vindictiveness, anxiety, joy, and depression are all areas that computers need severe improvement
    • There's of course a lot of discussion in epistemology about whether the Turing test is meaningful at all - Searle's Chinese Room paper discussed the idea of whether or not correctly interpreting input can be said to be intelligence.

      But basically it comes down to a deeper philosophical divide. Turing took the pragmatic approach of assuming that anything that was indistinguishable from a thinking being was, for all intents and purposes, a thinking being. Other philosophers might feel that the pragmatism isn't nece
  • False presumption (Score:5, Insightful)

    by blair1q ( 305137 ) on Friday January 20, 2006 @02:50PM (#14521304) Journal
    Semantics are associations between symbols.

    So whatever this guy is on about, he's got it wrong.

    Computers are perfectly capable of making fuzzy inferences from loose associations.

    With a greater understanding of real connections, they will be better able to weed out the fuzzy associations and strengthen the remaining ones.

    This is how intellectual learning works.

    And there's no reason a computer can't simulate it better than a human can.
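
    For instance, here is a minimal sketch of what fuzzy inference over weighted associations might look like (the symbols and weights are invented for illustration):

      # Weighted associations between symbols; a two-step inference takes the
      # minimum strength along a path, a common fuzzy-logic convention.
      associations = {
          ("smoke", "fire"): 0.8,
          ("fire", "danger"): 0.9,
          ("smoke", "barbecue"): 0.5,
      }

      def inferred_strength(a, c, links):
          # Strength of the best inference a -> b -> c over all intermediate b.
          best = 0.0
          for (x, y), w1 in links.items():
              if x == a:
                  w2 = links.get((y, c), 0.0)
                  best = max(best, min(w1, w2))
          return best

      print(inferred_strength("smoke", "danger", associations))  # 0.8
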
    • Re:False presumption (Score:3, Interesting)

      by radtea ( 464814 )
      Semantics are associations between symbols.

      Semantics are actions. "Associations between symbols" is mathematics, and pure mathematics at that: a closed universe of symbols that can be manipulated according to rules. Semantics, on the other hand, is what the symbols impel us to do. Speech is, of course, action, so semantics can impel us to argue, as well as to run away, juggle, or seduce (well, not anyone on /.), or whatever.

      What something means is what we do, how we act, when we grasp the meaning.

      This
  • by nexarias ( 944986 ) on Friday January 20, 2006 @02:54PM (#14521347)
    I don't think the reviewer has demonstrated adequate mastery of the subject (artificial intelligence) and its present studies. For example, the problem of assigning meaning to symbols is a BIG one, and the defining of computers as symbol manipulators is NOT arbitrary. This problem first arose when Thomas Hobbes talked of the mind as a symbol manipulator and Descartes rubbished his argument, pointing out the problem of Original Meaning (how symbols come to indicate this or that in the first place).

    Computers as symbol manipulators is also an idea that arose from John Searle's "Chinese Room argument". Perhaps one of the best contemporary discussions is by John Haugeland in his book "Artificial Intelligence: The Very Idea".

    Overall, a seemingly immature review of the book. Disappointing.

    • What do you mean? I'm aware that philosophers tend to think the problem of assigning meaning to symbols is a big deal, but how are they right? When a symbol has meaning all that means is that some mind (a human, for example) associates one idea (the visual stimuli of the character '3') with another idea (the mental concept of the number three). One thing associates a second thing with a third. Nothing which can't be trivially duplicated in an arbitrary mechanism.

      That said, you're right that this review is s
      • Here are a couple of points to ponder about semantics:
        1. The claim that semantics is entirely arbitrary necessarily implies that any association of symbols with ideas is equally valid and useful. (If not, then there is an underlying relationship between at least some symbols and some ideas, which strips away the "entirely" from "arbitrary.") Yet, in every language, the words for different numbers are linked together ("three", "thirteen", "thirty-three"). The words for family members are linked together ("m
  • by chriss ( 26574 ) * <chriss@memomo.net> on Friday January 20, 2006 @02:56PM (#14521363) Homepage

    Basically: a symbol is a variable and can hold any value. If a system knows that Dolly is a sheep and that sheep are animals and that animals eat, it can guess that Dolly eats. But it cannot tell if Dolly is a plane, unless someone somewhere made that relation (planes are machines, machines are not living beings, animals are living beings, so Dolly can't be a plane). It would need an unlimited number of rules.

    A human "knows" about the meaning (semantic) of the symbol "sheep". Although this has never been discussed, he could answer that a sheep will not stand still if set on fire. The question is how the human is able to tell this. He does not need a sharp line of arguments.

    But maybe he simply uses an enormous number of small rules that seem to form something more complex, called semantics in the sense of the article. The OpenCyc [opencyc.org] project assumes this and tries to teach a machine millions of small rules (assertions and concepts) to create a sort of common sense based on a real world view (which requires "knowing" about the world) in software.
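
    A minimal sketch of the Dolly-style rule chaining described above (not the real OpenCyc API; the handful of facts here are invented for illustration):

      # "is-a" facts plus a transitive walk up the chain to look up properties.
      is_a = {"dolly": "sheep", "sheep": "animal"}
      properties = {"animal": {"eats"}, "machine": {"needs_fuel"}}

      def has_property(thing, prop):
          # Walk up the is-a chain until the property is found or the chain ends.
          while thing is not None:
              if prop in properties.get(thing, set()):
                  return True
              thing = is_a.get(thing)
          return False

      print(has_property("dolly", "eats"))        # True: dolly -> sheep -> animal
      print(has_property("dolly", "needs_fuel"))  # False: nothing links Dolly to machines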

    • Basically, your post ignores some of the most important advances in the last 30 years of AI research, so it is hard to find credible. Here's a hint: hardly anyone uses rule-based AI anymore. In fact, the exact behavior you describe (drawing inferences) is the very basis for an expert system, i.e., using an array of different facts and combining them to reason to a conclusion. The only "rules" involved are universal laws of logic (e.g. if a is a subset of b, and b is a subset of c then a is also a subset of c, et
  • by ENOENT ( 25325 ) on Friday January 20, 2006 @02:57PM (#14521374) Homepage Journal
    When a human makes a mistake, it immediately pours massive processing power into either formulating arguments about why it's not a mistake or finding someone else to blame for it.

    Machines tend not to do this.
  • by Anonymous Coward
    *COUGH*Searle*COUGH
  • by Impy the Impiuos Imp ( 442658 ) on Friday January 20, 2006 @03:00PM (#14521410) Journal
    The AI community has suggested that what humans believe is some kind of "deep understanding" is nothing of the sort. We have just learned to push symbols around, too.

    Consider the "deep understanding" of simple mathematics. But is your instant recall of 6 x 8 (assuming you can) anything deep, or just memorized, along with the symbol pushing to mechanically figure out tougher problems?

    The problem lies in tying a "symbol" in the mind (which may be more than literally a string of characters, but is nonetheless an object) to something "out there". That's the tough issue, not the symbol pushing itself, necessarily.
  • by digitaldc ( 879047 ) on Friday January 20, 2006 @03:01PM (#14521417)
    It would be ironic therefore if his essay on minds and machines were one day used to justify the slavery of (non-human) machines.

    A machine will work diligently until it physically breaks or encounters an error.

    A man will figure out a way to avoid the work by creating a machine to do it for him, and then quickly move on to more pleasurable activities.
  • Pot, meet Kettle (Score:4, Interesting)

    by Artifakt ( 700173 ) on Friday January 20, 2006 @03:07PM (#14521464)
    "After all, Dr Ellerman concedes that their may be no way of distinguishing minds from machines purely on the basis of behavior."
    "It seems that Dr Ellerman's essay considers two different types of
    physical devices that are potentially indistinguishable on the basis
    of behavior. "


    It seems that the reviewer considers mind and brain to both be purely physical things, and indeed synonyms - physical devices that are thus potentially indistinguishable on the basis of behavior. Upon adopting this axiom, he is then (somewhat unsurprisingly) able to conclude there is no distinction! But the lack of a distinction simply arises from the fact that he has arbitrarily defined mind and brain into a single category in the first place.

    Review translated: Trust me, I don't have any underlying assumptions like he does, so I'm right and he's wrong, PH33R MY L33T PH1L0S0PHY SKILZ!
  • by Dr. Spork ( 142693 ) on Friday January 20, 2006 @03:08PM (#14521470)
    Sorry, I teach philosophy in college and I read student essays like this every semester. This one seems reasonably insightful, probably B+ (though I haven't read the book myself, so I can't say whether it misrepresents the position).

    But what's really on my mind is this: Read the table of contents - this book could not possibly be anything but crap. I mean, what sense does it make to have one chapter called "Chapter 3: The Libertarian Case for Slavery" and once you're done with musings on economic theory, you toss off a Chapter 7 where you casually present your solution to the question about the difference between minds and machines? How promising is that? Not very. So while the review author may have torn this chapter a new orifice (and the thesis surely has many other problems to boot), I must say that I do not toast his choice of reading. This is crap that was ignored in 1995, and just because it's a $2.95 special at the used book store doesn't mean we need to hear the following on Slashdot:

    Newsflash: Some crank wrote a stupid book 11 years ago and I found there is a problem with one of the chapters!!!!! Read on!!!!!

    I'd have more sympathy if the text were available online so we could RTFA and have a substantive discussion, but in the absence of that, our only option is to flame those responsible.

  • by dmoen ( 88623 ) on Friday January 20, 2006 @03:10PM (#14521489) Homepage
    The "intelligent design" crowd is a group of people who, for religious reasons, refuse to believe that human beings and animals belong to the same category. Since, to them, it's inconceivable that humans evolved from non-human animals, the theory of evolution must be overthrown, and another theory erected in its place.

    There is a similar thing going on with people who study how the human mind works. Some people, for religious reasons, refuse to believe that human beings and machines belong to the same category. Humans have souls, and machines do not. Therefore, a computer can never be programmed to have all the qualities of the human mind. It's harder to see this as a religious issue, since some of the people who hold this position are atheists who claim not to believe in souls or the supernatural. But what makes this a religious issue is that there is no amount of scientific evidence that can ever convince these people otherwise.

    Anyway, the two camps have been arguing about this forever. It's impossible for a member of one camp to "convert" a member of the opposite camp using rational argument. So they resort to insults. People in the "strong ai" camp accuse the other camp of being Cartesian dualists, or believing in a supernatural soul. People in the "dualist" or "mysterian" camp accuse the strong ai folks of denying the existence of human consciousness and self awareness. According to the dualists, strong ai folk believe that humans are just machines, so humans can't be conscious in any real sense, don't have free will, and can't be morally responsible for their own actions. Some (stupid) strong ai folks even agree with these insults directed against them, which makes the debate more complicated, and more infuriating. The issue of moral responsibility, which is always bubbling under the surface of these debates, shows how this is really a religious issue at a deeper level.

    For the record, I am a strong ai person who believes that human beings are deterministic machines who have consciousness, free will, and moral responsibility.

    If you would like to read some good books that back up my position, see:
    - How the Mind Works, by Pinker
    - Freedom Evolves, by Dennett

    Doug Moen
    • Determinism and free will are directly conflicting ideas. Please explain.
    • There is a similar thing going on with people who study how the human mind works. Some people, for religious reasons, refuse to believe that human beings and machines belong to the same category. Humans have souls, and machines do not. Therefore, a computer can never be programmed to have all the qualities of the human mind. It's harder to see this as a religious issue, since some of the people who hold this position are atheists who claim not to believe in souls or the supernatural. But what makes this a r
    • The problem with the materialist school is that their opinion allows for this logic:

      1.People are machines.
      2.Machines are for doing our bidding.
      Ergo,
      3.People are for doing our bidding.

      When a machine can say "I am." I will defend its soul and free will as much as I would that of any meatbag, but until then the "mechanistic humans" school are just a cover for those who would like to enslave human beings.
    • I just listened to an interview with Dennett and although I agree with you that he sees consciousness as something unmeasurable and distinct from processing (because it is intrinsically subjective), I didn't get the impression that he felt that a computer COULD NOT in principle have consciousness. He seemed quite open-minded on those sorts of issues. Maybe rats have consciousness. Maybe not. Maybe an advanced computer could acquire consciousness. Maybe not. Maybe bacteria or chess programs have consciousnes
      • Dennett is a materialist. He believes that machines can in principle be built that have consciousness, because there is no fundamental distinction between humans and machines: humans and machines are both made of matter, and their properties derive entirely from their structure and the properties of matter. In the book that I cited, Dennett argues that free will is compatible with determinism, and he argues that free will is not an all or nothing proposition. Humans were not created, ab initio, by God, w
  • by NiteShaed ( 315799 ) on Friday January 20, 2006 @03:11PM (#14521503)
    Talking about machine intelligence is tricky in that we generally only consider *human* intelligence (which makes sense considering that's what we are). In John Varley's "Steel Beach", he suggested The Invaders (a mysterious species of aliens) might not consider humans an intelligent species, but looked at us as just another engineering species like bees, meaning intelligence is really dependent on your point of view. What we're really talking about when most people say Artificial Intelligence is actually more an issue of Artificial Humanity.
  • Jeremy Campbell's Grammatical Man, on entropy in information systems, goes a long way towards explaining both linguistics and even genetic concepts in communications, of which semiotics is a discipline. Although this book is out of print, it does much to explain how we arrive at correct information when assembling data, and how various communications systems avoid entropy.

    The Deus ex Machina (or vice versa) rage really has to do with context vs. perceptions. We can all be robotic and make our
  • by pieterh ( 196118 ) on Friday January 20, 2006 @03:22PM (#14521601) Homepage
    "Humans are different from X because they can do Y", where X is variously "animals", "machines", and Y is variously, "make tools", "use language", "play chess", "murder", or whatever.

    It's a silly exercise because there is nothing specific about humans except their ability to interbreed with other humans. That is all that technically defines us as a species, and even that definition is fuzzy, ignoring people who are sterile, too old or young to breed, or who never leave their keyboards long enough to look for a mate.

    When it comes to the mind, emerging consensus is that it consists of a large number of well-designed tools, not some fuzzy blob of software. Most likely, each of these mental tools can be perfectly implemented as software. There are simply a huge number, and some are very, very subtle.

    We will, eventually, be able to simulate the whole human mind in software, in the same way as we'll eventually be able to construct robotic bodies that work as well as human bodies, by re-implementing the structures that our genes build, one by one. The best way to construct a robotic hand that works like a human hand is to reimplement a human hand. The best way to construct a robotic mind that works like a human mind is to reimplement a human mind. This is perhaps self-evident but it's not always been accepted.

    As for the arbitrary distinctions, this is just a belief system, an attempt to create a soul, however you phrase it.
  • by markdj ( 691222 )

    The difference between humans and machines is NOT semantics. If that were it, building human-like machines would be easy. And in fact for small trivial universes, this has been done.

    The big difference is context. Many words in human languages only acquire meaning from their context. That includes not only their place in the syntax, but their place in the semantics.

    We currently don't understand how we humans remember contexts and how we apply symbols to the various contexts with which we are acquainted,
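
    As a toy illustration of that point about context (the word senses and cue words below are invented), the same symbol can be given a different reading depending on its neighbours:

      # The symbol "bank" resolves to different meanings depending on context cues.
      senses = {
          "bank": {"river": "side of a river", "money": "financial institution"},
      }

      def read(word, context_words):
          # Pick the first sense whose cue word appears in the surrounding context.
          for cue, meaning in senses.get(word, {}).items():
              if cue in context_words:
                  return meaning
          return "unknown sense"

      print(read("bank", {"canoe", "river"}))  # side of a river
      print(read("bank", {"loan", "money"}))   # financial institution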

  • It is surprising how the link between intelligence and the non-deterministic nature of the universe (translated to free will, in humans) is constantly treated as irrelevant - when in fact it is probably the most important factor of all. When technology to make intelligent machines comes of age, I think people are going to be in for a very rude surprise. They will be able to make machines that are "intelligent" but don't do what they want, or they are going to be able to make machines that do what they wa
  • Dijkstra quote (Score:2, Interesting)

    by Kintalis ( 592836 )
    One of my favorite Edsger Dijkstra quotes:
    "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."
    -K
  • this entire problem is equivalent to asking whether or not the human mind is entirely physical, or if the mind and brain are separate (and thus the mind is separate from the body), otherwise known as the mind/body problem. if the mind is not separate, then theoretically, there should be no trouble simulating a human mind with a computer. so, in order for the point to hold, the mind needs to be separate. a few years back, i had a discussion with one of my philosophy profs, and what emerged was an interesting ar
  • by tiltowait ( 306189 ) on Friday January 20, 2006 @03:45PM (#14521809) Homepage Journal
    Am I the only one here with internal experiences? Everyone else seems to readily equate the mind with a machine.

    Don't get me wrong, I don't believe in mystical powers or anything. I accept the need for physical verificationism and the primacy of matter, and am a fan of Ockham's razor.[1 [tk421.net]] But there are some phenomenological properties of my experiences that sure ain't physical.
    • But there are some phenomenological properties of my experiences that sure ain't physical.

      I doubt it. In fact, I think your mind is nothing more than a wad of neural addition machines dutifully computing sums. I don't believe you that you have consciousness or self-awareness, and I challenge you to prove otherwise, knowing that you will be just as unable to do so as will the first machine to assert the same only to face a similar challenge from you.

      • Words, words... (Score:2, Insightful)

        by tiltowait ( 306189 )
        My point was that subjective states can't be scientifically verified, just correlated to neural stuff. So your argument about this hypothetical perfect Turing machine is valid, but it sure doesn't negate that people have more feelings than a Chinese room [wikipedia.org] would.
        • We agree that we won't be able to prove whether machines have "sentience," "self-awareness," or whatever.

          I think we also agree that we can't prove whether any individual human has these traits.

          Why, then, do you assume that humans do but machines won't? At the very least, it seems to me that your assumption should be the same for both, since the behavioral cues are (by hypothesis) invariant.
    • I am my mind. An internal sensation such as "the feeling of blue" is nothing more than how my mind processes some visual input.

      The problem is that people have what might be called an epistemological bias. People see their mental states from the "inside," and thus when they see how my mental states look from the "outside," as just a bunch of neurons flashing around, they can't help but feel that there's something missing. But ultimately I think that the evidence suggests that there is an exact one to one cor
  • This is just another variation on Searle's Chinese Room argument. Like that one, it suffers from the fallacy of composition.

    So an electrical impulse mediated by wires and doped silicon is a "symbol", and an electrical impulse mediated by calcium ions and water has "semantics"?

    Sounds like prejudice to me.
  • You're coming dangerously close to ascribing a sanctity of human mind and free will there! People don't like hearing that they have free will or anything near a "soul" anymore, they'd rather find themselves equivalent with a bunch of wires so they can manipulate each other's brains (management, psychiatry, propaganda) without rousing their conscience!
  • (Disclaimer: I have been out of the cognitive science game for a long, long time, and was only a student even back then.)

    Based on the extremely short treatment his essay is given in the review, Ellerman's The Semantics Differentiation of Minds and Machines sounds like a tired rehash of Searle's "Chinese Room" argument [utm.edu] - that is to say, a restatement of an argument that I didn't find that compelling the first time around. Douglas Hofstadter, writing about Searle's essay, called it "religious diatribe agains
  • > ...while machines (i.e., computers) make excellent symbol manipulation devices, only minds have the additional capacity to ascribe semantics to symbols."

    should actually read,

    "Because ***I*** am too stupid to figure out how to make a machine ascribe semantics to symbols, only minds have the additional capacity to ascribe semantics to symbols."

    Arrogance is a wonderful thing. "I'm too stupid to figure it out, therefore it cannot be done."
  • Dr Ellerman claims that "after several decades of debate, a definitive differentiation between minds and machines seems to be emerging into view."

    Maybe that's the latest fashion in philosophy, but I'm afraid philosophers are a bit out of touch with reality there: machines have no problem assigning semantics to symbols, and even learning semantics from experience.
