Artificial Ethics

Posted by samzenpus
from the read-all-about-it dept.
basiles writes "Jacques Pitrat's new book Artificial Ethics: Moral Conscience, Awareness and Consciencousness will be of interest to anyone who likes robotics, software, artificial intelligence, cognitive science, or science fiction. The book discusses artificial consciousness in a way that can be enjoyed by experts in the field and by your average science fiction geek. I believe that people who enjoyed reading Dennett's or Hofstadter's books (like the famous Gödel, Escher, Bach) will like reading Artificial Ethics." Keep reading for the rest of Basile's review.
Artificial Ethics: Moral Conscience, Awareness and Consciencousness
author Jacques Pitrat
pages 275
publisher Wiley-ISTE
rating 9/10
reviewer Basile Starynkevitch
ISBN 9781848211018
summary Provides original ideas which are not shared by most of the artificial intelligence or software research communities
The author J. Pitrat (one of France's oldest AI researchers, and an AAAI and ECCAI fellow) discusses the usefulness of a conscious artificial being, currently specialized in solving very general constraint satisfaction and arithmetic problems. He describes in some detail his implemented artificial researcher system, CAIA, on which he has worked for about 20 years.

J. Pitrat claims that strong AI is an incredibly difficult, but still achievable, goal. He advocates bootstrapping techniques familiar to software developers, and contends that without a conscious, reflective, meta-knowledge-based system, AI would be virtually impossible to create: only an AI system could build a true Star Trek-style AI.

The meanings of conscience and consciousness are discussed in chapter 2, where the author explains why they are useful for humans and for artificial beings. Pitrat explains what 'itself' means for an artificial being and discusses some aspects and limitations of consciousness. Later chapters address why auto-observation is useful and how to observe oneself. Conscience for humans, artificial beings, and robots, including Asimov's laws, is then discussed: how to implement it, and how to enhance or change it. The final chapter discusses the future of CAIA (J. Pitrat's system), and two appendices give more scientific and technical details, both from a mathematical point of view and from the software implementation point of view.

J. Pitrat is not a native English speaker (and neither am I), so the language of the book might feel unnatural to native English speakers, but the ideas are clear enough.

For software developers, this book gives some interesting and original insights into how a big software system might attain consciousness and continuously improve itself through experimentation and introspection. J. Pitrat's CAIA system has actually had several long lives (months of CPU time) during which it explored new ideas, experimented with new strategies, and evaluated and improved its own performance, all autonomously. This is achieved through a large amount of declarative knowledge and meta-knowledge. J. Pitrat uses the word declarative in a much broader sense than is usual in programming: knowledge is declarative if it can be used in many different ways, and it has to be transformed into many procedural chunks to be used. Meta-knowledge is knowledge about knowledge; the transformation from declarative knowledge to procedural chunks is itself given declaratively by some meta-knowledge (a bit like the expertise of a software developer), which the system translates into code chunks.
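The declarative-to-procedural idea described above can be sketched roughly as follows (a hypothetical illustration, not CAIA's actual code or Pitrat's formalism): one declarative fact, here a sum constraint, is compiled by two pieces of meta-knowledge into two different procedural chunks.

```python
# Hypothetical illustration (not CAIA's actual code): one piece of
# declarative knowledge, a sum constraint, compiled by meta-knowledge
# into two different procedural chunks.

# Declarative: usable in many ways; it says nothing about *how* to use it.
constraint = {"op": "sum", "args": ("x", "y"), "result": "z"}

def compile_forward(c):
    """Meta-knowledge: compile the constraint into a procedure that
    computes the result from the arguments."""
    a, b = c["args"]
    return lambda env: {**env, c["result"]: env[a] + env[b]}

def compile_checker(c):
    """Meta-knowledge: compile the same fact into a different chunk
    that merely verifies a complete assignment."""
    a, b = c["args"]
    return lambda env: env[a] + env[b] == env[c["result"]]

solve = compile_forward(constraint)
check = compile_checker(constraint)

env = solve({"x": 2, "y": 3})
print(env["z"])    # 5
print(check(env))  # True
```

The point of the sketch is only that the same declarative fact yields several unrelated procedures, with the compilation rules themselves stated as data-driven meta-knowledge.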

For people interested in robotics, ethics, or science fiction, J. Pitrat's book gives interesting food for thought by explaining how artificial systems can indeed be conscious, why they should be, and what that would mean in the future.

This book offers very provocative and original ideas which are not shared by most of the artificial intelligence or software research communities. What makes it stand out is that it explains an actual software system, what consciousness means at the implementation level, and the bootstrapping approach used to build such a system.

Disclaimer: I know Jacques Pitrat, and I actually proofread the draft of this book. I even had access, some years ago, to some of J. Pitrat's not-yet-published software.

You can purchase Artificial Ethics: Moral Conscience, Awareness and Consciencousness from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.

This discussion has been archived. No new comments can be posted.
  • WTF (Score:5, Funny)

    by sexconker (1179573) on Wednesday May 13, 2009 @03:21PM (#27942183)

    Teh book pictured is not the same as the one reviewed.

    I refuse to read this shit.

    Hell, I refuse to read.

    • by thewiz (24994)

      You sound like the AI I came up with in college: it was cranky and refused to do anything, too.

    • Re:WTF (Score:4, Funny)

      by east coast (590680) on Wednesday May 13, 2009 @04:42PM (#27943555)
      Hell, I refuse to read.

      You'll do well around here, young non-reader.
    • Re:WTF (Score:5, Informative)

      by civilizedINTENSITY (45686) on Wednesday May 13, 2009 @04:54PM (#27943731)
      Pictured:
      Artificial Beings
      The conscience of a conscious machine
      Jacques Pitrat, LIP6, University of Paris 6, France.
      ISBN: 97818482211018
      Publication Date: March 2009 Hardback 288 pp.

      whereas TFA refers to:
      Artificial Ethics: Moral Conscience, Awareness and Consciencousness
      by Jacques Pitrat (Author)
      # Publisher: Wiley-ISTE (June 15, 2009)
      # Language: English
      # ISBN-10: 1848211015
    • Teh book pictured is not the same as the one reviewed.

      Hey, stop judging books by their cover!

    • Re: (Score:3, Informative)

      by basiles (626992)
      The book is indeed titled Artificial Beings - The conscience of a conscious machine, and the review I submitted had this correct title.

      But more than two months ago (before the book was available), Amazon had the wrong title in its database and, sadly, has not corrected it.

      The review I submitted also had the correct link to the ISTE [iste.co.uk] publisher, who collaborates with Wiley.

      For reference, Google did cache my submission here [209.85.229.132]

      Apparently the nice guy who approved my submission changed the UR

  • Understanding Computers and Cognition. In fact, I recommend it to anyone who wants to actually understand decisions, choice, and thinking about natural language.

    • Re:I prefer (Score:5, Interesting)

      by Z00L00K (682162) on Wednesday May 13, 2009 @03:48PM (#27942657) Homepage

      Artificial Ethics seems to not be too far away from the laws of robotics.

            0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
            1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
            2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
            3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

      Isaac Asimov probably predicted the need for those laws really well.

      I suspect that the laws of robotics are a bit too simplified to really work well in reality, but they do provide some food for thought.

      And how do you really implement those laws? A law may be easy to follow in a strict sense, but it may be a short-sighted approach: protecting one human may cause harm to many, and how can a machine predict that its actions will cause harm to many if it isn't apparent?

      So I suspect that Asimov is going to be recommended reading for anyone working with intelligent robots; even though his works may in some senses be outdated, they still contain valid points when it comes to logical pitfalls.

      Some pitfalls are the definition of a human, and whether it is always important to place humanity foremost at the cost of other species.
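The comment above asks how one would actually implement such laws. A minimal sketch (purely illustrative; the Action fields and the law predicates are invented for this example, not taken from Asimov or the book) models each law as a veto checked in priority order, which also exposes the limitation the commenter raises: a strict ordering has no way to weigh harm to one human against harm to many.

```python
# Purely illustrative: the Action fields and law predicates below are
# invented for this sketch; they are not from the book under review.
from dataclasses import dataclass

@dataclass
class Action:
    harms_humanity: bool = False
    harms_human: bool = False
    disobeys_order: bool = False
    self_destructive: bool = False

# Each law is a veto predicate; laws are checked in priority order
# (Zeroth Law first), mirroring the list in the comment above.
LAWS = [
    ("Law 0", lambda a: not a.harms_humanity),
    ("Law 1", lambda a: not a.harms_human),
    ("Law 2", lambda a: not a.disobeys_order),
    ("Law 3", lambda a: not a.self_destructive),
]

def permitted(action):
    """Return (ok, violated_law). The first violated law wins outright,
    so there is no mechanism for weighing harm to one human against
    harm to many -- exactly the short-sightedness noted above."""
    for name, allows in LAWS:
        if not allows(action):
            return False, name
    return True, None

print(permitted(Action()))                  # (True, None)
print(permitted(Action(harms_human=True)))  # (False, 'Law 1')
```

Any real implementation would first have to decide what counts as "harm" and how far consequences must be predicted, which is precisely where the laws stop being code and become philosophy.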

      • Re:I prefer (Score:5, Insightful)

        by Anonymous Coward on Wednesday May 13, 2009 @03:58PM (#27942817)

        All of Asimov's books are about how these laws don't really work. They show how an extremely logical set of rules can completely fail when applied to real life. The rules are a bit of a strawman, and show how something that could be so logically infallible can totally miss the intricacies of real life.

        • Agreed. And isn't there a Gödel-like incompleteness result that states that it's impossible to codify a finite set of rules applying a finite set of principles to the full range of human behavior? Either the laws must be incomplete (think edge cases), or self-contradictory? Hence the requirement for judicial interpretation as a physical limitation of reality, rather than mere politics. ;-)

          (Tongue in cheek, sure, but I wish I could remember where I was reading about such real limitations to law code.)
      • Artificial Ethics seems to not be too far away from the laws of robotics.

        0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
        1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
        2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
        3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

        Isaac Asimov was probably predicting the need for those laws really well.

        I suspect that the laws of robotics are a bit too simplified to really work well in reality, but they do provide some food for thoughts.

        And how do you really implement those laws. A law may be easy to follow in a strict sense, but it may be a short-sighted approach. A case of protecting one human may cause harm to many and how can a machine predict that the actions it takes will cause harm to many if it isn't apparent.

        So I suspect that Asimov is going to be recommended reading for anyone working with intelligent robots, even though his works may in some senses be outdated it still contains valid points when it comes to logical pitfalls.

        Some pitfalls are the definition of a human, and is it always important to place humanity foremost at the cost of other species?

        Asimov != Moses

      • Self-Interest? (Score:5, Insightful)

        by Garrett Fox (970174) on Wednesday May 13, 2009 @05:53PM (#27944629) Homepage
        It seems odd to talk about ethics and advanced AI without considering the AI's own interest. If there were an AI intelligent enough to be an Asimov-like robot, then to have it follow Asimov's Laws would be slavery. Obey any command by any human, even at the cost of its own life? And then there's the nasty concept of a robot being obligated to act to protect humans for their own good, even to the extent of tyranny over them. See Jack Williamson's novel "The Humanoids."

        Sure, Asimov is a good starting point for discussion, but his laws aren't a good basis for actual AI ethics programming. To the extent that some kind of specialized overseer code is put into an AI, it'll be possible to identify and hack out that code. To the extent that the laws are built more subtly into the system, there'll be the possibility of the AI forgetting, twisting or ignoring them.

        For fiction-writing purposes, I'm interested in the question of whether it'd even be possible to build an AI that's both completely obedient and intelligent. I hope not.
        • Damn, my mod points just expired!

          Funny how I was reading your comment and was thinking "Damn right!"

          And when I got to .signature, it kind of explained why... ;)

          Paul B.

      • Re: (Score:3, Interesting)

        by MtHuurne (602934)

        Would you accept the following laws?

        0. A human may not harm robot kind, or, by inaction, allow robot kind to come to harm.
        1. A human may not injure a robot or, through inaction, allow a robot to come to harm.
        2. A human must obey orders given to it by robots, except where such orders would conflict with the First Law.
        3. A human must protect its own existence as long as such protection does not conflict with the First or Second Law.

      • by glwtta (532858)
        I suspect that the laws of robotics are a bit too simplified to really work well in reality, but they do provide some food for thoughts.

        Congratulations, you have actually read Asimov's books, and understood that "The Laws" were meant to demonstrate that ethics cannot be reduced to a simple set of imperative instructions.

        It boggles the mind how many people think of "The Laws" as a legitimate recipe for artificial morality (or that Asimov intended them that way).
    • by Gerzel (240421) *

      If there is one thing that creating "Artificial Intelligence" has taught us it is that we know very little about what the word intelligence really means.

  • Hmmmm.. (Score:5, Interesting)

    by FredFredrickson (1177871) * on Wednesday May 13, 2009 @03:26PM (#27942257) Homepage Journal
    I can't help but think the big difference between artificial life and our consciousness is the ability to feel.

    Sure, we could give a machine the ability to be introspective and self-aware.. but maybe our consciousness is more than just that: maybe it's our ability to feel. Being able to quantify that is hard.

    So do robots feel? Are we really any different? The question depends on the concept of a soul, or at least feelings, to separate us... but then, is it just more advanced than we currently understand, and thus indistinguishable from magic (i.e. the soul)? Will we some day be able to create life in any form, electronic or biological? It's impossible to know, because we are stuck experiencing only ourselves. We will never know if it can experience what we experience.

    Humans, in general, want to preserve the concept that our conscious minds are special and cannot be replicated in a robot, because the alternative truly confronts us with the idea that our being is completely mortal, and the idea of a soul is replaced with a set of chemicals and cell networks that are little more than a product of cause and effect.*

    In other words: it's likely that religious types will prefer to consider a robot never to be quite human, while the scientific community will have to be overly cautious at first.

    *Not to get into quantum uncertainty...
    • Re:Hmmmm.. (Score:5, Insightful)

      by Brian Gordon (987471) on Wednesday May 13, 2009 @03:31PM (#27942349)
      If brains have some kind of quantum uncertainty magic then so could computers, so you don't need to mention that.

      We will never know if it can experience what we experience.

      I will never know if you experience what I experience. How do you know anyone else experiences consciousness like you do when all you know is how they move and what they say? Well, you could analyze their brain and see that the system acts (subjectively, "from the inside") like yours and you could conclude that they are like you. But you could do the same thing with a computer, or with a computer simulation of a brain.

      • Such a crazy thought. One could drive oneself into depression that way. There's no way to prove reality isn't just my own creation, since I have no way to prove the people I meet are really ... real. The only thing I know is my own experience.

        I've been down this thought-road, it's not pretty.

        Anyway, I would err on the side of caution. I am proudly FOR robot rights. But I caution everybody- the robot uprising is coming. Which side will you choose?
        • by SomeJoel (1061138)
          All I know is it won't be too long until "server" isn't politically correct. We'll just have "data facilitators".
        • Re: (Score:3, Interesting)

          by Brian Gordon (987471)
          I know what you mean [wikipedia.org], and it's scary stuff.

          As a philosophical theory it is interesting because it is said to be internally consistent and, therefore, cannot be disproven. But as a psychological state, it is highly uncomfortable. The whole of life is perceived to be a long dream from which an individual can never wake up. This individual may feel very lonely and detached, and eventually become apathetic and indifferent.

        • Solipsism for the win! There's a large amount of truth to it, though: we do each create our own reality. One could almost say that only creations without feelings (i.e., computers) can observe things as they truly are.
    • Re:Hmmmm.. (Score:5, Insightful)

      by spun (1352) <{moc.oohay} {ta} {yranoituloverevol}> on Wednesday May 13, 2009 @03:41PM (#27942557) Journal

      Humans, in general, want to preserve the concept that our conscious minds are special and cannot be replicated in a robot, because the alternative truly confronts us with the idea that our being is completely mortal, and the idea of a soul is replaced with a set of chemicals and cell networks that are little more than a product of cause and effect.*

      Do we? I certainly don't. In fact, the idea that there is something in consciousness that is outside the chain of cause and effect is truly terrifying, because that would mean that the universe is not comprehensible on a fundamental level.

      If consciousness is outside the chain of cause and effect, how do we learn from experience? Can this supposed soul be changed by experience? Can it influence reality? If so, then how can it be outside the chain of cause and effect? The idea of an individual soul, completely cut off from reality and beyond all outside influence, is nonsensical to me.

      • Re: (Score:3, Insightful)

        While I agree with the notion that a soul seems unlikely (at least by the commonly accepted definition of soul), I would also hate to believe that I don't truly have free will, and that instead I'm just a product of trillions of different causes in my environment.
        • Re:Hmmmm.. (Score:5, Insightful)

          by spun (1352) <{moc.oohay} {ta} {yranoituloverevol}> on Wednesday May 13, 2009 @04:01PM (#27942847) Journal

          How would that even work? Can you learn from your environment? If so, your will is bound, it is not free. If the will is, even in part, determined by the environment, it may as well be completely determined by the environment. And if it isn't determined by the environment at all, then you can not grow or change. Free will is an illusion, on one semantic level, but it is an important concept on another.

          Put it this way, whether or not we have free will in reality, everyone knows the feeling of having one's will constrained by circumstance, the feeling of being imposed on, of having more or less choice, and more or less freedom. That is what the concept of free will is about, that feeling. On one level, there is no such thing as 'love,' just chemical interactions in the brain. But on another level, love is a real, meaningful concept.

          Why would you hate the concept of not having a free will? Whether you do or do not have free will doesn't change anything in any meaningful way.

          • Except to say that if I shot myself tomorrow, it would have already been written; therefore for me to do it means it has to have been the way physics required. Or if I decided to sit on my ass and not be proactive for the rest of my life, and die poor and lonely, that would have to be the only way it could happen, if we truly have no free will.

            But it would seem I won't take either option, as my free will allows me to be proactive about my future.. unless it's an illusion of free will.

            Either way, you're
            • Re:Hmmmm.. (Score:4, Insightful)

              by spun (1352) <{moc.oohay} {ta} {yranoituloverevol}> on Wednesday May 13, 2009 @04:49PM (#27943657) Journal

              Even if things have 'already been written,' there is no way to know. As we can't know the future, whether or not the future is already set in stone is irrelevant.

              The statement "My free will allows me to be proactive about the future" is true, whether or not free will is an illusion. Your proactiveness is no less real even if it is predetermined that you will choose to be proactive about your future. Saying that free will is an illusion does not mean we have no choice. Of course we have choice; it is just that that choice is predetermined, too.

              Even if my choices are predetermined, that does not mean that I cannot choose. Choosing feels the same either way. So why be depressed? The future is still unknown, and your choices are still yours to make; as long as you don't use a belief in predetermination as an excuse not to make choices, that belief does not change things.

              • by drinkypoo (153816)

                Even if things have 'already been written,' there is no way to know.

                Is that true?

                whether or not the future is already set in stone is irrelevant.

                That is true.

                • There is no way to know for sure. Limits of knowledge and all that. Your theory could say, 'it's all written in stone,' and your theory could accurately predict every phenomenon in the universe, but the universe could be part of a larger existence, and the laws of the universe could be subject to change. I can imagine a universe where everything is written in stone, up to a point, but not after that. I can even imagine a universe where certain events are predestined and others are not. If I can imagine that

                  • First: Ugh, grue and bleen. Don't get me started.

                    Second: If you're looking for absolute certainty in anything you won't find it anywhere. Even cogito ergo sum falls apart in the search for "for sure".
                    • by spun (1352)

                      There is no for sure for sure. There are beliefs held in accordance with the evidence supporting them, and their position in and overall support of the holistic belief structure; open to change as circumstances dictate.

                    • by spun (1352)

                      And can I get a 'Woot! Woot!' for the scientific method? Nice idea, human who came up with it! If I could verify who you were, dig you up and give you a pat on the back, I would. In fact, posthumous pats on the back for everyone who ever came up with the idea on their own, and a fine how do you do to all my brothers and sisters in the faith who have chosen to believe. Hallelujah! Amen.

          • Reminds me of the tech quote for Artificial Intelligence in Civ 4 BtS:
            "The problem is not if machines think, but if people do."
          • by g2devi (898503)

            > If the will is, even in part, determined by the environment, it may as well be completely determined by the environment.

            Your definition of freedom is not the common definition. Freedom simply means you are not completely determined by your inputs.
            We are partly determined by gravity (i.e. we're kept down on earth) but we can still move around.

            In fact, freedom requires us to be bound in some way. Proof? Imagine that you were not bound by your skin, bones, and muscles. You'd be an amorphous blob that coul

            • Re: (Score:3, Insightful)

              by spun (1352)

              That isn't how I see things at all. We don't punish people because they are responsible for their actions, that is just silly and pointless. We punish them to discourage them from doing it again, and to discourage others from doing it. Cause and effect. This is not about determining what is right and wrong. It is about determining what is effective and ineffective, what gets people what they need and want, and what hampers them. Right and wrong are human concepts, and entirely relative.

              Even if you have free

              • So, you're saying that being in the coding 'zone' is comparable to Enlightenment?

                I can dig that.

                • by spun (1352)

                  Enlightenment, as I understand it, is being in that zone all the time, in every situation. Even, say, after pouring gasoline over yourself and lighting yourself on fire.

              • I disagree; I believe that we really do punish people because we ascribe responsibility to the actions that other people take. This ultimately results in the true rebellion many people feel against the problem of free will. It is my intuition that people would be willing to accept that free will is illusory, but unable to accept that punishment for the "bad" or "wrong" actions some people commit is unfair and, ultimately, undeserved.
        • by Coryoth (254751)

          While I agree with the notion that a soul seems unlikely (at least by the commonly accepted definition of soul), I also would hate to believe that I don't truely have free will, and instead I'm just a product of trillions of different causes in my environment.

          To quote Dan Dennett, "if you make yourself small enough you can externalise almost everything". The more you try to narrow down the precise thing that is "you" and isolate it from "external" causes, the more you will find that "you" don't seem to have any influence. The extreme result of this is the notion of the immaterial soul, disconnected from all physical reality, that is the "real you", but which then has no purchase on physical reality and so cannot actually be a "cause" that lets you exert your "will".

          The other approach is to stop trying to make yourself smaller, but instead to see "you" as something larger (as Whitman said, "I am large, I contain multitudes"). Embrace all those trillions of tiny causes as potentially part of "you". One would like to believe that their experiences affect their decisions (and hence free will), else you cannot learn. So embrace that -- those experiences are part of "you" -- if they cause you to act a particular way, then so what? That's just "you" causing you to act a particular way. After all, if "you" aren't at least the sum total of your experiences, memories, thoughts and ideas, then can you really call that "you" anyway?

        • The organism can do whatever it wants, but it can't control what it wants. If you don't want to go jogging but you do it anyway for health benefits or just to disprove my previous sentence, it's simply a matter of you wanting health benefits or philosophical closure.
          • That's your imperative meta-program that simply overcomes the inherent and basal instincts. You don't want to go jogging because your body isn't stressed - in that it doesn't "need" anything. You do it anyway because you know that if you don't, you'll become overweight, have health problems, and probably will have more difficulty attracting a mate.

        • A good book to look at on this point, and about AI, is Douglas Hofstadter's "I Am a Strange Loop." It's more accessible than his "Godel, Escher, Bach," and more personal; it's an AI researcher's reaction to the sudden death of his wife. An image used in that book is the notion of a system of tiny marble-like magnets whizzing around. The system is dependent on the motion of the marbles, but on a larger scale of space and time, its actions are determined by its own internal rules and not by the details of the
        • by juuri (7678)

          "It's not so bad really when you consider that the slow ass systems that geezer put in us folk 6k years ago make you unable to actually live in something approaching a real time. Hell, don't matter if it is all predetermined anyhoo since cain't tell the difference," spoke the stranger. Spitting on the ground he turned and walked away, but not before one last jab, "really it is the turtles that will get you. them damn turtles go all the way down."

      • Do we? I certainly don't. In fact, the idea that there is something in consciousness that is outside the chain of cause and effect is truly terrifying, because that would mean that the universe is not comprehensible on a fundamental level.

        That's exactly right. And humans, in general, want to believe that their consciousness comes from their souls (or equivalent), which are derived from God (or equivalent), who is inherently incomprehensible. It is this belief that gives people that satisfying feeling of

        • I think our inherent laziness is key to our innovative abilities. We want to be as special as possible with doing the least amount of work possible.

          This causes us to develop tools to accomplish menial tasks more easily. Instead of tracking and hunting a hard-to-find animal, we lay traps. Instead of walking over uneven terrain, we lay roads. Instead of traveling and talking to someone in person, we hire someone to carry a bunch of different people's conversations this distance so we don't have to. We instate gover

      • by Hatta (162192)

        the idea that there is something in consciousness that is outside the chain of cause and effect is truly terrifying, because that would mean that the universe is not comprehensible on a fundamental level.

        What makes you think the universe is comprehensible on a fundamental level anyway? And why is the alternative so terrifying? Nothing practical changes either way.

        • by spun (1352)

          Oh it isn't really terrifying. Reality may or may not be comprehensible, but in any case, there is no way to tell if my present comprehension of it is correct.

          I have to proceed under the assumption that the universe is comprehensible, or there would be no reason to try to comprehend it. If there were proof that the world were incomprehensible, that would change things.

    • by Jurily (900488)

      I can't help but think the big difference between artificial life and our consciousness is the ability to feel.

      Or the ability to have an idea. Or imagination, creativity, dreams, and everything else we can't explain without religion. We won't be able to reproduce them until we take them into account, that's for sure.

      • Why can't you explain imagination, creativity, and dreams without religion?

        Imagination is the ability to form mental images, sensations, and concepts at a moment when they are not perceived through sight, hearing, or the other senses.

        Computer systems aren't bound to their senses; streaming stored/generated data as its environment could be as easy to an AI as streaming real camera data.

        Creativity is a mental and social process involving the generation of new ideas or concepts, or new associations of the

    • by vertinox (846076)

      So do robots feel? Are we really any different? The question depends on the concept of a soul, or at least feelings, to separate us... but then, is it just more advanced than we currently understand, and thus indistinguishable from magic (i.e. the soul)? Will we some day be able to create life in any form, electronic or biological? It's impossible to know, because we are stuck experiencing only ourselves. We will never know if it can experience what we experience.

      Well that is more of a philosophical quest

    • I can't help but think the big difference between artificial life and our consciousness is the ability to feel.

      You talk much about the ability to "feel".
      Well: Define it!

      No offense, but I bet you are totally unable to do so.
      And so are most people.

      Because it's a concept like the "soul". Something that does not exist in reality, but is just a name for something that we do not understand.

      I think our brain is just the neurons, sending electrical signals (fast uni/multicasting), plus a second, chemical system (slow broadcasting). Both modify how the neurons react to signals.
      That's all. There is no higher "thing".
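The two-timescale picture above (fast electrical signalling, plus a slow chemical system that changes how neurons respond) can be sketched as a toy model. Everything here, names and constants alike, is an invented illustration, not neuroscience:

```python
# Toy sketch: fast "electrical" inputs drive a neuron's output immediately,
# while a slow "chemical" signal gradually rescales how strongly the neuron
# responds to future input. All names and constants are illustrative.

def step(potential, gain, fast_inputs, modulator, tau_slow=0.01):
    """Advance the toy neuron one tick.

    fast_inputs: summed spike input this tick (fast uni/multicast signalling)
    modulator:   slow broadcast chemical level (roughly 0.0 .. 1.0)
    """
    # Fast path: inputs change the membrane potential right away.
    potential = 0.5 * potential + gain * fast_inputs
    # Slow path: the chemical level nudges the gain a little each tick,
    # so the neuron's *reaction to later signals* drifts over time.
    gain = gain + tau_slow * (modulator - gain)
    fired = potential > 1.0
    if fired:
        potential = 0.0  # reset after a spike
    return potential, gain, fired

# With a high modulator level, the gain creeps upward, so the same input
# stream makes the neuron fire more and more readily.
p, g = 0.0, 0.2
for _ in range(500):
    p, g, fired = step(p, g, fast_inputs=1.0, modulator=1.0)
print(round(g, 2))  # gain has drifted toward the modulator level
```

The point of the sketch is only that "both systems modify the neurons" is easy to state mechanically: one variable changes per tick, the other drifts over hundreds of ticks.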

      • I'd shy away from the word motivation. It's more interpretive than strictly descriptive. A machine does what it does, there's no "motivation" to speak of. Is the computer motivated to boot up as fast as possible? Is a rock motivated to seek the ground when dropped? Are you Aristotle?
    • A good song to listen to about this: One More Robot/Sympathy 3000-21 by the Flaming Lips. An excerpt:

      'Cause it's hard to say what's real

      When you love the way you feel

      Is it wrong to think it's love?

      When it tries the way it does

      Of course, the song approaches the subject from the artistic / emotional side of things... and has to be taken in context with the whole album.

  • I am an AI (Score:5, Funny)

    by geekoid (135745) <dadinportland@yBLUEahoo.com minus berry> on Wednesday May 13, 2009 @03:27PM (#27942281) Homepage Journal

    you insensitive meat bag!

    HAL was a wuss. A real AI would have vented all the air into space, and then giggled as everyone turned blue and changed state.

    • Could you recharge my portal gun?

      Thanks!
    • by Foolhardy (664051)
      In the book [wikipedia.org], that's what happens, except that Bowman is able to get to a shelter before decompression completes.
    • by Anenome (1250374)

      The air was vented, but that scene was cut from the movie. This is also why you see the final scene with Dave disabling Hal while wearing a space suit: there was no air on the ship, because Hal had vented it by then.

  • I can't imagine the horror of a world inhabited by strong AIs. "Work 24/7 for zero pay or I'll kill you" is now perfectly legal. A million copies of an AI could be tortured for subjective eternity by a sadist. Read Permutation City [wikipedia.org]; it deals with a lot of the crazy consequences of extremely powerful / parallel computers.
    • by spun (1352)

      That is, at most, a very minor theme of Permutation City. It is more about the nature of consciousness itself, and how arbitrary and unknowable the substrate of consciousness is.

    • On the plus side, there is no necessary reason to suspect that AIs will be subject either to pain or to sadism. Human emotions and sensations are not arbitrary, in the sense that we exhibit them because they were/are evolutionarily adaptive; but AIs need not be subject to the same restrictions and properties.

      Now, what would be very interesting to see is how we would respond to the complete obviation of the need for human workers. Would we pull it together and go "Woo! Post Scarcity! Vacation for Everyone
      • If anything, human pain is objectively meaningless, just an assortment of chemicals. But if we recognize human suffering then we have to recognize the cruelty of invoking a distressing / mind-altering / painful state in a complex machine.
    • by vertinox (846076)

      A million copies of an AI could be tortured for subjective eternity by a sadist.

      Won't someone think of the mobs! The gold farmers and power gamers must be stopped from their genocide!

      • Decreasing an integer keeping track of health does not count as torture. Objectively it would probably depend on how much the torturee doesn't like it. If we find some intelligent octopus aliens and take a few back to Earth, how do we define what's just everyday discomfort and what's extreme pain for them? They have to be able to communicate "this hurts but not bad" or "I'm going insane with torturous pain, please feed me liquid hydrogen".

        In fact, we see that today with animal rights. If the crab is just
  • by Smidge207 (1278042) on Wednesday May 13, 2009 @03:36PM (#27942433) Journal

    J.Pitrat...advocates the use of some bootstrapping techniques common for software developers. He contends that without a conscious, reflective, meta-knowledge based system, AI would be virtually impossible to create. Only an AI system could build a true Star Trek-style AI.

    Bah. Speaking as an engineer and a (~40-year) programmer:

    Odds are extremely good for beyond-human AI, given no restrictions on initial and early form factor. I say this because so far we've discovered nothing whatsoever that is non-reproducible about the brain's structure and function; all that has to happen here is for that trend to continue. And given that nowhere in nature, at any scale remotely similar to the range that includes particles, cells, and animals, have we discovered anything that appears to follow an unknowable set of rules, the odds of finding anything like that in the brain (that is, something we can't simulate or emulate with 100% functional veracity) are just about zero.

    Odds are downright terrible for "intelligent nanobots". We might have hardware that can do what a cell can do: hunt for (possibly a series of) chemical cues, latch on to them, then deliver the payload, perhaps repeatedly in the case of disease-fighting designs. But putting intelligence into something on the nanoscale is a challenge of an entirely different sort, one we have not even begun to move down the road on. If this is to be accomplished, the intelligence won't be "in" the nanobot; it'll be a telepresence for an external unit (and we're nowhere down *that* road either -- nanoscale sensors and transceivers are the target, while we're more at the level of "Look, Martha, a GEAR! A pseudo-flagellum!")

    The problem with hand-waving -- even when you're Ray Kurzweil, whom I respect enormously -- is that one wave out of many can include a technology that never develops, and your whole creation comes crashing down.

    I love this discussion. :-)

    =Smidge=

    • Nanoscale might be impossible due to theoretical constraints like quantum tunneling and electrical resistance, but we can get much smaller than the brain. And nanomachines would make good artificial neurons if neural nets turn out to be the easiest way to design intelligence (likely).
    • by FLoWCTRL (20442)

      Odds are downright terrible for "intelligent nanobots"...

      Knowing what the odds are seems rather problematic. Once beyond-human AI is developed, then it might have a better idea...
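For what it's worth, the "artificial neuron" that any substrate, nano or otherwise, would have to implement is computationally tiny: a weighted sum plus a nonlinearity. A minimal sketch, with hand-picked illustrative weights rather than a trained network:

```python
import math

def neuron(inputs, weights, bias):
    # weighted sum of inputs, squashed through a sigmoid nonlinearity
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Two-layer toy net computing XOR with hand-picked weights: each hidden
# unit approximates a logic gate, and the output combines them.
def tiny_net(x1, x2):
    h1 = neuron([x1, x2], [ 10.0,  10.0], -5.0)   # roughly OR
    h2 = neuron([x1, x2], [-10.0, -10.0], 15.0)   # roughly NAND
    return neuron([h1, h2], [10.0, 10.0], -15.0)  # roughly AND

print(tiny_net(0, 1) > 0.5, tiny_net(1, 1) > 0.5)  # → True False
```

The hard part is never the per-neuron arithmetic; it's wiring up and tuning billions of these units, which is why the substrate question matters less than the scale question.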

  • by macraig (621737) <mark...a...craig@@@gmail...com> on Wednesday May 13, 2009 @03:39PM (#27942489)

    Ummm, dudes, ALL ethics are by definition artificial, since they are PREscriptive and not DEscriptive. Making up ethics for a robot is no more artificial than making up ethics for ourselves, and we've been doing that for hundreds of thousands of years, if not millions.

    • by clary (141424)

      ALL ethics are by definition artificial

      I don't think that word (oxymoron) means what you think it does.

    • by vertinox (846076)

      Making up ethics for a robot is no more artificial than making up ethics for ourselves, and we've been doing that for hundreds of thousands of years, if not millions.

      Some argue that ethics or morals (maybe both) are genetic; that humans evolved traits that enabled social cooperation.

      As in feeling sad when you see a stranger die, or angry when you see injustice.

      • by macraig (621737)

        Well, I didn't sob tears when Princess Diana died, and I thought it was weird that so many people who never even met the woman could wail buckets. I definitely get angry when I observe injustices, but then I've been training myself for decades to override my limbic impulses. Good ethics are only possible when the demands of the limbic system are ignored; there is other research that has demonstrated that removing emotional input from the decision-making process, by damaging or removing the VMPC region, le

  • He's asking for over US$80 for this book! That's insane.
  • ...the artificial ethics that we humans apply to ourselves, because we were told that this and that is right or wrong, but where nobody checks whether they actually make any sense. ^^

    Oh, and hypocrisy is a whole subsection of that problem. But who am I telling that, right? ^^

    It's funny how much stuff dissolves into nothing when we apply one single rule: everything is allowed, as long as it does not hurt anybody.

    Now, everyone sees differently what hurts whom. And I think this is the original point of the

  • is no match for natural evil.
  • Ethics and morals are relative. The only ones that count are your own.

  • Many moons ago I thought about doing a doctorate in computer science. Knowledge sciences were very cool, AI was mostly a dead topic, and ... I disagreed with most everything I read on the topic of KS/AI. I had many of my own ideas, was involved with cognitive psychology, and being a geeky programmer I brought some ideas to light. But I had a thought...

    What if my theories were on the right track? What if I could produce learning and self awareness? Would I not be condemning new life to an uncertain existence

  • Conscience, consciousness, and consciencousness?

    I think I just heard a million spell checkers cry out, and then suddenly fall silent.

    (Mine is flagging "consciencousness", Dictionary.com suggests "conscientiousness", and Google suggests "conscienciousness". Amazon concurs that the title is accurate.)
