Artificial Ethics 210
basiles writes "Jacques Pitrat's new book Artificial Ethics: Moral Conscience, Awareness and Consciencousness will be of interest to anyone who likes robotics, software, artificial intelligence, cognitive science, or science fiction. The book discusses artificial consciousness in a way that can be enjoyed by experts in the field as well as the average science fiction geek. I believe that people who enjoyed reading Dennett's or Hofstadter's books (like the famous Gödel, Escher, Bach) will like reading Artificial Ethics." Keep reading for the rest of Basile's review.
The author, J. Pitrat (one of France's most senior AI researchers, and an AAAI and ECCAI fellow), discusses the usefulness of a conscious artificial being, currently specialized in solving very general constraint-satisfaction and arithmetic problems. He describes in some detail his implemented artificial-researcher system, CAIA, on which he has worked for about 20 years.
Artificial Ethics: Moral Conscience, Awareness and Consciencousness
author | Jacques Pitrat
pages | 275
publisher | Wiley
rating | 9/10
reviewer | Basile Starynkevitch
ISBN | 97818482211018
summary | Provides original ideas which are not shared by most of the artificial intelligence or software research communities
J. Pitrat claims that strong AI is an incredibly difficult, but still achievable, goal. He advocates the use of bootstrapping techniques familiar to software developers, and contends that without a conscious, reflective, meta-knowledge-based system, AI would be virtually impossible to create. Only an AI system could build a true Star Trek-style AI.
The meanings of conscience and consciousness are discussed in chapter 2, where the author explains why they are useful for humans and for artificial beings. Pitrat explains what 'itself' means for an artificial being and discusses some aspects and limitations of consciousness. Later chapters address why auto-observation is useful and how to observe oneself. Conscience for humans, artificial beings, and robots, including Asimov's laws, is then discussed: how to implement it, and how to enhance or change it. The final chapter discusses the future of CAIA (J. Pitrat's system), and two appendices give more scientific and technical details, both from a mathematical point of view and from the software-implementation point of view.
J. Pitrat is not a native English speaker (and neither am I), so the language of the book might sound unnatural to native English speakers, but the ideas are clear enough.
For software developers, this book gives some interesting and original insights into how a big software system might attain consciousness and continuously improve itself through experimentation and introspection. J. Pitrat's CAIA system has actually had several long lives (months of CPU time) during which it explored new ideas, experimented with new strategies, and evaluated and improved its own performance, all autonomously. This is achieved through a large amount of declarative knowledge and meta-knowledge. Pitrat uses the word declarative much more broadly than is usual in programming: a piece of knowledge is declarative if it can be used in many different ways, and has to be transformed into procedural chunks to be used. Meta-knowledge is knowledge about knowledge; the transformation from declarative knowledge to procedural chunks is itself given declaratively by meta-knowledge (a bit like the expertise of a software developer), which the system translates into code chunks.
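To make the declarative/procedural distinction concrete, here is a minimal sketch (my own illustration, not code from CAIA): a single declarative rule is compiled into two different procedural chunks, with the "meta-knowledge" being simply the intended direction of use.

```python
# One declarative rule: X-parent-Y and Y-parent-Z implies X-grandparent-Z.
# The rule itself says nothing about HOW it will be used; the two compilers
# below play the role of meta-knowledge, turning it into procedural chunks.
RULE = ("grandparent", ("parent", "parent"))

def compile_forward(rule, facts):
    """Use #1: derive every grandparent pair from the parent facts."""
    _head, (r1, r2) = rule
    return {(a, d)
            for (a, b) in facts[r1]
            for (c, d) in facts[r2]
            if b == c}

def compile_check(rule, facts):
    """Use #2: build a yes/no test for one specific pair."""
    _head, (r1, r2) = rule
    middles = {b for (_, b) in facts[r1]}
    def check(x, z):
        return any((x, y) in facts[r1] and (y, z) in facts[r2]
                   for y in middles)
    return check

facts = {"parent": {("ann", "bob"), ("bob", "cid")}}
print(compile_forward(RULE, facts))       # {('ann', 'cid')}
is_gp = compile_check(RULE, facts)
print(is_gp("ann", "cid"))                # True
```

The point of the toy example is that the same piece of knowledge serves several procedural purposes; in Pitrat's system, the compilers themselves are also expressed declaratively, which is what enables bootstrapping.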
For people interested in robotics, ethics, or science fiction, J. Pitrat's book gives interesting food for thought by explaining how artificial systems can indeed be conscious, why they should be, and what that would mean for the future.
This book gives very provocative and original ideas which are not shared by most of the artificial intelligence or software research communities. What makes this book stand out is that it explains an actual software system, what consciousness means in implementation terms, and the bootstrapping approach used to build such a system.
Disclaimer: I know Jacques Pitrat, and I actually proofread the draft of this book. I even had access, some years ago, to some of J. Pitrat's not-yet-published software.
You can purchase Artificial Ethics: Moral Conscience, Awareness and Consciencousness from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
Re:Hmmmm.. (Score:5, Insightful)
I will never know if you experience what I experience. How do you know anyone else experiences consciousness like you do when all you know is how they move and what they say? Well, you could analyze their brain and see that the system acts (subjectively, "from the inside") like yours and you could conclude that they are like you. But you could do the same thing with a computer, or with a computer simulation of a brain.
Artificial ethics: oxymoron! (Score:4, Insightful)
Ummm, dudes, ALL ethics are by definition artificial, since they are PREscriptive and not DEscriptive. Making up ethics for a robot is no more artificial than making up ethics for ourselves, and we've been doing that for hundreds of thousands of years, if not millions.
Re:Hmmmm.. (Score:5, Insightful)
Humans, in general, want to preserve the concept that our conscious minds are special, and cannot be replicated in a robot, because that truly faces us with the idea that our being is completely mortal, and the idea of a soul is otherwise replaced with a set of chemicals and cell networks that are little more than a product of cause and effect.*
Do we? I certainly don't. In fact, the idea that there is something in consciousness that is outside the chain of cause and effect is truly terrifying, because that would mean that the universe is not comprehensible on a fundamental level.
If consciousness is outside the chain of cause and effect, how do we learn from experience? Can this supposed soul be changed by experience? Can it influence reality? If so, then how can it be outside the chain of cause and effect? The idea of an individual soul, completely cut off from reality and beyond all outside influence, is nonsensical to me.
Re:Hmmmm.. (Score:3, Insightful)
Re:Hmmmm.. (Score:1, Insightful)
Quantum physics does not allow one to solve any problems that systems based on classical physics cannot solve. It just makes the resolution of some select classes of problems faster.
There is also no more worth in quantum uncertainty than there is in thermodynamic noise, not to mention that there are interpretations of quantum physics (Bohmian and many-worlds) that are both coherent with all observations and 100% deterministic.
Quantum computing is quite cool but to say that it has anything to do with our consciousness, intelligence or is required to do AI is misguided at best.
Re:I prefer (Score:5, Insightful)
All of Asimov's books are about how these laws don't really work. They show how an extremely logical set of rules can completely fail when applied to real life. The rules are a bit of a strawman, and show how something that could be so logically infallible can totally miss the intricacies of real life.
Re:Hmmmm.. (Score:5, Insightful)
How would that even work? Can you learn from your environment? If so, your will is bound, it is not free. If the will is, even in part, determined by the environment, it may as well be completely determined by the environment. And if it isn't determined by the environment at all, then you can not grow or change. Free will is an illusion, on one semantic level, but it is an important concept on another.
Put it this way, whether or not we have free will in reality, everyone knows the feeling of having one's will constrained by circumstance, the feeling of being imposed on, of having more or less choice, and more or less freedom. That is what the concept of free will is about, that feeling. On one level, there is no such thing as 'love,' just chemical interactions in the brain. But on another level, love is a real, meaningful concept.
Why would you hate the concept of not having a free will? Whether you do or do not have free will doesn't change anything in any meaningful way.
Re:Hmmmm.. (Score:1, Insightful)
At first sight it may seem so, but you don't have to see yourself as separate from your "environment". For example, if I define "you" as being the system comprised of all the molecules in your body, I would say that your choices do indeed come from "you" for the most part and therefore you have "free will".
In other words, instead of saying your choices are a product of trillions of different causes in your environment (which I infer is what you meant to say here), you could say that "you" are a product of the environment and your choices are a product of "you". If you made different choices then you wouldn't be you, you would be someone else. And you can't choose who you are without violating the most elementary rules of causation.
Let's put it this way: your behavior is the product of processes in your brain. By any measure, these processes belong to you. Moreover, it very much makes sense to say that they *define* you. It doesn't matter whether the world is deterministic or not or whether a soul exists or not. It is obvious in all cases that you define your behavior.
Re:Hmmmm.. (Score:4, Insightful)
Even if things have 'already been written,' there is no way to know. As we can't know the future, whether or not the future is already set in stone is irrelevant.
The statement "My free will allows me to be proactive about the future" is true, whether or not free will is an illusion. Your proactiveness is no less real even if it is predetermined that you will choose to be proactive about your future. Saying that free will is an illusion does not mean we have no choice. Of course we have choice, it is just that that choice is predetermined, too.
Even if my choices are predetermined, that does not mean that I can not choose. Choosing feels the same, either way. So why be depressed? The future is still unknown, your choices are still yours to make, as long as you don't use a belief in predetermination as an excuse not to make choices, that belief does not change things.
Re:Hmmmm.. (Score:3, Insightful)
That isn't how I see things at all. We don't punish people because they are responsible for their actions, that is just silly and pointless. We punish them to discourage them from doing it again, and to discourage others from doing it. Cause and effect. This is not about determining what is right and wrong. It is about determining what is effective and ineffective, what gets people what they need and want, and what hampers them. Right and wrong are human concepts, and entirely relative.
Even if you have free will, you have no way of knowing whether you are rational or not. Your argument is entirely tangential, so much so that I can't even determine what you are trying to prove.
People are not rational. That has been proven, over and over again. Game theory experiments show that people almost never make the most rational choice. Nobody is completely rational.
Certainty is a feeling, like joy or hatred. The brain does not arrive at the feeling of certainty through a rational process, but rather through a holistic, emotional process that is not rational at all.
People do not make decisions, and then act on them. They act, and then make up a story about why they did what they did. That story, even if true, is never the whole truth.
The sense of self is just a sense, like hearing or sight. All the senses are tracks on the movie of life, like the sound track is on a real movie. Nobody is watching the movie. There is no little man looking out of your eyes and listening through your ears. There is no one at the helm. Your thoughts do not come from you, and neither do they come from outside you.
When you are totally in the moment, say an intense coding session, or athletic competition, all sense of self goes away. There is no separation between observer and observed. The sense of self isn't needed, so it isn't referred to.
We are model makers. We make models of the world. Our sense of self exists to show us how we relate to the models we've made. That is all.
Self-Interest? (Score:5, Insightful)
Sure, Asimov is a good starting point for discussion, but his laws aren't a good basis for actual AI ethics programming. To the extent that some kind of specialized overseer code is put into an AI, it'll be possible to identify and hack out that code. To the extent that the laws are built more subtly into the system, there'll be the possibility of the AI forgetting, twisting or ignoring them.
For fiction-writing purposes, I'm interested in the question of whether it'd even be possible to build an AI that's both completely obedient and intelligent. I hope not.