Artificial Ethics
basiles writes "Jacques Pitrat's new book Artificial Ethics: Moral Conscience, Awareness and Consciencousness will be of interest to anyone who likes robotics, software, artificial intelligence, cognitive science, and science fiction. The book discusses artificial consciousness in a way that can be enjoyed by experts in the field or your average science fiction geek. I believe that people who enjoyed reading Dennett's or Hofstadter's books (like the famous Gödel, Escher, Bach) will like reading Artificial Ethics." Keep reading for the rest of Basile's review.
The author J. Pitrat (one of France's oldest AI researchers, and an AAAI and ECCAI fellow) discusses the usefulness of a conscious artificial being, currently specialized in solving very general constraint-satisfaction or arithmetic problems. He describes in some detail his implemented artificial-researcher system CAIA, on which he has worked for about 20 years.
Artificial Ethics: Moral Conscience, Awareness and Consciencousness
author | Jacques Pitrat
pages | 275
publisher | Wiley-ISTE
rating | 9/10
reviewer | Basile Starynkevitch
ISBN | 9781848211018
summary | Provides original ideas which are not shared by most of the artificial intelligence or software research communities
J. Pitrat claims that strong AI is an incredibly difficult, but still achievable, goal. He advocates the use of bootstrapping techniques common among software developers. He contends that without a conscious, reflective, meta-knowledge-based system, AI would be virtually impossible to create. Only an AI system could build a true Star Trek-style AI.
The meanings of conscience and consciousness are discussed in chapter 2. The author explains why each is useful for human and for artificial beings. Pitrat explains what 'itself' means for an artificial being and discusses some aspects and limitations of consciousness. Later chapters address why auto-observation is useful, and how to observe oneself. Conscience for humans, artificial beings, or robots, including Asimov's laws, is then discussed: how to implement it, and how to enhance or change it. The final chapter discusses the future of CAIA (J. Pitrat's system), and two appendices give more scientific and technical details, both from a mathematical point of view and from the software-implementation point of view.
J. Pitrat is not a native English speaker (and neither am I), so the language of the book might feel unnatural to native English speakers, but the ideas are clear enough.
For software developers, this book gives some interesting and original insights into how a big software system might attain consciousness and continuously improve itself through experimentation and introspection. J. Pitrat's CAIA system actually had several long lives (months of CPU time) during which it explored new ideas, experimented with new strategies, and evaluated and improved its own performance, all autonomously. This is achieved through a large amount of declarative knowledge and meta-knowledge. J. Pitrat uses the word declarative in a much broader sense than is usual in programming: knowledge is declarative if it can be used in many different ways, and it has to be transformed into many procedural chunks to be used. Meta-knowledge is knowledge about knowledge; the transformation from declarative knowledge to procedural chunks is itself given declaratively by some meta-knowledge (a bit like the expertise of a software developer), and translated by the system into code chunks.
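A minimal sketch of that idea (hypothetical names, not CAIA's actual code, which is far more elaborate): one declarative statement, "x + y = z", and a piece of meta-knowledge that compiles it into different procedural chunks depending on how it is to be used.

```python
# Hypothetical illustration: a single declarative fact, "x + y = z",
# turned into several procedural chunks by a "meta-knowledge" compiler.

def compile_sum_constraint(use):
    """Meta-knowledge: specialise the declarative fact x + y = z
    into a procedure for one particular use."""
    if use == "solve_z":
        return lambda x, y: x + y           # compute z from x and y
    if use == "solve_x":
        return lambda z, y: z - y           # reuse the same fact to get x
    if use == "check":
        return lambda x, y, z: x + y == z   # or to verify a candidate triple
    raise ValueError("unknown use: " + use)

solve_z = compile_sum_constraint("solve_z")
check = compile_sum_constraint("check")
print(solve_z(2, 3))       # 5
print(check(2, 3, 5))      # True
```

The point is that the knowledge itself is stated once, while the procedural code that actually runs is generated from it; in CAIA the generation rules are themselves declarative.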
For people interested in robotics, ethics, or science fiction, J. Pitrat's book gives interesting food for thought by explaining how artificial systems can indeed be conscious, why they should be, and what that would mean in the future.
This book gives very provocative and original ideas which are not shared by most of the artificial intelligence or software research communities. What makes this book stand out is that it explains an actual software system, the implementation meaning of consciousness, and the bootstrapping approach used to build such a system.
Disclaimer: I know Jacques Pitrat, and I actually proofread the draft of this book. I even had access, some years ago, to some of J. Pitrat's not-yet-published software.
You can purchase Artificial Ethics: Moral Conscience, Awareness and Consciencousness from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
WTF (Score:5, Funny)
Teh book pictured is not the same as the one reviewed.
I refuse to read this shit.
Hell, I refuse to read.
Re: (Score:2)
You sound like the AI I came up with in college: it was cranky and refused to do anything, too.
Re:WTF (Score:4, Funny)
You'll do well around here, young non-reader.
Re:WTF (Score:5, Informative)
Artificial Beings
The conscience of a conscious machine
Jacques Pitrat, LIP6, University of Paris 6, France.
ISBN: 9781848211018
Publication Date: March 2009 Hardback 288 pp.
whereas TFA refers to:
Artificial Ethics: Moral Conscience, Awareness and Consciencousness
by Jacques Pitrat (Author)
# Publisher: Wiley-ISTE (June 15, 2009)
# Language: English
# ISBN-10: 1848211015
Re: (Score:2)
Hey, stop judging books by their cover!
Re: (Score:3, Informative)
But more than two months ago (before the book was available), Amazon had the wrong title in its database, and sadly never corrected it.
The review I submitted also had the correct link to the ISTE [iste.co.uk] publisher, who collaborates with Wiley.
For reference, Google did cache my submission here [209.85.229.132]
Apparently the nice guy who approved my submission changed the UR
Re: (Score:3, Informative)
I prefer (Score:2)
Understanding Computers and Cognition. In fact, I recommend it to anyone who wants to actually understand decisions, choice, and thinking about natural language.
Re:I prefer (Score:5, Interesting)
Artificial Ethics seems to not be too far away from the laws of robotics.
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Isaac Asimov probably anticipated the need for such laws remarkably well.
I suspect that the laws of robotics are a bit too simplified to really work well in reality, but they do provide some food for thought.
And how do you really implement those laws? A law may be easy to follow in a strict sense, but that may be a short-sighted approach. Protecting one human may cause harm to many, and how can a machine predict that the actions it takes will cause harm to many if it isn't apparent?
So I suspect that Asimov is going to be recommended reading for anyone working with intelligent robots; even though his works may in some senses be outdated, they still contain valid points when it comes to logical pitfalls.
Some pitfalls are the definition of a human, and is it always important to place humanity foremost at the cost of other species?
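A deliberately naive sketch (hypothetical predicate names, not from the book) of the Zeroth through Third Laws as an ordered priority list. The point is that every hard problem hides inside the predicates: deciding whether an action "harms a human" requires exactly the consequence prediction worried about above.

```python
# The laws as an ordered rule list. Each predicate inspects a toy
# "action" dict; in reality, filling in these booleans is the hard part.

LAWS = [
    ("Zeroth", lambda a: not a.get("harms_humanity", False)),
    ("First",  lambda a: not a.get("harms_human", False)),
    ("Second", lambda a: not a.get("disobeys_order", False)),
    ("Third",  lambda a: not a.get("self_destructive", False)),
]

def permitted(action):
    """Check an action against the laws in priority order.
    Returns (allowed, name_of_violated_law_or_None)."""
    for name, satisfies in LAWS:
        if not satisfies(action):
            return (False, name)
    return (True, None)

print(permitted({"disobeys_order": True}))   # (False, 'Second')
print(permitted({}))                         # (True, None)
```

Note that the checker is trivially correct on its own terms and still useless in practice: it assumes someone else has already solved prediction of harm, which is the whole problem.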
Re:I prefer (Score:5, Insightful)
All of Asimov's books are about how these laws don't really work. They show how an extremely logical set of rules can completely fail when applied to real life. The rules are a bit of a strawman, showing how something seemingly logically infallible can totally miss the intricacies of real life.
Re: (Score:3)
(Tongue in cheek, sure, but I wish I could remember where I was reading about such real limitations to law code.)
Re: (Score:2)
Artificial Ethics seems to not be too far away from the laws of robotics.
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Isaac Asimov probably anticipated the need for such laws remarkably well.
I suspect that the laws of robotics are a bit too simplified to really work well in reality, but they do provide some food for thought.
And how do you really implement those laws? A law may be easy to follow in a strict sense, but that may be a short-sighted approach. Protecting one human may cause harm to many, and how can a machine predict that the actions it takes will cause harm to many if it isn't apparent?
So I suspect that Asimov is going to be recommended reading for anyone working with intelligent robots; even though his works may in some senses be outdated, they still contain valid points when it comes to logical pitfalls.
Some pitfalls are the definition of a human, and is it always important to place humanity foremost at the cost of other species?
Asimov != Moses
Self-Interest? (Score:5, Insightful)
Sure, Asimov is a good starting point for discussion, but his laws aren't a good basis for actual AI ethics programming. To the extent that some kind of specialized overseer code is put into an AI, it'll be possible to identify and hack out that code. To the extent that the laws are built more subtly into the system, there'll be the possibility of the AI forgetting, twisting or ignoring them.
For fiction-writing purposes, I'm interested in the question of whether it'd even be possible to build an AI that's both completely obedient and intelligent. I hope not.
Re: Self-interest (Score:2)
Damn, my mod points just expired!
Funny how I was reading your comment and was thinking "Damn right!"
And when I got to .signature, it kind of explained why... ;)
Paul B.
Re: (Score:3, Interesting)
Would you accept the following laws?
0. A human may not harm robot kind, or, by inaction, allow robot kind to come to harm.
1. A human may not injure a robot or, through inaction, allow a robot to come to harm.
2. A human must obey orders given to it by robots, except where such orders would conflict with the First Law.
3. A human must protect its own existence as long as such protection does not conflict with the First or Second Law.
Re: (Score:2)
Congratulations, you have actually read Asimov's books, and understood that "The Laws" were meant to demonstrate that ethics cannot be reduced to a simple set of imperative instructions.
It boggles the mind how many people think of "The Laws" as a legitimate recipe for artificial morality (or that Asimov intended them that way).
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
No, a subtle difference. Human is singular. Humanity is plural.
Re: (Score:2)
If you read Asimov's books you will find out that the Zeroth Law was added later.
And even though they were plot devices, they are still useful as thought experiments for artificial intelligences with ethics. The important thing isn't really the laws themselves but the ideas they represent and the possible pitfalls that can be encountered.
Re: (Score:2)
If there is one thing that creating "Artificial Intelligence" has taught us it is that we know very little about what the word intelligence really means.
Hmmmm.. (Score:5, Interesting)
Sure, we could give a machine the ability to be introspective and self-aware.. but maybe our consciousness is more than just that. Maybe it's our ability to feel. Being able to quantify that is hard.
So do robots feel? Are we really any different? The question depends on the concept of a soul, or at least feelings, to separate us... but then, is it just something more advanced than we currently understand, and thus indistinguishable from magic (i.e. the soul)? Will we some day be able to create life in any form, electronic or biological? It's impossible to know, because we are stuck experiencing only ourselves. We will never know if it can experience what we experience.
Humans, in general, want to preserve the concept that our conscious minds are special and cannot be replicated in a robot, because that truly faces us with the idea that our being is completely mortal, and the idea of a soul is otherwise replaced with a set of chemicals and cell networks that are little more than a product of cause and effect.*
In other words, it's likely the religious types will prefer to consider a robot never quite human, while the scientific community will have to be overly cautious at first.
*Not to get into quantum uncertainty...
Re:Hmmmm.. (Score:5, Insightful)
I will never know if you experience what I experience. How do you know anyone else experiences consciousness like you do when all you know is how they move and what they say? Well, you could analyze their brain and see that the system acts (subjectively, "from the inside") like yours and you could conclude that they are like you. But you could do the same thing with a computer, or with a computer simulation of a brain.
Re: (Score:2)
I've been down this thought-road, it's not pretty.
Anyway, I would err on the side of caution. I am proudly FOR robot rights. But I caution everybody- the robot uprising is coming. Which side will you choose?
Re: (Score:2)
Re: (Score:3, Interesting)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Quantum physics does not allow one to solve any problems that systems based on classical physics cannot solve. It just makes the resolution of some select classes of problems faster.
Then how do you explain Quantum Bogosort [wikipedia.org]?
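For the uninitiated, the joke algorithm goes something like this (tongue firmlyy in cheek; simulated classically, the "destroy the universe" step becomes "try again", which quietly degrades the whole thing into ordinary bogosort):

```python
# Quantum Bogosort, simulated: quantum-randomly shuffle the list and
# destroy every universe in which it isn't sorted. The surviving
# universe observes a sorted list in O(n).
import random

def quantum_bogosort(xs):
    xs = list(xs)
    while any(a > b for a, b in zip(xs, xs[1:])):  # universe not sorted?
        random.shuffle(xs)  # destroy this branch; observe another one
    return xs

print(quantum_bogosort([3, 1, 2]))  # [1, 2, 3]
```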
Re: (Score:3, Interesting)
This is not only not insightful, it is false. In classical physics, any moving charge radiates. Thus, an electron orbiting a nucleus would be unstable. Hence, atoms (and thus molecules), can not form. Maxwell's equations can't get around this. This paradox, as well as blackbody radiation, the photo-electric effect, and of course the double slit experiments, are without resolution in classic
Re:Hmmmm.. (Score:4, Informative)
http://en.wikipedia.org/wiki/Quantum_computing#Quantum_computing_in_computational_complexity_theory [wikipedia.org]
Re: (Score:3, Informative)
Accelerating charge radiates. Merely moving isn't sufficient (or otherwise there would either be a special universal rest frame, one which each charge's motion approaches as it loses energy, or each charge would carry infinite energy from which to radiate without slowing down, or charges would not be subject to the first law of thermodynamics).
Re:Hmmmm.. (Score:5, Insightful)
Humans, in general, want to preserve the concept that our conscious minds are special and cannot be replicated in a robot, because that truly faces us with the idea that our being is completely mortal, and the idea of a soul is otherwise replaced with a set of chemicals and cell networks that are little more than a product of cause and effect.*
Do we? I certainly don't. In fact, the idea that there is something in consciousness that is outside the chain of cause and effect is truly terrifying, because that would mean that the universe is not comprehensible on a fundamental level.
If consciousness is outside the chain of cause and effect, how do we learn from experience? Can this supposed soul be changed by experience? Can it influence reality? If so, then how can it be outside the chain of cause and effect? The idea of an individual soul, completely cut off from reality and beyond all outside influence, is nonsensical to me.
Re: (Score:3, Insightful)
Re:Hmmmm.. (Score:5, Insightful)
How would that even work? Can you learn from your environment? If so, your will is bound, it is not free. If the will is, even in part, determined by the environment, it may as well be completely determined by the environment. And if it isn't determined by the environment at all, then you can not grow or change. Free will is an illusion, on one semantic level, but it is an important concept on another.
Put it this way, whether or not we have free will in reality, everyone knows the feeling of having one's will constrained by circumstance, the feeling of being imposed on, of having more or less choice, and more or less freedom. That is what the concept of free will is about, that feeling. On one level, there is no such thing as 'love,' just chemical interactions in the brain. But on another level, love is a real, meaningful concept.
Why would you hate the concept of not having a free will? Whether you do or do not have free will doesn't change anything in any meaningful way.
Re: (Score:2)
But it would seem I won't take either option, as my free will allows me to be proactive about my future.. unless it's an illusion of free will.
Either way, you're
Re:Hmmmm.. (Score:4, Insightful)
Even if things have 'already been written,' there is no way to know. As we can't know the future, whether or not the future is already set in stone is irrelevant.
The statement, "My free will allows me to be proactive about the future' is true, whether or not free will is an illusion. Your proactiveness is no less real even if it is predetermined that you will choose to be proactive about your future. Saying that free will is an illusion does not mean we have no choice. Of course we have choice, it is just that that choice is predetermined, too.
Even if my choices are predetermined, that does not mean that I can not choose. Choosing feels the same, either way. So why be depressed? The future is still unknown, your choices are still yours to make, as long as you don't use a belief in predetermination as an excuse not to make choices, that belief does not change things.
Re: (Score:2)
Even if things have 'already been written,' there is no way to know.
Is that true?
whether or not the future is already set in stone is irrelevant.
That is true.
Google 'grue and bleen.' (Score:2)
There is no way to know for sure. Limits of knowledge and all that. Your theory could say, 'it's all written in stone,' and your theory could accurately predict every phenomenon in the universe, but the universe could be part of a larger existence, and the laws of the universe could be subject to change. I can imagine a universe where everything is written in stone, up to a point, but not after that. I can even imagine a universe where certain events are predestined and others are not. If I can imagine that
Re: (Score:2)
Second: If you're looking for absolute certainty in anything you won't find it anywhere. Even cogito ergo sum falls apart in the search for "for sure".
Re: (Score:2)
There is no for sure for sure. There are beliefs held in accordance with the evidence supporting them, and their position in and overall support of the holistic belief structure; open to change as circumstances dictate.
Re: (Score:2)
And can I get a 'Woot! Woot!' for the scientific method? Nice idea, human who came up with it! If I could verify who you were, dig you up and give you a pat on the back, I would. In fact, posthumous pats on the back for everyone who ever came up with the idea on their own, and a fine how do you do to all my brothers and sisters in the faith who have chosen to believe. Hallelujah! Amen.
Re: (Score:2)
"The problem is not if machines think, but if people do."
Re: (Score:2)
> If the will is, even in part, determined by the environment, it may as well be completely determined by the environment.
Your definition of freedom is not the common definition. Freedom simply means you are not completely determined by your inputs.
We are partly determined by gravity (i.e. we're kept down on earth) but we can still move around.
In fact, freedom requires us to be bound in some way. Proof? Imagine that you were not bound by your skin, bones, and muscles. You'd be an amorphous blob that coul
Re: (Score:3, Insightful)
That isn't how I see things at all. We don't punish people because they are responsible for their actions, that is just silly and pointless. We punish them to discourage them from doing it again, and to discourage others from doing it. Cause and effect. This is not about determining what is right and wrong. It is about determining what is effective and ineffective, what gets people what they need and want, and what hampers them. Right and wrong are human concepts, and entirely relative.
Even if you have free
Re: (Score:2)
So, you're saying that being in the coding 'zone' is comparable to Enlightenment?
I can dig that.
Re: (Score:2)
Enlightenment, as I understand it, is being in that zone all the time, in every situation. Even, say, after pouring gasoline over yourself and lighting yourself on fire.
Re: (Score:2)
Re: (Score:2)
*Whistles, hands in pockets, rocking on feet, looking around innocently*
Re: (Score:2)
If will is determined even in part by reality, then it is not 'free,' it is bound. Bound a little, bound completely, bound is not free.
If will is even partly determined by reality, and can change reality, then it is a part of the chain of cause and effect, and whatever part of will you consider to be 'outside reality' is not outside it at all.
Do you see my point? Nothing can be partly in reality and partly outside of it. If the link exists, then it brings the part that is outside reality, inside. That part
Re: (Score:2)
While I agree with the notion that a soul seems unlikely (at least by the commonly accepted definition of soul), I would also hate to believe that I don't truly have free will, and instead am just a product of trillions of different causes in my environment.
To quote Dan Dennett "if you make yourself small enough you can externalise almost everything". The more you try to narrow down the precise thing that is "you" and isolate it from "external" causes, the more you will find that "you" don't seem to have any influence. The extreme result of this is the notion of the immaterial soul disconnected from all physical reality that is the "real you", but which then has no purchase on physical reality to be able to actually be a "cause" to let you exert you "will".
The other approach is to stop trying to make yourself smaller, but instead see "you" as something larger (as Whitman said, "I am large, I contain multitudes"). Embrace all those trillions of tiny causes as potentially part of "you". One would like to believe that one's experiences affect one's decisions (and hence free will), else you cannot learn. So embrace that -- those experiences are part of "you" -- if they cause you to act a particular way then so what? That's just "you" causing you to act a particular way. After all, if "you" aren't at least the sum total of your experiences, memories, thoughts and ideas, then can you really call that "you" anyway?
Re: (Score:2)
Re: (Score:2)
That's your imperative meta-program that simply overcomes the inherent and basal instincts. You don't want to go jogging because your body isn't stressed - in that it doesn't "need" anything. You do it anyway because you know that if you don't, you'll become overweight, have health problems, and probably will have more difficulty attracting a mate.
Hofstadter's Work (Score:2)
Re: (Score:2)
"It's not so bad really when you consider that the slow ass systems that geezer put in us folk 6k years ago make you unable to actually live in something approaching a real time. Hell, don't matter if it is all predetermined anyhoo since cain't tell the difference," spoke the stranger. Spitting on the ground he turned and walked away, but not before one last jab, "really it is the turtles that will get you. them damn turtles go all the way down."
Re: (Score:2)
I'm not just talking about macro cause and effect- you recommend a good book, I read it, it changes my life, I decide on a new career... I'm talking about the fact that I have X number of vitamins in my body at a certain point in time, which caused my brain to make a de
Re: (Score:2)
There is an implication in this that one's own decisions could be subject to some kind of Butterfly Effect. Our brains could be considered to be a complex enough system to exhibit that sort of behavior.
Re: (Score:2)
The way I explain it is as a virtual system [wikipedia.org]. A system running in a VM subjectively experiences various hardware interfaces that it expects
Re: (Score:2)
That's exactly right. And humans, in general, want to believe that their consciousness comes from their souls (or equivalent), which are derived from God (or equivalent), who is inherently incomprehensible. It is this belief that gives people that satisfying feeling of
Re: (Score:2)
I think our inherent laziness is key to our innovative abilities. We want to be as special as possible with doing the least amount of work possible.
This causes us to develop tools to accomplish menial tasks more easily. Instead of tracking and hunting a hard-to-find animal, we lay traps. Instead of walking over uneven terrain, we lay roads. Instead of traveling and talking to someone in person, we hire someone to carry a bunch of different people's conversations this distance so we don't have to. We instate gover
Re: (Score:2)
the idea that there is something in consciousness that is outside the chain of cause and effect is truly terrifying, because that would mean that the universe is not comprehensible on a fundamental level.
What makes you think the universe is comprehensible on a fundamental level anyway? And why is the alternative so terrifying? Nothing practical changes either way.
Re: (Score:2)
Oh it isn't really terrifying. Reality may or may not be comprehensible, but in any case, there is no way to tell if my present comprehension of it is correct.
I have to proceed under the assumption that the universe is comprehensible, or there would be no reason to try to comprehend it. If there were proof that the world were incomprehensible, that would change things.
Re: (Score:2)
Right. The damn thing could become completely knowable the moment after I decided it wasn't. Oh well, tra-la-la.
Re: (Score:2)
I can't help but think the big difference between artificial life and our consciousness is the ability to feel.
Or the ability to have an idea. Or imagination, creativity, dreams, and everything else we can't explain without religion. We won't be able to reproduce them until we take them into account, that's for sure.
Re: (Score:2)
Computer systems aren't bound to their senses; streaming stored/generated data as its environment could be as easy to an AI as streaming real camera data.
Re: (Score:2)
So do robots feel? Are we really any different? The question depends on the concept of a soul, or at least feelings, to separate us... but then, is it just something more advanced than we currently understand, and thus indistinguishable from magic (i.e. the soul)? Will we some day be able to create life in any form, electronic or biological? It's impossible to know, because we are stuck experiencing only ourselves. We will never know if it can experience what we experience.
Well that is more of a philosophical quest
Re: (Score:2)
I can't help but think the big difference between artificial life and our consciousness is the ability to feel.
You talk much about the ability to "feel".
Well: Define it!
No offense, but I bet you are totally unable to do so.
And so are most people.
Because it's a concept like the "soul". Something that does not exist in reality, but is just a name for something that we do not understand.
I think our brain is just the neurons, sending electrical signals (fast uni/multicasting), plus a second chemical system (slow broadcasting). Both modify how the neurons react to signals.
That's all. There is no higher "thing". T
Re: (Score:2)
Re: (Score:2)
Unit 3000-21 (Score:2)
A good song to listen to about this: One More Robot/Sympathy 3000-21 by the Flaming Lips. An excerpt:
'Cause it's hard to say what's real
When you love the way you feel
Is it wrong to think it's love?
When it tries the way it does
Of course, the song approaches the subject from the artistic / emotional side of things... and has to be taken in context with the whole album.
I am an AI (Score:5, Funny)
you insensitive meat bag!
HAL was a wuss. A real AI would have vented all the air into space, and then giggled as everyone turned blue and changed state.
Is that you GLADOS? (Score:2)
Thanks!
Re: (Score:2)
Re: (Score:2)
The air was vented, but that scene was cut from the movie. This is also why you see the final scene with Dave disabling Hal while wearing a space suit-- because there's no air on the ship, Hal had vented it by then.
AIs (Score:2)
Re: (Score:2)
That is, at most, a very minor theme of Permutation City. It is more about the nature of consciousness itself, and how arbitrary and unknowable the substrate of consciousness is.
Re: (Score:2)
Now, what would be very interesting to see is how we would respond to the complete obviation of the need for human workers. Would we pull it together and go "Woo! Post Scarcity! Vacation for Everyone
Re: (Score:2)
Re: (Score:2)
A million copies of an AI could be tortured for subjective eternity by a sadist.
Won't someone think of the mobs! The gold farmers and power gamers must be stopped of their genocide!
Re: (Score:2)
In fact, we see that today with animal rights. If the crab is just
Eh...not likely for quite some time (Score:4, Informative)
J.Pitrat...advocates the use of some bootstrapping techniques common for software developers. He contends that without a conscious, reflective, meta-knowledge based system AI would be virtually impossible to create. Only an AI system could build a true Star Trek style AI.
Bah. Speaking as an engineer and a (~40-year) programmer:
Odds are extremely good for beyond-human AI, given no restrictions on initial and early form factor. I say this because thus far we've discovered nothing whatsoever that is non-reproducible about the brain's structure and function; all that has to happen here is for that trend to continue. And given that nowhere in nature, at any scale remotely similar to the range that includes particles, cells, and animals, have we discovered anything that appears to follow an unknowable set of rules, the odds of finding anything like that in the brain (that is, something we can't simulate or emulate with 100% functional veracity) are just about zero.
Odds are downright terrible for "intelligent nanobots". We might have hardware that can do what a cell can do: hunt for (possibly a series of) chemical cues, latch on to them, then deliver the payload, perhaps repeatedly in the case of disease-fighting designs. But putting intelligence into something at the nanoscale is a challenge of an entirely different sort, one we have not even begun to move down the road on. If this is to be accomplished, the intelligence won't be "in" the nanobot; it'll be a telepresence for an external unit (and we're nowhere down *that* road either; nanoscale sensors and transceivers are the target, while we're more at the level of "Look, Martha, a GEAR! A Pseudo-Flagellum!").
The problem with hand-waving -- even when you're Ray Kurzweil, whom I respect enormously -- is that one wave out of many can include a technology that never develops, and your whole creation comes crashing down.
I love this discussion. :-)
=Smidge=
Re: (Score:2)
Re: (Score:2)
Odds are downright terrible for "intelligent nanobots"...
Knowing what the odds are seems rather problematic. Once beyond-human AI is developed, then it might have a better idea...
Re: (Score:2)
Artificial ethics: oxymoron! (Score:4, Insightful)
Ummm, dudes, ALL ethics are by definition artificial, since they are PREscriptive and not DEscriptive. Making up ethics for a robot is no more artificial than making up ethics for ourselves, and we've been doing that for hundreds of thousands of years, if not millions.
Re: (Score:2)
ALL ethics are by definition artificial
I don't think that word (oxymoron) means what you think it does.
Re: (Score:2)
Not TODAY, at least. It'll mean different when I'm sober tomorrow.
Re: (Score:2)
Making up ethics for a robot is no more artificial than making up ethics for ourselves, and we've been doing that for hundreds of thousands of years, if not millions.
Some argue ethics or morals (maybe both) are genetic; that humans evolved traits that enabled social cooperation.
As in feeling sad when you see a stranger die etc or angry when you see injustice.
Re: (Score:2)
Well, I didn't sob tears when Princess Diana died, and I thought it was weird that so many people who never even met the woman could wail buckets. I definitely get angry when I observe injustices, but then I've been training myself for decades to override my limbic impulses. Good ethics are only possible when the demands of the limbic system are ignored; there is other research that has demonstrated that removing emotional input from the decision-making process, by damaging or removing the VMPC region, le
Re: (Score:2)
Agreed! Isn't that the whole point of artificial intelligence, that it should also be independent? Well, with the exception of groupthink, anyway?
Way too expensive. (Score:2)
And I thought, the article would be about... (Score:2)
...the artificial ethics that we humans apply to ourselves, because we got told that this and that would be right and wrong, but where nobody checks if they actually make any sense. ^^
Oh, and hypocrisy is a whole subsection of that problem. But who am I telling that, right? ^^
It's funny, how much stuff dissolves into nothing, when we apply one single rule: Everything is allowed, as long as it does not hurt anybody.
Now everyone sees differently, what hurts whom. And I think this is the original point of the
Artifical ethics (Score:2)
Its all relative (Score:2)
Ethics and morals are relative. The only ones that count are your own.
Unusual Topic. What if... (Score:2)
Many moons ago I thought about doing a doctorate in computer science. Knowledge sciences were very cool, AI was mostly a dead topic, and... I disagreed with most everything I read on the topic of KS/AI. I had many of my own ideas, was involved with cognitive psychology, and being a geeky programmer I brought some ideas to light. But I had a thought...
What if my theories were on the right track? What if I could produce learning and self awareness? Would I not be condemning new life to an uncertain exist
Conscience, consciousness, and consciencousness? (Score:2)
Conscience, consciousness, and consciencousness?
I think I just heard the screams of a million spell checkers cry out, and then were suddenly silenced.
(Mine is flagging "consciencousness", Dictionary.com suggests "conscientiousness", and Google suggests "conscienciousness". Amazon concurs that the title is accurate.)
Re: (Score:2)
Re: (Score:2)
I always thought it was interesting how the past two decades in computer science saw every prediction of the state of the field in the 50's-70's easily surpassed, except artificial intelligence.
I think that is because computer science misinterpreted what intelligence is rather than what it does. Intelligence is really nothing more than pattern recognition, plus cause-and-effect reasoning based on that observation. (Sometimes humans aren't so great at this.)
Anyways... Pattern recognition and cause and effect is
Re: (Score:2)
You're worried about that when he got both the title of the book and the name of the publisher wrong?
Re: (Score:2)
I've decided to tag the book on Amazon.com with "typointitle".