Yeah, Asimov had a lot of fun with all that. He and Clarke were the dog's bollocks (as we say over here - it's a good thing), along with Philip K. Dick, Aldous Huxley, Robert Heinlein and many, many others, for exploring future possibilities (and in Clarke's case "inventing" satellites! We'll see if his space elevator ever comes about, eh? Pulling ourselves up by our bootlaces - could be fun.)
They all talked about morality in the context of new technologies etc. - a vital, vital thing in this modern world of symbiosis between humans and technology (and the good and bad in both).
Erm, I've got mates who work in AI, and here are a few titbits:
- we're nowhere close to any sort of real AI at the moment. At first, in the seventies, they thought they'd have speaking/communicating robots within a few years (coz they thought they had a perfect structure in "grammar" - duh - they obviously weren't English teachers). As it is, they're only just coming up with programs now that can pass the Turing test (i.e. can fool an audience into not knowing whether they're communicating with a human or a computer). But these things are nowhere close to consciousness. Another example of looking at an emergent phenomenon without understanding either the cause or the process of emergence.
- there's no guarantee that we'd know HOW we'd programmed the robots if we did get them up and running, and hence we probably wouldn't know how to moderate their behaviour. They'd almost certainly have to have both internally structured thought processes and the potential for Darwinian "growth"/learning etc. (hence the Matrix etc.). We'd have huge trouble controlling either of these things if we DID achieve them. An incredibly simplistic Darwinian/self-designing circuit was created about ten years back. It was/is fascinating stuff - and we still don't know how it works. The process of emergence is entirely unclear. The designer just set a requirement that 100 logic circuits compete to survive, with the survival criterion being changing a constant tone/frequency they were receiving into a different constant tone (which was established purely by the circuits starting to do it - i.e. the designer didn't [couldn't] decide exactly which tone would be produced). Over time, passing the adaptations of these "logic circuits" on to the next "generation", they arrived at a working chip, which "survives" (on its own terms) within that broad aim.
The best bit about this experiment: because the scientist felt it was better to do it with real logic chips rather than in a computer model (which would require us to know all the "waveforms"/possibilities first - duh - stupid/not accurate), something very strange happened. He noticed that only about 20% (I think) of the circuits were actually being used. But when he removed some of the unused ones, the chip stopped working!! I.e. they assume the chip has established some form of timing trick using magnetism or some other physical property of the chip's physical components (humans would install a hefty "clock" system, of course).
So we don't know how it works. And we can't entirely direct it. But it's FAR more efficient than anything we could consciously design currently.
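The basic loop the experiment used can be sketched as a toy genetic algorithm. To be clear, everything below is illustrative: the real experiment evolved hardware configurations on an actual chip, scored by how well each one transformed the input tone, whereas this sketch just evolves bitstrings towards a made-up fitness score (count of 1-bits). The shape of the process - score, cull, breed, mutate, repeat - is the same, though.

```python
import random

random.seed(0)

POP_SIZE = 100     # "100 logic circuits compete to survive"
GENOME_LEN = 32    # stand-in for a circuit configuration
GENERATIONS = 60

def fitness(genome):
    # Stand-in fitness: count of 1-bits. The real experiment scored how well
    # each circuit changed the incoming tone - we just need *some* score.
    return sum(genome)

def mutate(genome, rate=0.02):
    # Flip each bit with a small probability.
    return [1 - b if random.random() < rate else b for b in genome]

def crossover(a, b):
    # Splice two parent genomes at a random cut point.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def evolve():
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP_SIZE // 2]   # the fittest half "survive"
        # Pass adaptations on to the next "generation".
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Note what's missing: nothing in that loop says *how* to solve the problem, only what counts as surviving - which is exactly why the end result can work in ways the designer never specified.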
You dig, people? If we were to succeed in making a robot in this way (and it would be the most efficient way we currently know of) - it'd be out of our control in many ways. Whacking some "morality programs" in there would be both difficult and liable to get adapted around (if, that is, the machines were able to self-adapt without human aid. The technologies are growing all the time - but we're so far off currently, thank god.)
The Matrix has some damn good messages concerning this:
-be careful what you wish for (and recognise that you won't get exactly what you asked for)
-be careful of what you think you can control
-don't think scientific discoveries are fully under our control (or that when they are, we know what we're doing)
So many old-school and new-school stories deal with the moralities and practicalities of this. From genies in lamps, via the great sci-fi writers, to "with great power comes great responsibility" comic-book ethics (as we see in the Matrix etc. - do you think I like those films too much?)