Wednesday, 20 August 2014

Artificial minds, artificial consciousness

I am reading this rather dated but highly entertaining and informative book on future technologies that may or may not cross from science-fiction into science-fact, and the eccentric people who conceived and promoted them up to 1990.  Things like space-colonies, cryogenics, and nano-tech, but also, more interestingly to me, things like uploading human minds onto silicon-chips.  I have to say that, although the book is 23 years old, the technologies it talks about still seem very far from realisation!  Some of the ideas are so technophilic it's creepy.  Such as Hans Moravec's dream of uploading his mind into a "bush-robot": a robot with so many branching appendages, of every size and kind down to nano-manipulators, that it would have omnipotent control over its environment.  Who would want to be that robot?

A lot of these futurological and transhumanist ideas really do seem very close to religious ideas.  Conceiving of the self as a pattern of information derived from the mind, one that can be embodied in any suitable material or body from brain-flesh to silicon-chips and have a continuing existence independent of any particular incarnation, at last makes sense of the concept of the soul.  Its transmission and deathlessness at last make sense of transmigration and eternal life.  Nanotechnology makes sense of omnipotence, artificial super-intelligence of omniscience.  The transhumanists come across as wacky, but it can hardly be denied that they make much more rational sense of their spiritual impulses than any old-fashioned metaphysical religion does.

It is a very thought-provoking book, though, especially about ideas to do with artificial minds and intelligences.  (There is also a very interesting new book on the apparent risks of developing artificial super-intelligence.)  The particular questions it is provoking in me at the moment are about what a mind is, and whether an artificial mind could gain consciousness.  Here is an example of something provoking on the subject of uploading your mind:
The theory was, even though you're inside a computer you could still have exactly the same experiences you'd been having in your old body.  The only difference would be, now you'd be experiencing a simulation of reality rather than reality itself.  Or perhaps it would be more accurate to say that you'd be experiencing a different kind of simulation of reality, because as far as Moravec was concerned, that's all your original body ever gave you: a simulation, a mental construct that the brain put together out of the data conveyed to it by the senses.
Here's where my thinking is...

I have NetLogo on my computer.  It's free to download.  You can use it to simulate all sorts of "agent-based" scenarios and see emergent phenomena develop before your eyes.  It comes with lots of free scenarios to play around with.  There is a very effective butterfly-evolution game.  The game starts by scattering a load of randomly coloured butterflies across a photo of a landscape.  Every time you "catch" a butterfly by clicking on it, that individual is removed, but one randomly selected member of the remaining butterflies spawns an offspring.  That offspring is mutated in colour to a certain degree from its parent, and it spawns a certain distance away; you can play with the parameters.  What happens is, you catch the obvious butterflies easily, and in doing so spawn new offspring that are close in colour to the less obvious bugs.  After a short while, you really cannot see any butterflies at all.  You have caught all the visible ones, and the rest are hiding, having inherited their camouflaging colour from the survivors you missed.
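To make the mechanism concrete, here is a minimal Python sketch of that catch-and-spawn loop.  It is not the NetLogo model itself: the colour representation, the mutation size, and the automated stand-in for the human "player" are my own assumptions.

```python
import random

# A minimal sketch of the catch-and-spawn loop described above.
# The colour representation, mutation size, and the automated "player"
# are my own assumptions, not the actual NetLogo model.

MUTATION = 15                 # how far an offspring's colour can drift from its parent
POPULATION = 50
BACKGROUND = (110, 90, 60)    # hypothetical landscape colour

def random_colour():
    return tuple(random.randint(0, 255) for _ in range(3))

def mutate(colour):
    """Offspring colour: the parent's colour nudged by a small random amount."""
    return tuple(max(0, min(255, c + random.randint(-MUTATION, MUTATION))) for c in colour)

# Scatter randomly coloured butterflies across the landscape.
butterflies = [random_colour() for _ in range(POPULATION)]

def visibility(colour):
    """How far a butterfly's colour is from the background: how easy it is to spot."""
    return sum((a - b) ** 2 for a, b in zip(colour, BACKGROUND))

def catch_most_visible():
    """Remove the most visible butterfly; a random survivor spawns a mutated offspring."""
    prey = max(range(len(butterflies)), key=lambda i: visibility(butterflies[i]))
    butterflies.pop(prey)
    parent = random.choice(butterflies)
    butterflies.append(mutate(parent))

# The human player supplies the selection pressure by clicking; here a loop
# stands in for the player.  Replication, mutation, and selection are all real.
for _ in range(500):
    catch_most_visible()

print(sum(visibility(b) for b in butterflies) / POPULATION)   # average visibility falls
```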

Here they are, obvious to start with:


Here they are, evolved into camouflaged invisibility:


Here's where they are, highlighted out of their hiding-places by the program:


It's fun.  But the point I want to make about them is that this is real evolution.  It's not a simulation.  There is replication (offspring from survivors), there is mutation (of colour), there is selection (by predation).  There is a genetic code (a string of code determining colour) that is replicated with mutation.  I don't think this is a particularly controversial claim.  The genetic code for butterfly coloration undergoes actual evolution.  It's not a simulation.  (What would a simulation of this process involve?)

Is it artificial life?  Is it life?  If not, why not?

Anyway, what the computer-butterflies definitely don't have is any mind or intelligence, right?  The butterflies do not absorb, process, or act on any information.  They just disappear or spawn.  It's only their removal and replication that process information.

But then, why not conceive of the set of butterflies as an artificial super-organism, deathless (until you shut it down), each butterfly a cell?  Is this not life?  Why not?  It retains itself in existence from moment to moment by maintaining a certain disposition of ones and zeros.  Is that so different from holding onto life and existence by maintaining a certain disposition of homeostatic and thermodynamic properties in the cells and space of your body?  I don't know that it is.

On to minds.  There are other games with bugs that each do information-processing and react to their environment.  These ants (red) come out of the nest (purple) searching for food (blue).  When they are carrying food they emit pheromones (green) that fade over time.  When an ant walks over the pheromone plume, it keeps walking within the plume, often finding the food as it does so.  It is a very simple version of how ants recruit each other to exploit food-sources.
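A minimal Python sketch of that sense-and-react rule might look like this.  The grid, the evaporation rate, and the movement rules are my own simplifications, not the actual NetLogo model; the point is just that each ant senses something and adjusts its behaviour accordingly.

```python
import random

# A toy version of ant recruitment: ants wander, pheromone fades, and an ant
# that senses pheromone keeps moving within the plume.  All parameters and
# rules here are my own simplifications, not the NetLogo model itself.

SIZE = 20
EVAPORATION = 0.95

pheromone = [[0.0] * SIZE for _ in range(SIZE)]
food = {(15, 15)}
nest = (2, 2)

class Ant:
    def __init__(self):
        self.x, self.y = nest
        self.carrying = False

    def step(self):
        if self.carrying:
            # Head back towards the nest, dropping pheromone as we go.
            pheromone[self.x][self.y] += 1.0
            self.x += (nest[0] > self.x) - (nest[0] < self.x)
            self.y += (nest[1] > self.y) - (nest[1] < self.y)
            if (self.x, self.y) == nest:
                self.carrying = False
        else:
            # Sense: move to the neighbouring cell with the strongest pheromone,
            # otherwise wander at random.
            neighbours = [(self.x + dx, self.y + dy)
                          for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                          if 0 <= self.x + dx < SIZE and 0 <= self.y + dy < SIZE]
            best = max(neighbours, key=lambda p: pheromone[p[0]][p[1]])
            self.x, self.y = best if pheromone[best[0]][best[1]] > 0 else random.choice(neighbours)
            if (self.x, self.y) in food:
                self.carrying = True

ants = [Ant() for _ in range(10)]
for tick in range(200):
    for ant in ants:
        ant.step()
    # Pheromone fades over time.
    for row in pheromone:
        for i in range(len(row)):
            row[i] *= EVAPORATION
```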


In this simulation, there is clearly mind at work: there is information-processing.  Unlike the butterflies, there is information-processing performed by the program attached to each ant: when it senses pheromone it adjusts its movement.  My question is, is that a simulated mental process, simulated mind, or is it mind itself?  There is detection of stimulus (by a computer of a pattern embodied in digital ones and zeros) and reaction (execution of a different pattern than otherwise).  What more can you ask of a thing for it to be a mind?  In very basic, minimal form, this is a replica of the information-processing performed by an ant's mind, embodied in electrical and chemical changes in its brain.

Not a simulation, a replica.  Think back to the quotation about Moravec's thinking:
Or perhaps it would be more accurate to say that you'd be experiencing a different kind of simulation of reality, because as far as Moravec was concerned, that's all your original body ever gave you: a simulation, a mental construct that the brain put together out of the data conveyed to it by the senses.
If an ant's brain is an electro-chemical computer that processes and reacts to sensory input, then what fundamental difference does it make whether those inputs are produced by chunks of actual food or by ones and zeros interpreted by the program as chunks of food?  Either way, the brain receives sensory input, processes it, outputs reaction.  Even if the food is simulated, the input and output are the same.  The environment may be simulated, but the mind is replicated.  The mind is a real one.

Imagine scanning an ant's brain to gain total knowledge of the disposition and connections of its neurons, then instantiating them in silicon, then hooking that silicon brain back up to the original ant's antennae and legs and jaws.  That would be a real mind, right?  It would be receiving sensory input from the world, and outputting commands to the motor functions.  The ant would go on living its life as before.  So what difference does it make, then, whether the stimulation of this silicon mind is provided by the real world or by a simulated one?  None.
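The point can be put in code.  Treat the brain as nothing but a mapping from sensory input to motor output, and the source of the input is simply invisible to it.  A toy sketch, with sensor readings and class names made up purely for illustration:

```python
# The "brain" here is just a mapping from sensory input to motor output.
# It cannot tell, and does not need to know, whether its input came from
# physical antennae or from a simulated world.  Everything below is
# illustrative assumption, not a real implementation.

def ant_brain(senses):
    """The same input-to-output mapping, whatever the source of the input."""
    if senses["pheromone"] > 0.5:
        return "follow plume"
    if senses["food_nearby"]:
        return "pick up food"
    return "wander"

class RealAntennae:
    def read(self):
        # In the thought experiment this would poll physical sensors.
        return {"pheromone": 0.7, "food_nearby": False}

class SimulatedWorld:
    def read(self):
        # Here the same readings are produced by a program's ones and zeros.
        return {"pheromone": 0.7, "food_nearby": False}

for source in (RealAntennae(), SimulatedWorld()):
    print(ant_brain(source.read()))   # identical output either way
```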

Just as an ant's brain would not cease to embody a real mind just because you stimulated it with electrodes instead of reality, so, by the same token, a silicon mind would not cease to be a real mind because it was stimulated by ones and zeros.  In which case, I think it is quite plain that the computer program provides each ant with a mind instantiated in silicon.  Not a simulated mind, but an artificial mind, which is a kind of real mind.

Now, is there anything a human or higher-animal mind can do that an artificial mind cannot?  An artificial mind can know, learn, adapt and react.  It can run in different modes which parallel emotional states: imagine a chess computer that goes all-out to capture its opponent's queen if it loses its own.  Is this not the information-processing equivalent of the desire for immediate revenge against the opposing queen that might motivate a human player?  It might even be an irrational show of emotion that worsens the computer's chance of winning the game in the end.  Why not program a computer with any number of such "emotional" modes?
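For instance, here is a toy sketch of what such a mode might look like in a chess engine's evaluation function.  The numbers and the "revenge bonus" are my own inventions, not any real engine's code:

```python
# A toy sketch of an "emotional mode" in a chess engine's evaluation function.
# The piece values and the revenge bonus are invented numbers; the point is
# only that a mode can reweight what the program "wants".

PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def evaluate(material_balance, captures_opposing_queen, revenge_mode):
    """Score a candidate move: material gained, plus an irrational bonus in revenge mode."""
    score = material_balance
    if revenge_mode and captures_opposing_queen:
        score += 20   # the "desire" for immediate revenge, out of all proportion
    return score

# A move that wins the opposing queen but loses two rooks in return:
trade = PIECE_VALUES["queen"] - 2 * PIECE_VALUES["rook"]   # -1, a bad trade

print(evaluate(trade, captures_opposing_queen=True, revenge_mode=False))  # -1: rejected
print(evaluate(trade, captures_opposing_queen=True, revenge_mode=True))   # 19: chosen anyway
```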

I think the two most prominent doubts over whether computers could perform like a human mind concern creativity and consciousness.  Creativity I don't want to get into here, because I'm not quite sure just what it is.  So consciousness it is.  I mean the subjective feeling possessed by a mind of what it is like to be a mind, or at least the subjective feeling of what the mind presents to the conscious part of itself.  Something like that.

My point on this is simple.  If you are a philosophical naturalist, and believe that the human mind is completely embodied in the human brain, being an information-processing organ that operates according to the laws of nature, then you know that an electro-chemical information-processing machine is capable of producing consciousness.  Something about the material structure of the brain gives rise to consciousness.  Therefore, it follows, any artificial brain that had an equivalent structure, no matter what kind of material it was instantiated in, would also give rise to consciousness, as long as that equivalence could indeed be realised.  And if it is the information-processing architecture of the brain that matters for producing consciousness, rather than, say, something about organic cells, then any equivalent information-processing architecture, instantiated in man-made electronics and chemistry, will also produce consciousness.

So if you build a computer that does whatever it is the brain does that gives rise to consciousness, then you will have created artificial consciousness (obviously).  Since the brain can do it, I find it unlikely that a computer cannot do it in principle.  In which case, it seems plain that, if science could continue to make discoveries for long enough, it would be possible to construct not just artificial, real minds, but artificial, real consciousness.

Finally, if a computer can really be conscious, and if Moravec is right in equating the simulation your brain produces of the real world with the simulation it would produce of an artificial world inside a computer, then we have to accept that a mind living inside a computer simulation could be living a life equivalently real to ours, and living it consciously.  It just would not be living it in the real world.

Like me, somebody will say... ;)
