The purpose of this post is to explore the requirements to recreate a living, conscious human being on a computer, as opposed to running a functional model of a brain in software.
Prompted by Greg Egan's "Permutation City", which I'm currently reading.
I appreciate you taking the time to critically examine my argument.
Neuronal activity consists largely of neurons firing, spikes propagating, and synapses forming and changing. These things can happen either as a result of external (sensory) input coming into the brain, or as a result of feedback loops within the brain itself.
We can capture the state of the brain at any particular moment by recording all relevant parameter values. These parameters can be plugged into a functional model of the brain, together with any input signals. The model lets us predict (calculate) how the system will change when started from those initial parameters. (The real brain changes according to the laws of physics: for example, if the electrical potential in some neuron is large enough, that neuron is likely to fire. It also changes as the input signals change.) The system uses analog signals and is not governed by a global clock, so the change is analog (gradual). There is no "next state" to speak of; the state is continuously changing. We can take "snapshots" of a real, living brain at different times, or we can calculate the state of the brain at those times. If the results are identical, we have a good model.
Calculating the state of the brain at successive points in time, given initial parameters, sensory input, and a functional model, can be considered an active, ongoing brain simulation. Calculating those states frequently enough lets us construct a pattern of neuronal activity, which we can then decode as specific thoughts, feelings, and motor commands intended to generate actions. We could have a robot perform the actions, and this robot would appear alive and even "conscious". However, there is no living "being" controlling this robot. The brain state calculations could, in principle, be done on paper, because it's all just number crunching*. The calculated numbers could tell us what the real person would feel like, if it were a real person. But it's not. It's a description of a real person: a mathematical model with a bunch of parameters.
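To make the "snapshot, then calculate forward" idea concrete, here is a minimal sketch in Python of a single toy leaky integrate-and-fire neuron. This is not a claim about how a real brain model would work; all names, parameters, and values (the time step, time constant, threshold) are illustrative assumptions. The point is that the continuous change of the real system is approximated by repeated discrete calculations:

```python
# Toy "functional model": one leaky integrate-and-fire neuron.
# All parameters (dt, tau, threshold, input) are arbitrary illustrative values.

def step(v, input_current, dt=0.1, tau=10.0, v_threshold=1.0, v_reset=0.0):
    """Advance the membrane potential v by one small time step dt.

    The continuous change dv/dt = (-v + input_current) / tau is
    approximated discretely; the smaller dt, the closer the calculation
    tracks the "analog" system it describes.
    """
    v = v + dt * (-v + input_current) / tau
    fired = v >= v_threshold
    if fired:
        v = v_reset  # the model "fires" and its potential resets
    return v, fired

# "Snapshot" of an initial state, then calculate successive states.
v = 0.0
spikes = []
for t in range(1000):
    v, fired = step(v, input_current=2.0)
    if fired:
        spikes.append(t)

print(len(spikes))  # how many "firings" the calculation produced
```

Note that nothing in this loop is a physical spike: each "firing" is a number derived from other numbers, which is exactly the distinction the argument turns on. The same arithmetic could be done on paper, only slower.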
Such a robot would already be pretty impressive, but how do we create a "living being"? For that, we need to switch from performing calculations to running physical processes. We need to build a system where processes are happening "on their own". Instead of calculating the "next state", we need to let the system run so that any "next state" would develop naturally. Instead of calculating a snapshot at a particular time, we should have a system that has a continuous physical state at all times.
It's not clear how accurately we need to imitate the relevant physical processes in hardware, or if it's possible to use some software abstractions. For example, can we represent synapses as numbers stored in memory, or must they be actual physical devices, such as memristors? Do we need to generate analog voltage spikes on dedicated wires, or can we use digital data packets on a switched network between neurons?
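As a sketch of what the "software abstraction" end of that spectrum might look like, the following Python fragment represents synapses as plain numbers in memory and a spike as a data packet delivered to downstream neurons, rather than an analog pulse on a dedicated wire. The neuron ids, weights, and function names are all hypothetical:

```python
# Sketch of the "software abstraction" option: synapses as numbers
# in memory, spikes as discrete messages. All values are illustrative.

from collections import defaultdict

# Synaptic weights stored as numbers, keyed by (pre, post) neuron ids.
synapses = {("n1", "n2"): 0.8, ("n1", "n3"): 0.3}

def deliver_spike(pre, inbox):
    """Route one "spike packet" from neuron `pre` to its targets."""
    for (src, dst), weight in synapses.items():
        if src == pre:
            inbox[dst] += weight  # accumulate weighted input at the target

inbox = defaultdict(float)
deliver_spike("n1", inbox)
print(dict(inbox))  # {'n2': 0.8, 'n3': 0.3}
```

Whether such a representation preserves everything that matters, or whether the physical substrate (memristors, analog voltages) is itself essential, is exactly the open question posed above.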
I tend to think that as long as we recreate the movement, transformation, and storage of important information throughout the entire system, we have a living being.
*Compare with Searle's Chinese Room thought experiment.