On artificial intelligence

Dec 27, 2004 13:29


Shall I wade straight in?
I believe I shall.

I'll start with two sets of definitions, one for intelligence:

intelligence
n.

The capacity to acquire and apply knowledge.
The faculty of thought and reason.
Superior powers of mind.

And one for knowledge:

knowledge
n.
The state or fact of knowing.
Familiarity, awareness, or understanding gained through experience or study.
The sum or range of what has been perceived, discovered, or learned.

So, nice: I've copied and pasted some definitions from www.dictionary.com; I'm sure you're very proud of me. Why do these definitions matter?
Intelligence derives from a state of knowledge and understanding, and from the process of reasoning over that knowledge. Knowledge is acquired through experience or learning. All experience or learning occurs through interaction and the changes caused by that interaction.

Interaction leads to change; we call that change experience. That experience leads to knowledge, which can be measured as a change in behaviour.

(abstract reasoning is covered by this chain of events as the interaction of an entity's brain with itself)

To reduce this further, you could say:
Interaction leads to experience leads to change in behaviour
or
Interaction leads to internal change leads to external change

Where does this leave intelligence?
It leaves it in the position of being a metric we use to measure the complexity of behaviour and the effects of interaction on behaviour, i.e. how experience changes us.

So how can you create an artificial intelligence?
Well, what do you mean?
"How can you create an artificial entity with a domain of interactions with an environment that has behaviour of measurable complexity and whose behaviour is altered by it's interactions with its environment"?

Well, if that's what you mean, then we already have. If you want some references I'll dig up some papers, but as a thought experiment, imagine a minimal simulation or experiment which contains an entity matching the above criteria. It's not difficult.
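For concreteness, here is roughly what such a minimal simulation might look like. This is only a sketch of my own, in Python; the entity, its two actions and the reward numbers are all invented for illustration, not taken from any paper. It gives an entity a domain of two interactions with an environment, and lets the outcomes of those interactions change its internal state, and therefore its behaviour.

import random

class Entity:
    def __init__(self):
        # Internal state, shaped only by interaction: one weight per action.
        self.weights = {"approach": 1.0, "avoid": 1.0}

    def act(self):
        # Behaviour: choose an action with probability proportional to its weight.
        total = sum(self.weights.values())
        r = random.uniform(0, total)
        for action, weight in self.weights.items():
            r -= weight
            if r <= 0:
                return action
        return action  # fallback for float rounding

    def experience(self, action, outcome):
        # Interaction leads to internal change: reinforce what paid off.
        delta = 1.0 if outcome else -0.5
        self.weights[action] = max(0.1, self.weights[action] + delta)

def environment(action):
    # A trivially simple world: "approach" usually pays off, "avoid" never does.
    return action == "approach" and random.random() < 0.8

entity = Entity()
print("before:", entity.weights)
for _ in range(200):
    action = entity.act()
    entity.experience(action, environment(action))
print("after: ", entity.weights)

Run it and the "approach" weight climbs while "avoid" decays: interaction has produced internal change, and that internal change shows up as a measurable change in external behaviour, which is exactly the criteria above.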

Or did you mean "How can you create an artificial human being, with the same complexity of behaviour, capacity for adaptation and understanding of its world as a human being?"

Well, if that's what you meant then we can't.

Why not? Well, we can clearly make a machine of arbitrary complexity, that's no issue. I also have no doubt that we can create a machine with as much adaptability as a human being. The only reason that should not be possible is if you believe we are more than just machines, or if you think there is something special about the chemistry we are constructed from. Something so special that you couldn't model it in a computer of "suitable power". If you do, then I'd like to ask why (beyond spouting "Quantum processes, it's the nano-tubules guv'nor and we can't simulate them, honest", which always feels like an argument of the form "but we must be special else...else...I don't feel special and I like feeling special, can't we just pretend we have souls?").

The problem I have is that of "understanding". Understanding is acquired from experience, and experience is formed from interactions with an environment and the changes those interactions cause. By that definition, your understanding is regulated by the structure of your being: your inputs, internal processes and outputs (however much you like to arbitrarily delineate such processes from each other, or from the act of "being"). Change the structure and you change the domain of interactions and the domain of perturbations (how you interact, and how you are changed by such interactions).

So your understanding is literally that: it is "your understanding". My understanding of "trees" is caused by my every interaction with entities I choose to label "tree". Your understanding of trees is qualitatively different from mine, because of the differences in our structures and the necessary differences in our experiences: being at different points in space-time for an otherwise similar experience, for instance. A dog's understanding of "tree" is, again, qualitatively different from our understanding, as it is from every other dog's understanding.

There are invariants, but they can be measured only as invariants in behaviour, since no other metric can tell us anything with any certainty about the similarities or differences in our understanding. This is why anyone who thinks about other beings for any amount of time comes to the conclusion that they cannot be certain of the realness of anyone but themselves.

So, for a computer to have an understanding of trees, it must have experience of trees, which is mediated by its being and defined by its structure and its domains of interactions and perturbations. To have human understanding you need human inputs, human thought processes and human outputs, made from the same chemistry as a human being; otherwise it is computer inputs, computer thought processes and computer outputs, forming computer understanding (and only for that specific computer).

So human understanding is impossible for a computer without that computer being of the same form and function as a human being, which would get us absolutely nowhere.

In the end, we both already have artificial intelligence and will never achieve it.

To apply this paradigm to problems in AI, such as John Searle's Chinese Room argument: the man in the room understands Chinese to the extent of his environment, the inside of a room filled with Chinese-symbol input, a book of thought processes and Chinese-symbol output. He does not understand Chinese as a Chinese speaker does, but he understands it through his interactions with its symbols, in a way no Chinese speaker would.

That a person talking to the room might measure the intelligence of the room by their interaction with it, and conclude that the room speaks Chinese in the same way that they speak Chinese, is unimportant. I cannot know that someone replying to this post understands these ideas in the same way I do; in fact, I have guaranteed that they do not (to some degree). All I can guarantee is that their behavioural response shows some kind of understanding, different from mine but sharing some invariants, inferred from the invariants in our behaviour. And that's all any of us can guarantee, ever.

And that's my thinking over for today.

Mike