It is not possible to have a conscious AI

Apr 25, 2016 11:12


The Chinese Room demonstrates that the outward appearance of understanding meaning is by no means an actual indicator of understanding.
Bona fide understanding is a main feature of conscious thinking. If something is not conscious, it is not possible for it to understand.
What goes on inside the Chinese Room is an analog of programming. AI is …


Re: Programming AI does not entail an understanding of the brain nanikore May 15 2016, 02:36:02 UTC
Okay. It seems that the contention has been narrowed down to one single issue: that of complexity.

The argument goes both ways. When we traverse downwards in the animal kingdom, how many computers does it take to simulate a cnidarian?

Also, a simulation of the symptoms of what could be construed as a conscious being is of course by no means in itself a conscious being. The separation between the actual and the simulation in this case is twofold:

1. Separation between symptoms of consciousness and consciousness (Chinese Room and p-zombie arguments) e.g. The supercomputer which simulates a rat brain isn't conscious.

2. Distinction between simulation and simulacrum. Searle somewhat indirectly makes this point in his biological naturalism. This goes back to the question of exhaustivity once again. There is no exhaustive account of the brain; therefore there is no exhaustive simulation. What is deemed a simulation is then a simulacrum.

The above two points support each other. Even if one ignores the Chinese Room and p-zed arguments, there's the question of exhaustivity. Even if one ignores the question of exhaustivity, there are the Chinese Room and p-zed arguments.


Re: Programming AI does not entail an understanding of the brain dragonlord66 May 15 2016, 10:15:57 UTC
At this point you're going to have to define "conscious", as I don't know what definition you're working with. To my mind a rat isn't conscious in the sentience sense. I believe that there are already tests for self-awareness (such as recognising yourself in a mirror).

I don't know how many computers it takes to simulate a cnidarian, as I've not seen any news articles of someone doing that.

1) Equally, the brain that our mind is running on isn't conscious; it's the utilisation of that brain that gives rise to sentience. Also, I ask again: how do you prove that something isn't sentient without also proving that a large number of natural humans, whom we all agree are sentient, aren't? The Chinese room doesn't actually say that there couldn't be a Chinese speaker in there who understands the responses it's giving, or that the English speaker hasn't learned how to read and write Chinese, just that from an outside point of view you can't tell the difference. We are also coming back to the brain-in-a-jar problem again. Is it possible to prove that you are not just a simulation?

2) "There is no exaustive account of the brain" - What happens when there is? I've already shown that they are able to simulate a portion of a rats brain, so what happens when they get to the point that they can simulate a whole human brain?

The other thing to bear in mind is that p-zombies don't actually support themselves, as they are logically flawed by requiring absolutely identical responses to stimuli as a non-zombie.

Also, the Chinese room isn't a proof that an AI cannot exist; rather, "The point of the argument is this: if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have." (source: http://plato.stanford.edu/entries/chinese-room/)

What he is saying is that if your computer program relies on rules and stored knowledge to appear sentient, then it isn't. It doesn't say that we can't write a computer program that learns by itself. He then goes on to reject the idea that you can apply the same Chinese room argument to natural humans, using an argument that boils down to: because faith (we're human, therefore we understand). In my opinion he has no valid comeback to the brain simulator, other minds, or intuition replies, as they all boil down to: we're human and a computer isn't. Hell, even Turing was aware that human bias is a problem, and so wrote his test (which is supposed to last a few hours) such that the person conversing wouldn't know whether there was a computer or a human on the other end.

My opinion is that the Chinese room argument was valid in its day, but had no appreciation of the advances we have since undergone, because it was an out-of-context problem. However, as our understanding of brains and neurological functions continues to improve, and our ability to mimic those functions in silicon improves with it, we are fast approaching (within the next 500 years) the point where we will be able to build an actual artificial brain. At that point there will be no functional difference between computer intelligence and human intelligence. Personally I think that we will have a self-aware computer before we have an artificial brain.

BTW you still haven't come back with a proof that you are not a brain in a jar (see first response).


Re: Programming AI does not entail an understanding of the brain nanikore May 15 2016, 11:08:20 UTC

Again, you're focusing on function, being a card-carrying functionalist.

Consciousness is not a function. Something that handles syntax without comprehension is not understanding anything. This goes back to my original post. The argument hasn't changed one bit.

You've repeatedly asked the same questions in your subthread that another user asked in theirs, and this time it's no different.

As I've said to user esl in the other subthread,

A conscious entity, i.e. a mind, must possess

1. Intentionality
2. Qualia

Exactly which logical errors does the p-zombie argument contain? P-zombies provide indistinguishable output just as Chinese Rooms do. I call them "consciousness rooms".
There is no full simulation, because it would be a simulacrum, as per my previous reply. Not only that, a supposed full account contains no guarantees against underdetermination.
The brain in a jar is a red herring for this topic, for reasons I've stated in an earlier reply.
Whether something "learns" is irrelevant. Machine learning programs are not conscious; to say that they are would be relying on behaviorism.


Re: Programming AI does not entail an understanding of the brain dragonlord66 May 15 2016, 12:49:40 UTC
How do you know you have intentionality or qualia? How do you know the people around you do?

The answer is that you don't; you take it on faith that you do. This is because the biological signals that we receive from our senses get translated into something our brains can interpret, which is then abstracted into something different from the raw signals we received.

Equally, why is it not possible for a computer to do the same thing? Or do you think that we as biological systems are so unique that consciousness could never exist in silicon? Why can't a computer have the thought/realisation "I think therefore I am", which is the bedrock that everyone works from (even if they don't consciously realise it)?

As for "Consciousness is not a function", I'm not arguing that it is, I'm arguing that it necessarily arises out of a certain level of complexity in the underlying system. It is an abstraction that exists because of and despite the complexity of the underlying systems, and that a sufficently complicated computer system that is programmed to learn, will also achieve a similar thing. Before you start again on the idea that the human mind is above function, don't forget all the stroke victims that have lost their memories (tiny change in the functions, massive change in the abstraction we call the mind) or have been paralised, or some other thing. Also consider that a newborn baby could not pass any of the tests that various philosophers ascribe to being a thinking being. Does that mean that new born babies are not people?

P-zombies - A p-zombie is a human-shaped creation that does not comprehend the input it is given, yet reacts in such a way that it appears to. If you poke it, it will flinch as if you poked a real person; if you give it scathing remarks, it will do the same. But apparently it has no comprehension of what it's doing. This is logically inconsistent: if it had no comprehension it couldn't react, and it reacts, therefore it must comprehend. If a p-zombie shows the full range of reactions of a real person, ipso facto it must be a real person and should be treated as such. Otherwise you get into very dark moral and ethical waters, because the reverse of the question becomes: this real person has been introduced as a p-zombie, and the person who set up the experiment has been incapacitated; prove that they are not a p-zombie. Expanded out to: this group of people has been revealed to all be p-zombies; prove that they're not. Therefore, logically, if something appears to be human to all scans and diagnostics, it must be human, because if it isn't, then every human must be a p-zombie.

As for why the brain in a jar is appropriate (and you're still dodging the question): let's say I sedate you while you're sleeping and then transplant your brain into a jar, hook you up to a robot body, and then wake you up. I then introduce you to your "brain", which is a massive computer in a room, and tell you that the simulation went well and we're going to debrief you in 48 hours before deactivating you and using your parts to make a better simulation. Is there anything you could do to prove to these people that you were not a simulation? After all, they control all access to your senses, and can "switch you off" any time they want just by administering a sedative.

So, how do you prove that you're not a brain in a jar, or not an AI?

I suspect that if you think about it, you can't; you can only come back to that article of faith, "I think therefore I am".


Re: Programming AI does not entail an understanding of the brain nanikore May 16 2016, 08:28:24 UTC
Do you know exactly what it's like to be me? If you do, then you can deny subjective experience.

It doesn't make any sense either to deny your own consciousness or to deny that of others around you. How do you prove that you're not imagining everything? You can't, and it's a useless question. Global skepticism leads nowhere.

What level of complexity, and in what? That's hand-waving. Space launch systems are immensely complex. Are they conscious?

I've already stated the reason that renders learning irrelevant.

Newborns and stroke victims still have subjective experiences: intentionality and qualia.

A Chinese Room reacts without comprehending. So do p-zeds. They put up a "conscious act" just as a Chinese Room puts up a "Chinese language act". Nothing logically fallacious about it...

The proof is in the pudding. Those who put me in the jar would know the truth of the matter. There is no point to any proof; it's as if an absence of proof of a state of the universe could change the nature of said universe. Once again you're repeating the same basic line of questioning as another user in another subthread:

(in reply to redslime)
================
You are correct that we could not distinguish between a p-zombie and a bona fide sentient being.

However, that distinction is prima facie, under the condition that we do not know the origin of said p-zombie.

If we know a p-zombie to be a p-zombie, then its identity as a non-sentient thing would be established.

If we know, for example, that some thing is programmed instead of a result of a life process (e.g. birth, which instills native intelligence instead of artificial intelligence) then we would know that it is not sentient.

None of us would demonstrate such self-deception and other-deception as to knowingly mis-categorize, at least I hope not.
================

As I've said multiple times now, such a line of questioning is a red herring.


Re: Programming AI does not entail an understanding of the brain dragonlord66 May 16 2016, 09:39:33 UTC
We are talking about orders of magnitude more complexity than even a space launch system. Each neuron can make tens of thousands of connections, and there are more than 200 billion neurons in the human brain (source: https://www.sciencedaily.com/releases/2010/11/101117121803.htm )
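Taking those two figures at face value, a back-of-the-envelope calculation (a sketch using the comment's numbers, not settled neuroscience) puts the implied connection count in the quadrillions:

    # Rough connection count implied by the figures cited above.
    # Both inputs are the comment's numbers, not textbook values.
    neurons = 200e9               # "more than 200 billion neurons"
    connections_per_neuron = 1e4  # "tens of thousands of connections"
    total_connections = neurons * connections_per_neuron
    print(f"{total_connections:.0e}")  # prints "2e+15"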

An AI would also have subjective experiences; in fact, AFAIK all mammals have subjective experiences, and there's a large raft of mammals that have the mental capacity of a 2-year-old. But we don't call them sentient.

A Chinese Room starts out by reacting without comprehending, but there is nothing to say that the person inside cannot learn Chinese through dealing with all the requests.

P-Zombie
---------------------------------------
I make a P-Zombie that is an exact facsimile of you, even down to DNA analysis, and arrange for you to be taken into hospital. While you're under sedation, I copy all of your knowledge and install it into the P-Zombie so that it has all of your reference material to respond with. You wake up next to the p-zombie half an hour later.

I know that one of you is the P-Zombie, but after you've both been discharged for a day, how could I tell?

Brain in a Jar
-----------------------------------
You're still missing the point. If you can't prove that you're a real person, then surely the onus should be on proving that you are not a real person. This is the same principle used in law with the presumption of innocence; if we didn't have that presumption, it would be all but impossible for someone to prove their innocence. The same is true of artificial intelligence: if we don't have the presumption of intelligence, then we can never prove something is intelligent. This is the point of the Turing Test; it's supposed to be A/B testing with known humans and a new AI. If the AI can hold a conversation with a person over the course of a couple of hours, then it passes the test. (Turing Test: http://plato.stanford.edu/entries/turing-test/ )

Duck Test
-----------------------------------
If it looks like a duck, walks like a duck, and quacks like a duck, we should treat it like a duck. The same test should be applied to AI, rather than trying to prove the reverse.

Falsifiability
-----------------------------------
How can something fail your tests? P-Zombies: it is impossible to prove that something isn't a P-Zombie, therefore the proof should be that something is a P-Zombie. Chinese Room: again, it's impossible to prove that something isn't a Chinese room, therefore we should try to prove that something is. How do your arguments stack up against falsifiability? Is it possible to prove that the assertion is false without prior knowledge?


Re: Programming AI does not entail an understanding of the brain nanikore May 18 2016, 06:40:27 UTC
Where does the bottom stop? Billions? Tens of thousands? There is no clear delineation anywhere. In contrast, based on the definition of what makes an organism belong in the animal kingdom, we can make delineations. Lack of delineation would make a definition incoherent.

You're stating that they would, in the face of my proof that they wouldn't. You've attacked my proof, but what you're offering me now is proof-by-assertion.

Searle himself offered rejoinders to objections to the Chinese Room argument. I'm not sure whether what I'm stating below falls under one of his, but here it is.

The person inside the Chinese Room is looking at nothing but cues for looking up other cues. That's what a machine does. There is no possibility of learning, because no meaning penetrates the room in the first place; all meaning resides on the outside of the room.

Suppose that the person inside the Chinese Room receives a Chinese text which is automatically converted to an index on its way in:

- When 1F is displayed on the inside monitor, type R4 on the keyboard
- When W% is displayed on the inside monitor, type 4# on the keyboard
...and so on.

The index is functionally identical to the lookup mechanism in the original proof, except the POV isolation has been clarified.

Once the person types the index response, some corresponding Chinese text is displayed on the outside of the room.

Because of the perfection of the index, the Chinese speaker on the outside of the room is fooled into thinking there is a live Chinese speaker inside the room.

There is zero comprehension. The index may "look" and feel organic externally, but it is inorganic in its internal operations.
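A minimal sketch in Python makes the point concrete (the cue/response pairs are the invented ones from the list above; a real room would need an entry for every possible input):

    # Toy "index room": a bare mapping from input cues to output cues.
    # Nothing in this table represents the meaning of any Chinese text;
    # meaning exists only for the observer outside the room.
    INDEX = {
        "1F": "R4",
        "W%": "4#",
    }

    def room(cue: str) -> str:
        """Return the scripted response for a cue by pure lookup."""
        return INDEX[cue]

    # Typing the returned code makes fluent Chinese appear outside,
    # yet no step in here ever touched meaning.
    print(room("1F"))  # prints "R4"

The table could be made arbitrarily large and layered without a single entry ever encoding comprehension.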

P-zombie: You can't. But the important thing is that you know one of them is still the p-zed. I hope you realize the significance of this. What exactly would be the point of such self-deception and other-deception?

Brain in jar: I have already said this before and I will say it again: proof, or the lack of proof, of a state of the universe does not change the state of the universe. It's like saying that God must exist (or must not exist) simply because you can't prove it... Since it's useless for each of us to deny our own consciousness, the issue is a red herring.

Also, there is a distinction between intelligence and consciousness. Of course we already have artificial intelligence. However, we can never have artificial consciousness.

Duck Test - You are conflating the concept of treating something as X, and something BEING X.

Falsifiability- Why would someone make p-zeds, and then destroy all records of their making? Again, and this is a serious question: what exactly is the point of such self-deception and other-deception? (see topic "P-zombie" above)


Re: Programming AI does not entail an understanding of the brain dragonlord66 May 18 2016, 09:12:08 UTC
Logical assertion is how most of philosophy works; the thought experiments are just ways to provide a mental playground in which to test certain ideas.

Please define "subjective experience" in such a way that it excludes an AI but doesn't include any markers limiting it to carbon-based cellular life forms. Also, you have failed to provide any more than assertions, and any such proof can only come when there is actually something that reliably passes the Turing test without any social tricks to put people off a line of questioning or make them forgive certain slip-ups.

Suppose the person in the Chinese room is accidentally sent a request in English. And no, your index is not functionally equivalent, as it requires pre-processing of the Chinese text before it gets to the room. The other thought that occurs is that because there is no understanding of the requests implicit in the setup of the Chinese room, any AI that shows the ability to learn, remember, and initiate conversation is by definition not a Chinese room. (e.g. as part of a chat on Thursday I say that I'm not going to be around tomorrow as I'm going to a party; then on Saturday the AI asks how the party went, without me mentioning the party again.)

P-zombie: If you don't know which one is the p-zombie, then by definition it's a clone, not a zombie.

Brain in a jar: The point is not that you know that you are sentient; rather, it's that other people know that you are sentient. It is impossible to prove that we're not all living in a simulation of the world, and that we aren't pre-programmed AIs playing some scenario out for some reason. Therefore we have to take it as an article of faith that what our senses tell us is true. The same is true of an AI: it is impossible to prove that an AI is sentient, in the same way that it's impossible to prove that a human is sentient. Therefore we should start out with the assumption that it is, and then prove that it's not - and apply the same tests to a human (after all, if a human can't pass the tests, then another sentient certainly can't).

Duck test: No I'm not; I'm saying that your starting point must be to treat it as if it is X, and then prove that it's not X. If you start from the other direction, then all houses are actually just piles of bricks.

Falsifiability - Given that we're talking about thought experiments, why wouldn't they? Or how about this: there was a fire and the records got destroyed. Thought experiments by their very definition are not real and normally don't make sense in a real-world scenario (who the hell would go to the trouble of making something that reacts to all stimuli as if it were a real person but can never understand what it's reacting to?)


Re: Programming AI does not entail an understanding of the brain nanikore May 19 2016, 06:32:54 UTC
Logical assertion is a step in a series of steps in a logical proof. What you offered was a flat assertion in the form of:

An AI would have subjective experience; therefore, an AI would have subjective experience.

Contrast with what I performed, which was inductive logic:

I am conscious.
I am a member of homo sapiens sapiens.
Homo sapiens sapiens is conscious.

I prefer to deal with that first before branching endlessly into other points.

Exactly which definition of the terms "intentionality" and "qualia" contains "markers limiting it to carbon-based life forms", whatever that means?

There can be more layers to the index to give rooms the appearance of learning. Google's search algorithm learns, and it's not conscious. Apple's Siri is slated to acquire a learning algorithm, and it's not going to be conscious either.

Good grief. An English Room would do the same thing as a Chinese Room. There is no processing, only a lookup table, just like the Chinese Room. "Preprocessing" is a red herring unless you want to assert consciousness for things such as OCR. Captcha readers aren't conscious.

For example, the above paragraph would look something like this on a monitor inside the room:

47hf e3i2k d3 s29ff9 tufuj 3894 f98 34 d83j 4889 ad 48 34902 d9u0sk 38kd k2i85 283ki 48jil 983j 48k 38kd 38slk 37kf 38kl sj4

...and the response, when confronted with that string, would be yet another string of code typed into the keyboard, looked up from an infinitely large cheat sheet (the analog of a program). There is no comprehension in the lookup, only the response the lookup dictates. It's a thoroughly mechanical task, beginning with scanning all the entries starting with "47hf" and so on.
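A sketch of that mechanical scan, under the same assumptions (Python; the request string is a shortened version of the one above, and the reply string is invented for illustration):

    # Toy "cheat sheet": (encoded request, encoded reply) pairs.
    # Entries here are invented; the room's sheet would be vast.
    CHEAT_SHEET = [
        ("47hf e3i2k d3 s29ff9", "8jd2 k38s 9dk3 48sl"),
    ]

    def respond(encoded_request: str) -> str:
        """Scan the sheet for an entry matching the request and return
        the paired reply. Strings are compared, never interpreted."""
        for request, reply in CHEAT_SHEET:
            if request == encoded_request:
                return reply
        raise KeyError("no matching entry")

    print(respond("47hf e3i2k d3 s29ff9"))  # prints "8jd2 k38s 9dk3 48sl"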

The so-called "clone" isn't conscious. That's why it's not a clone but a p-zed. You've built the thing, but you didn't go into how you did it.

Whether other people know something or not is beside the point; it doesn't change whether something is objectively true.

Is there a God, or isn't there? We don't know. Yet it still remains that either God exists or doesn't exist. We can't falsify God.

Did OJ Simpson kill his wife? People don't really know. The defense argued that if the (blood-shrunken) glove didn't fit, then OJ must be acquitted. There have been plenty of bad cases.

You can remove all proof of the p-zed being made, yet the fact would still remain that there is a p-zed.

Red herring, as I've stated so many times before.

There is already proof that Chinese Rooms are not Chinese speakers.



