It is not possible to have a conscious AI

Apr 25, 2016 11:12


The Chinese Room demonstrates that the outward appearance of understanding meaning is by no means an actual indicator of understanding.
Bona fide understanding is a defining feature of conscious thinking. If something is not conscious, it is not possible for it to understand.
What goes on inside the Chinese Room is an analog of programming. AI is ( Read more... )


Re: Programming AI does not entail an understanding of the brain dragonlord66 May 18 2016, 09:12:08 UTC
Logical assertion is how most of philosophy works; the thought experiments are just ways to provide a mental playground in which to test certain ideas.

Please define "subjective experience" in such a way that it would exclude an AI but doesn't include any markers limiting it to carbon-based cellular life forms. Also, you have provided nothing more than assertions, and any such proof can only come when there is actually something that reliably passes the Turing test without any social tricks to put people off a line of questioning or make them forgive certain slip-ups.

Suppose the person in the Chinese Room is accidentally sent a request in English. And no, your index is not functionally equivalent, as it requires pre-processing of the Chinese text before it gets to the room. The other thought that occurs to me is that because no understanding of the requests is implicit in the setup of the Chinese Room, any AI that shows the ability to learn, remember, and initiate conversation is by definition not a Chinese Room (e.g. on Thursday I mention in chat that I'm not going to be around tomorrow as I'm going to a party, then on Saturday the AI asks how the party went without my saying that I went to a party yesterday).
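To make the party example concrete, here's a toy sketch in Python (my own illustration; the class name and the "party" trigger are invented): a bot that remembers what was said and later raises the topic unprompted is doing something a static lookup table, by construction, cannot.

    # Toy illustration: memory plus initiative versus a pure lookup.
    class RememberingBot:
        def __init__(self):
            self.facts = []              # statements remembered from earlier chats

        def hear(self, msg):
            self.facts.append(msg)       # "learn and remember"

        def initiate(self):
            # Bring up an earlier topic unprompted; a lookup room can't do this.
            for fact in self.facts:
                if "party" in fact:
                    return "How did the party go?"
            return None

    bot = RememberingBot()
    bot.hear("I won't be around tomorrow, I'm going to a party")   # Thursday
    print(bot.initiate())                # Saturday: "How did the party go?"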

P-zombie: If you don't know which one is the p-zombie, then by definition it's a clone, not a zombie.

Brain in a Jar: The point is not that you know that you are a sentient; rather, it's that other people know that you are a sentient. It is impossible to prove that we're not all living in a simulation of the world and that we aren't pre-programmed AIs playing out some scenario for some reason. Therefore we have to take it as an article of faith that what our senses tell us is true. The same is true of an AI: it is impossible to prove that an AI is sentient, in the same way that it's impossible to prove that a human is sentient. Therefore we should start out with the assumption that it is, and then prove that it's not, applying the same tests to a human (after all, if a human can't pass the tests, then another sentient certainly can't pass them).

Duck test: No, I'm not. I'm saying that your starting point must be to treat it as if it is X and then prove that it's not X. If you start from the other direction, then all houses are actually just piles of bricks.

Falsifiability: Given that we're talking about thought experiments, why wouldn't they? Or how about this: there was a fire and the records got destroyed. Thought experiments by their very definition are not real and normally don't make sense in a real-world scenario (who the hell would go to the trouble of making something that reacts to all stimuli as if it were a real person but can never understand what it's reacting to?).


Re: Programming AI does not entail an understanding of the brain nanikore May 19 2016, 06:32:54 UTC
Logical assertion is a step in a series of steps in a logical proof. What you offered was a flat assertion in the form of:

An AI would have subjective experience; therefore, an AI would have subjective experience.

Contrast with what I performed, which was inductive logic:

I am conscious.
I am a member of Homo sapiens sapiens.
Therefore, Homo sapiens sapiens is conscious.

I prefer to deal with that first before branching endlessly into other points.

Exactly which definitions of the terms "intentionality" and "qualia" contain "markers limiting it to carbon-based life forms," whatever that means?

There can be more layers to the index to give rooms an appearance of learning. Google's search algorithm learns, and it's not conscious. Apple's Siri is slated to acquire a learning algorithm, and it's not going to be conscious either.
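To sketch what I mean (my own toy illustration in Python; the strings are placeholders): the "learning" amounts to nothing more than writing new rows into the index, and the lookup itself never involves comprehension.

    # An index that "learns" by gaining rows. Still just a lookup table.
    index = {"ni hao": "ni hao ma"}

    def respond(symbols):
        return index.get(symbols, "qing zai shuo yi bian")   # canned default row

    def add_layer(symbols, reply):
        index[symbols] = reply           # the "learning": a new table row

    add_layer("zai jian", "zai jian")
    print(respond("zai jian"))           # looks learned; it's just a stored row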

Good grief. An English Room would do the same thing as a Chinese Room. There is no processing, only a lookup table, just like the Chinese Room. "Preprocessing" is a red herring unless you want to assert consciousness for things such as OCR. Captcha readers aren't conscious.

For example, the above paragraph would look something like this on a monitor inside the room:

47hf e3i2k d3 s29ff9 tufuj 3894 f98 34 d83j 4889 ad 48 34902 d9u0sk 38kd k2i85 283ki 48jil 983j 48k 38kd 38slk 37kf 38kl sj4

...and the response, when confronted with that string, would be yet another string of code typed into the keyboard, looked up from an infinitely large cheat sheet (an analog of a program). There is no comprehension in the lookup, only the response the lookup yields. It's a thoroughly mechanical task, beginning with scanning all the entries starting with "47hf" and so on.
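In code form, the whole procedure amounts to something like this (a minimal sketch; the strings merely stand in for cheat-sheet entries):

    # The room's entire "comprehension": scan the cheat sheet for an entry
    # whose first block of symbols matches, and emit the stored response.
    cheat_sheet = {
        "47hf e3i2k d3": "9dk3 47ja 02k",
        "92jd 8sl2 k3j": "3jd9 27dh s8k",
    }

    def room(incoming):
        for entry, response in cheat_sheet.items():
            if incoming.split()[0] == entry.split()[0]:   # starts with "47hf" etc.
                return response
        return ""                                         # no matching entry

    print(room("47hf e3i2k d3 s29ff9 tufuj"))   # mechanical output, zero comprehension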

The so-called "clone" isn't conscious. That's why it's not a clone but a p-zed. You've built the thing, but you didn't go into how you did it.

There is absolutely zero point in other people knowing something or not; it doesn't change whether something is objectively true.

Is there a God, or isn't there? We don't know. Yet it still remains that God either exists or doesn't. We can't falsify God.

Did OJ Simpson kill his wife? People don't really know. The defense argued that if the (blood-shrunken) glove didn't fit, then OJ must be acquitted. There have been plenty of bad cases.

You can remove all proof of the p-zed being made, yet the fact would still remain that there is a p-zed.

Red herring, as I've stated so many times before.

There is already proof that Chinese Rooms are not Chinese speakers.



