It is not possible to have a conscious AI

Apr 25, 2016 11:12


The Chinese Room demonstrates that the outward appearance of understanding meaning is by no means an actual indicator of understanding.
Bona fide understanding is a defining feature of conscious thinking. If something is not conscious, it is not possible for it to understand.
What goes on inside the Chinese Room is an analog of programming. AI is ( Read more... )

redslime April 25 2016, 19:53:44 UTC

Likewise, humans are not able to demonstrate consciousness to other humans. You are most likely a bot pretending to be human.

nanikore April 26 2016, 05:13:22 UTC
You are correct that we could not distinguish between a p-zombie and a bona fide sentient being.

However, that distinction only holds prima facie, under the condition that we do not know the origin of said p-zombie.

If we know a p-zombie to be a p-zombie, then its identity as a non-sentient thing would be established.

If we know, for example, that some thing is programmed rather than being the result of a life process (e.g. birth, which instills native rather than artificial intelligence), then we would know that it is not sentient.

None of us would demonstrate such self-deception and other-deception as to knowingly miscategorize it, at least I hope not.

esl April 26 2016, 20:35:08 UTC
You seem to believe that the programming language used to program biological machines (DNA encoding) is somehow more powerful than any of the general purpose programming languages we use to program general purpose computers.
Is there a reason you think that?

nanikore April 27 2016, 09:47:26 UTC
Research into the nature of consciousness still hasn't had the final say as to what exactly it is, so I'm not ready to treat consciousness as part of a machine. We can view the human body as analogous to a machine, but not the mind. That's for another discussion, but I'm pointing out the inbuilt assumptions of your question. I don't treat the mind as such.

There is another categorical difference, that of performance. I'm not thinking of the issue in terms of performance; I'm concerned with whether x is conscious or not conscious, not with whether x is somehow more or less powerful than y.

esl April 29 2016, 07:29:21 UTC
If we know a p-zombie to be a p-zombie...

How can we possibly know that?

If we know, for example, that some thing is programmed... then we would know that it is not sentient.

Why? Why can't a "programmed thing" be sentient, and even conscious? You just said that we don't know the nature of consciousness, yet you immediately claim: "We can view the human body as analogous to a machine, but not the mind." Why not?

nanikore April 29 2016, 08:23:10 UTC
We know a p-zombie from its history as a constructed automaton. Of course, you can fake a person's history with things like birth certificates, but seriously, what exactly would be the point of that?

None of the software programs we see today, including the ones driving bottom-up AI, are conscious. Since consciousness does not derive from complexity, we're not going to get conscious AI by creating more complex programs. Expert systems in use today possess the usual artificial intelligence but no consciousness. Some chatbots score upwards of 50% in Turing-test evaluations, but they're not conscious at all. It would be the same even if they scored 100%, just like a theoretical Chinese Room does.

If complexity gave rise to consciousness, then today's cell phones would already be conscious.

A8 processor in the Apple iPhone 6: 2,000,000,000 transistors
House mouse brain: 71,000,000 neurons

By raw element count, the phone's chip has roughly 28 times as many parts as the mouse's brain has neurons, yet nobody claims the phone is conscious.

redslime April 30 2016, 01:11:04 UTC
I'm sympathetic to your beliefs here. However, I don't believe you've presented a convincing argument.

nanikore April 30 2016, 09:50:02 UTC
In which ways could the argument be improved, and the missing steps filled in?

esl May 2 2016, 22:28:00 UTC
1. Why can't a "constructed automaton" be conscious? Do you believe that only a biological human being can be conscious? If so, why?

2. None of the software programs today ... are conscious.
That does not mean that no software program tomorrow will be conscious.

3. Complexity is not defined by the number of parts alone; the structure and the interactions between the parts also matter. Most experts would agree that a mouse brain is more complex than any processor today.

p.s. Just as we don't have any good definition of consciousness, we don't have a good definition of complexity. Until we do, I'm not sure we will be able to get very far in this discussion.

nanikore May 2 2016, 23:11:51 UTC
1 & 2: How much programming have you done? I'm asking because it'd be much easier for us to get on the same page if you are familiar with the process of coding. Otherwise, we'd be talking past each other. You won't believe certain things about computers and programming until you've had experience firsthand.

3. Which experts, specifically, and exactly which statements? Consciousness does not arise out of complexity of any kind. Do you know how complex, for example, a space launch system is?

I have stated the criteria for consciousness above in the other subthread.

A conscious entity, i.e. a mind, must possess

1. Intentionality
2. Qualia

esl May 2 2016, 23:33:38 UTC
Yes, I've done lots of programming. At the moment, I'm coding a large neural network for text processing.

I'm fairly confident that any expert on mouse brains, or any expert on computer chips, will agree with me that a mouse brain is more complex than any modern computer chip, by any definition. If you doubt it, I encourage you to reach out to any experts you would trust.

Consciousness does not arise out of complexity of any kind
There's no evidence to support this statement. However, evolution provides clear evidence to the contrary: whatever consciousness is, it is only present in the most complex organisms.

1. Intentionality
2. Qualia

I'm sorry, but these words don't mean much to me. Care to explain in plain English?

nanikore May 3 2016, 01:07:53 UTC
Then you would know that a computing machine does absolutely nothing without outside instructions ( ... )

esl May 3 2016, 20:52:58 UTC
a computing machine does absolutely nothing without outside instructions.

A human being does absolutely nothing without outside instruction. We are programmed through evolution. What makes you think a human being is anything more than a computational machine?

I had to appeal to authority. I know enough about brains and chips to tell you that a mouse brain is more complex than any chip. If you don't believe me, ask someone you trust.

The problem with the rest of your comment is that it can equally be applied to a sufficiently complex machine. You still haven't provided any evidence that whatever is going on in our brains is anything more than a complicated pattern of doorknob turns.

Also, I recommend you look at the latest advances in machine learning. You will see that relatively simple models already have the capacity to acquire pretty good representations of objects just by processing enough data.
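
For instance, here is a minimal sketch, in Python, of how representations can fall out of nothing but co-occurrence counting; the corpus, window size, and vector dimension are all made up for illustration:

    # Toy illustration: word representations from raw co-occurrence counts.
    import numpy as np

    corpus = "the cat sat on the mat the dog sat on the rug".split()
    vocab = sorted(set(corpus))
    index = {w: i for i, w in enumerate(vocab)}

    # Count how often each pair of words appears within a 2-word window.
    counts = np.zeros((len(vocab), len(vocab)))
    for i, w in enumerate(corpus):
        for j in range(max(0, i - 2), min(len(corpus), i + 3)):
            if j != i:
                counts[index[w], index[corpus[j]]] += 1

    # A truncated SVD of the count matrix yields dense word vectors;
    # words used in similar contexts end up with similar vectors.
    u, s, _ = np.linalg.svd(counts)
    vectors = u[:, :2] * s[:2]

    cat, dog = vectors[index["cat"]], vectors[index["dog"]]
    cos = cat @ dog / (np.linalg.norm(cat) * np.linalg.norm(dog))
    print(f"cosine(cat, dog) = {cos:.2f}")  # high: their contexts are similar

No one wrote a rule saying "cat" and "dog" are alike; the similarity emerges from the data.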

As someone else has mentioned, your arguments are weak.

nanikore May 4 2016, 01:46:28 UTC
Oh please. Babies have innate intelligence from birth ( ... )

esl May 4 2016, 02:11:22 UTC
I am bored with this discussion, so let's pause it. Meanwhile, I recommend you look at the latest machine learning models, e.g. the ones that learned to play Go or Atari video games. Also, google "genetic programming"; you will be surprised.

nanikore May 4 2016, 02:22:25 UTC
Programming depends upon conditionals.

If, then, else, et cetera. Syntax.

Code doesn't ever escape the Chinese Room.

Model a perfect responder to Chinese input, and absolutely no inroads are made toward the creation of consciousness.
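
To make that concrete, here is a minimal sketch in Python of such a responder; the rule-book entries are invented for illustration:

    # A Chinese-Room-style responder: pure symbol lookup, no understanding.
    RULE_BOOK = {
        "你好吗?": "我很好, 谢谢。",      # "How are you?" -> "Fine, thanks."
        "你叫什么名字?": "我没有名字。",  # "What's your name?" -> "I have no name."
    }

    def respond(symbols: str) -> str:
        # Match the shape of the input against the rules; if no rule
        # applies, fall back to a stock reply.
        return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

    print(respond("你好吗?"))  # a correct-looking reply, zero comprehension

Whether the table has two entries or two trillion, the lookup is all there is.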

AlphaGo is still not conscious.

Good grief. I've seen examples of bottom-up AI before. One experiment lets loose little robots with light sensors and has them "self-alter" their program (a misnomer; there's nothing "self" about it). The trouble with these approaches is that they are all about specific outcomes chosen by the experimenter, like plucking out the robots that don't go toward the light source and letting the rest stay. What exactly is specific about conscious behavior? Imagine letting those same robots just flounder around forever, with no guiding hands and no specific ending condition (e.g. the robot goes toward the light). There's not going to be any "evolving" done then.
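
A toy version of such an experiment, in Python, makes the point; the fitness function and all the numbers are hypothetical:

    # "Evolving" robots toward the light: the outcome is baked into the
    # fitness function the experimenter chose. All numbers are made up.
    import random

    def fitness(position):
        # The experimenter decides what counts as success: being near
        # the light at x = 1.0. Delete this choice and there is nothing
        # to select for; the population just drifts.
        return -abs(position - 1.0)

    population = [random.uniform(-1.0, 1.0) for _ in range(20)]
    for generation in range(50):
        # Keep the half that best satisfies the experimenter's criterion...
        survivors = sorted(population, key=fitness)[10:]
        # ...and refill the population with mutated copies of the survivors.
        population = survivors + [p + random.gauss(0, 0.1) for p in survivors]

    print(f"best robot position: {max(population, key=fitness):.2f}")  # ~1.0

The "guiding hand" never leaves: it is the fitness function itself.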
