The Chinese Room demonstrates that the outward appearance of understanding meaning is by no means an actual indicator of understanding.
Bona fide understanding is a main feature of conscious thinking. If something is not conscious, it is not possible for it to understand.
What goes on inside the Chinese Room is an analog of programming.
Likewise, humans are unable to demonstrate consciousness to other humans. You are most likely a bot pretending to be human.
Reply
However, that distinction holds only prima facie, on the condition that we do not know the origin of said p-zombie.
If we know a p-zombie to be a p-zombie, then its identity as a non-sentient thing would be established.
If we know, for example, that some thing is programmed instead of a result of a life process (e.g. birth, which instills native intelligence instead of artificial intelligence) then we would know that it is not sentient.
None of us would demonstrate such self-deception and other-deception as to knowingly mis-categorize, at least I hope not.
Reply
Is there a reason you think that?
Reply
There is another categorical difference, that of performance. I'm not thinking of the issue in terms of performance; I'm concerned with whether x is conscious or not, not with whether x is somehow more or less powerful than y.
Reply
How can we possibly know that?
If we know, for example, that some thing is programmed... then we would know that it is not sentient.
Why? Why can't a "programmed thing" be sentient, and even conscious? You just said that we don't know the nature of consciousness, yet you immediately claim that: "We can view the human body as analogous to a machine, but not the mind." Why not?
Reply
None of the software programs we see today, including ones driving bottom-up AI, are conscious. Since consciousness does not derive from complexity, we're not going to get AI by creating more complex programs. Expert systems in use today possess the usual artificial intelligence but no consciousness. Some chatbots score upwards of 50% in Turing evaluations, but they're not conscious at all. It would be the same even if they scored 100%, just as a theoretical Chinese Room does.
If complexity gives rise to consciousness, then cell phones today would be conscious.
A8 processor in Apple iPhone 6: 2 billion transistors
House mouse: 71,000,000 neurons
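The arithmetic behind this comparison can be sketched in a few lines. The transistor and neuron counts come from the comment above; the synapses-per-neuron figure is a rough order-of-magnitude assumption (not in the original), included to show how the comparison flips if you count connections rather than parts:

```python
# Illustrative arithmetic only. Part counts come from the comment above;
# the synapse estimate is a rough order-of-magnitude assumption.
A8_TRANSISTORS = 2_000_000_000   # Apple A8 processor (from the comment)
MOUSE_NEURONS = 71_000_000       # house mouse (from the comment)
SYNAPSES_PER_NEURON = 1_000      # assumed rough average, for illustration

mouse_synapses = MOUSE_NEURONS * SYNAPSES_PER_NEURON

print(A8_TRANSISTORS > MOUSE_NEURONS)   # more transistors than neurons
print(mouse_synapses > A8_TRANSISTORS)  # but far more synapses than transistors
```

Either tally is a crude measure; the point is only that a single part count does not settle which system is more complex.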
Reply
2. None of the software programs today ... are conscious.
That does not mean no software program tomorrow will be conscious.
3. Complexity is not defined by the number of parts alone. The structure of, and interactions between, the parts also matter. Most experts would agree that a mouse brain is more complex than any processor today.
p.s. Just as we don't have any good definition of consciousness, we don't have a good definition of complexity. Until we do, I'm not sure we will be able to get very far in this discussion.
Reply
3. Which experts, specifically, and exactly which statements? Consciousness does not arise out of complexity of any kind. Do you know how complex, for example, a space launch system is?
I have stated the criteria for consciousness above in the other subthread.
A conscious entity, i.e. a mind, must possess
1. Intentionality
2. Qualia
Reply
I'm fairly confident that any expert on mouse brains, or any expert on computer chips, will agree with me that a mouse brain is more complex than any modern computer chip, by any definition. If you doubt it, I encourage you to reach out to any experts you would trust.
Consciousness does not arise out of complexity of any kind
There's no evidence to support this statement. However, evolution provides clear evidence to the contrary: whatever consciousness is, it is present only in the most complex organisms.
1. Intentionality
2. Qualia
I'm sorry, but these words don't mean much to me. Care to explain in plain English?
Reply
A human being does absolutely nothing without outside instruction. We are programmed through evolution. What makes you think a human being is anything more than a computational machine?
I had to appeal to authority. I know enough about brains and chips to tell you that a mouse brain is more complex than any chip. If you don't believe me, ask someone you trust.
The problem with the rest of your comment is that it can equally be applied to a complex enough machine. You still haven't provided any evidence that whatever is going on in our brains is anything more than a complicated pattern of doorknob turns.
Also, I recommend you look at the latest advances in machine learning. You will see that relatively simple models already have the capacity to acquire pretty good representations of objects just by processing enough data.
As someone else has mentioned, your arguments are weak.
Reply
If, then, else, et cetera. Syntax.
Code doesn't ever escape the Chinese Room.
Model a perfect responder to Chinese input, and absolutely no inroads are made in the creation of consciousness.
Alpha Go is still not conscious.
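The syntax-only point can be sketched with a toy lookup-table responder. The rule book entries below are hypothetical; the program matches the shapes of symbols and emits replies without representing any meaning:

```python
# A toy "Chinese Room": purely syntactic symbol manipulation.
# The rule book is a hypothetical lookup table; the program matches
# symbol shapes and emits replies without any grasp of meaning.
RULE_BOOK = {
    "你好": "你好！",            # greeting -> greeting
    "你会说中文吗？": "会。",     # "Do you speak Chinese?" -> "Yes."
}

def room(symbols: str) -> str:
    # Follow the rules mechanically; no understanding is involved.
    return RULE_BOOK.get(symbols, "请再说一遍。")  # default: "Please repeat."

print(room("你好"))  # a fluent-looking reply, zero semantics
```

However many entries the table holds, the program never does anything but match strings; that is the sense in which code doesn't escape the room.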
Good grief. I've seen examples of bottom-up AI before. One experiment lets loose little robots with light sensors and has them "self-alter" their program (a misnomer; there's nothing "self" about it). The trouble with these approaches is that they are all about specific outcomes chosen by the experimenter, like plucking out the robots that don't go toward the light source and letting the rest stay. What exactly is specific about conscious behavior? Imagine letting those same robots just flounder around forever, with no guiding hands and no specific ending condition (e.g. the robot goes toward the light). There would be no "evolving" done then.
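The kind of selection loop described above can be sketched as a hypothetical toy (not the actual experiment): "robots" are numbers representing how strongly they steer toward light, and the success criterion is supplied entirely by the experimenter.

```python
import random

# Toy version of the described experiment: a "robot" is a number in [0, 1]
# for how strongly it steers toward light. The experimenter, not the
# robots, decides that light-seeking counts as success.
random.seed(0)
population = [random.random() for _ in range(20)]

for generation in range(50):
    # Experimenter-chosen criterion: keep the most light-seeking half...
    survivors = sorted(population, reverse=True)[:10]
    # ...and refill the population with slightly mutated copies.
    population = [min(1.0, max(0.0, s + random.gauss(0, 0.05)))
                  for s in survivors for _ in range(2)]

# The population converges on light-seeking only because that outcome
# was selected for; delete the sort and nothing directed "evolves".
print(sum(population) / len(population))
```

The average climbs toward 1.0 only because the sort line encodes the experimenter's chosen goal; with no selection step, the numbers just drift.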
Reply