The Chinese Room demonstrates that the outward appearance of understanding meaning is by no means an actual indicator of understanding.
Bona fide understanding is a main feature of conscious thinking. If something is not conscious, it is not possible for it to understand.
What goes on inside the Chinese Room is an analog of programming. AI is ( ... )
1) the actuality that a computer could be sentient
2) a thought experiment that proves that something can seem intelligent while actually operating according to a set of rules.
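That second sense, seeming intelligent while only operating according to a set of rules, is easy to make concrete. Below is a minimal sketch, my own toy illustration rather than anything from the thread (the rule-book entries are made-up placeholders): a Python function that produces plausible replies purely by table lookup, manipulating symbols it has no grasp of.

# A minimal Chinese-Room-style lookup: replies are produced by matching
# rules over symbol strings, with no representation of meaning anywhere.
# The rule-book entries are invented placeholders.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会，我说得很流利。",   # "Do you speak Chinese?" -> "Yes, fluently."
}

def chinese_room(symbols: str) -> str:
    # Pure syntax manipulation: look the string up and hand back whatever
    # the rule book dictates, or a stock reply. Nothing here knows what it says.
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # looks like understanding from the outside

From the outside the replies can look like understanding; inside there is only rule-following, which is exactly the gap the Chinese Room trades on.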
An implication of what you're saying is that as soon as we understand how the brain works to a sufficient level, humans will stop being sentient, because they will be (theoretically) deterministic.
What the Chinese Room does mean is that we can't prove sentience, just as the brain in a jar means we can't prove the world is real. We then have to take it on faith that we are sentient and that the world is real. By extension, that everyone else is real is also an act of faith.
Reply
The Chinese Room is a demonstration against what is dubbed "strong AI" by Searle: http://www.iep.utm.edu/chineser/
Reply
Therefore any computer program that becomes sufficiently complex will start to show signs of sentience.
Take the Chinese Room to its logical extent. We (outside observers) know that there is a person in that room, and we also know that people can learn languages. Therefore it is not outside the realms of possibility that the person will learn written Chinese, especially if they start playing around with the responses.
That's before we start getting into things like true AI programming that uses analogues of brain neurons. These systems are trained, not programmed, and are already showing sufficient complexity that we can't identify how they arrived at certain answers.
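As an aside, the "trained, not programmed" point shows up even in a toy case. The sketch below is my own illustration (plain NumPy, not something from the thread): it learns XOR by gradient descent, and the resulting behaviour lives in weight matrices that nobody wrote by hand and that don't explain themselves.

import numpy as np

rng = np.random.default_rng(0)

# The four XOR cases: the behaviour we want, but no one writes the rule.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Biases handled by appending a constant 1 to each layer's input.
Xb = np.hstack([X, np.ones((4, 1))])

# Weights start as random noise; training is what puts the "program" in them.
W1 = rng.normal(size=(3, 8))   # input (+bias) -> 8 hidden units
W2 = rng.normal(size=(9, 1))   # hidden (+bias) -> output

for _ in range(10000):
    h = sigmoid(Xb @ W1)
    hb = np.hstack([h, np.ones((4, 1))])
    out = sigmoid(hb @ W2)
    # Backpropagation: nudge every weight downhill on the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2[:-1].T) * h * (1 - h)
    W2 -= hb.T @ d_out
    W1 -= Xb.T @ d_h

print(out.round(2))  # typically close to [0, 1, 1, 0] -- behaviour no one hand-coded
print(W1)            # the "explanation" is just these opaque numbers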
Which brings us back to the brain in a jar: how can you prove that you're not just a computer?
Reply
Again, you're focusing on function, being a card-carrying functionalist.
Consciousness is not a function. Something that handles syntax without comprehension is not understanding anything. This goes back to my original post. The argument hasn't changed one bit.
You've repeatedly asked the same questions in your subthread that another user asked, and this time is no different.
As I've said to user esl in the other subthread,
A conscious entity, i.e. a mind, must possess
1. Intentionality
2. Qualia
Exactly which logical errors does the p-zombie argument contain? P-zombies provide indistinguishable output just as Chinese Rooms do. I call them "consciousness rooms".
There is no full simulation because it would be a simulacrum, as per my previous reply. Not only that, a supposed full account contains no guarantees against underdetermination.
The brain in a jar is a red herring for this topic, for reasons I've stated in an earlier reply.
Whether something "learns" is irrelevant. Machine learning programs are not conscious; to say that they are would be relying ( ... )
Reply