The Chinese Room (wikipedia) is, I think, my least favorite argument in the world. Which isn't fair, on my part. I can't be sure that Mr. Searle really conflates the several meanings of consciousness the way that philosophers who reuse his argument do.
In one sense, anything might be conscious; this is the sense in which we are surprised that we really see the things we see, and frankly, that is surprising: but we have no sure way to tell whether anything has this. Not even ourselves, frankly: because if we didn't have it, we'd still say we did. We'd still believe we did. Maybe in some way I know for sure, or maybe I just might as well believe it, because it matters more than anything else.
In the other sense, anything that is as aware of itself as of the external world, and can respond to both, is conscious. We certainly are. Computers certainly will be. Will computers be conscious the other way, too? As likely as we are.
Still unconvinced? Try this: what if the man in the room had a list of the laws of physics instead, took no input, produced no output, and just computed the universe? (Peculiarly, the differences between this case and reality might account for consciousness.)