I think Heidegger's challenge still stands. What he's asking isn't all that controversial. It's something like, "Is human experience a kind of computation of the mind on basic inputs?" He doesn't ask that question explicitly, but that's the position he's challenging all throughout the first part of Being and Time.
It's not a wild question. There are two parts to it: "Are there raw inputs?" and "Do we compute them?" They're connected, since if there are "raw" inputs, you'd better do something with them. Kant, Hegel, Husserl, Merleau-Ponty, Sellars, Rorty, Brandom, and McDowell have all challenged the inputs dimension. Hegel, Marx, Nietzsche, Sartre, Adorno, Dreyfus, and Brooks have all challenged the computation dimension. This doesn't exactly break down along culture war lines.
It's a reasonable question that reasonable people are led to if they think about experience and information. There's no excuse in 2010 to react with scorn and dismissiveness toward Heidegger. If Being and Time is too hard for you, maybe you should pick something easier than philosophy?
Dreyfus is Heidegger's loudest and (with regard to information technology) most relevant follower. What he said in the 1970s about the AI program turned out to be true. But he's a naysayer. He thinks the whole program is doomed to failure because there's something special about human experience (following Heidegger, let's call it "facticity") which defies computation. In other words, no matter how hard you try, you're never going to create a strong AI, because the basic foundation or Urgrund of human existence is non-computable.
Be that as it may. That's not the real problem. The real problem is that facticity doesn't matter. What if you created a model of human experience that wasn't human experience? In other words, what if you were able to create something which matched, from a purely behavioral point of view, human subjectivity, but which had no actual subjective interiority? What if you were able to create, from inert, "dead" matter, something which was able to fool everyone else into thinking it was a person with an inner life, a being to whom belonged its own, personal experience, and yet there was no such thing inside?
What if the entire human race (or a large enough portion of it) convinced itself to switch over to such a mode of existence or way of being? What if, going by all outward appearance, no one could tell the difference?
That's what's really at stake here. Not whether machines are capable of thought, or whether a strong AI is possible. Thinking is a behavioral phenomenon. Given the right theoretical perspective, my ballpoint pen is capable of thought, since I can click the button on the end and switch it between an "on" state (point extended) and an "off" state (point retracted). As for strong AI, I count myself among those who believe a machine will pass the Turing Test around 2030. Whether it happens in 2030, 2040, 2140, or 2015, there's no philosophical puzzle here. There's nothing worth thinking about in the philosophical sense, since there's nothing at stake in a mere technical problem. It's like asking when someone will be able to run my Perl script out to the ten-trillionth term in under 5 seconds. Who cares? The real question is: What is the fate of interiority? Does inner truth matter? Does subjectivity have a future?