A while back, Scott Aaronson posted his slides from a "crazy idea session"
asking why theoretical physicists get $8 billion machines while theoretical computer scientists are funded far less generously. What might theoretical computer scientists find to do with $8B in hardware? His suggestion is to compute some currently unknown lower bounds--- like the minimum number of operations needed to compute the determinant of a 4x4 matrix.
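To get a feel for what that search even looks like, here's a minimal sketch (my own toy, not anything from Scott's slides) scaled all the way down to the 2x2 determinant ad - bc. It enumerates straight-line programs over {+, -, *} and accepts a candidate when it agrees with the determinant on random inputs modulo a prime, a standard randomized identity test; it makes no attempt at constants, division, or symmetry pruning.

```python
import itertools
import random

P = 2**31 - 1   # prime modulus for randomized identity testing
OPS = {
    '+': lambda x, y: (x + y) % P,
    '-': lambda x, y: (x - y) % P,
    '*': lambda x, y: (x * y) % P,
}

def det2(a, b, c, d):
    """The 2x2 determinant, our target polynomial."""
    return (a * d - b * c) % P

def run(program, inputs):
    """Evaluate a straight-line program; the last value is its output."""
    vals = list(inputs)
    for op, i, j in program:
        vals.append(OPS[op](vals[i], vals[j]))
    return vals[-1]

def shortest_program(max_len, trials=20):
    """Brute-force search for the shortest straight-line program computing
    det2. A program is a tuple of (op, i, j) steps, where i and j index the
    four inputs (0-3) followed by the results of earlier steps."""
    tests = [tuple(random.randrange(P) for _ in range(4)) for _ in range(trials)]
    targets = [det2(*t) for t in tests]
    for length in range(1, max_len + 1):
        # Step k may read the 4 inputs plus the k earlier results.
        choices = [[(op, i, j) for op in OPS
                               for i in range(4 + k)
                               for j in range(4 + k)]
                   for k in range(length)]
        for program in itertools.product(*choices):
            if all(run(program, t) == target
                   for t, target in zip(tests, targets)):
                return program
    return None

print(shortest_program(3))
# -> (('*', 0, 3), ('*', 1, 2), ('-', 4, 5)): compute a*d and b*c, then subtract
```

Even this toy enumerates a few hundred thousand candidate programs before finding the three-step answer; the same search for the 4x4 determinant blows up so badly that you can see why Scott is eyeing LHC-sized budgets.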
Without getting into the why-does-physics-get-the-big-toys debate, I think Scott is a little off the mark here, because the sort of data you get from the Large Hadron Collider is quite different from what you'd get by establishing a specific lower bound. Over its lifetime the LHC will produce enormous amounts of data and serve a large community of physicists. Knowing the answer Scott is looking for would help few computer scientists and produce little data.
Of course the whole exercise is tongue-in-cheek, but what mathematics or computer science projects would be more LHC-like? Or Human Genome Project-like?
Perhaps Aaronson's approach is too narrow--- rather than discovering one particular minimal algorithm, we should be building up an entire library of known lower bounds. But even that wouldn't advance the state of knowledge much. Verifying mathematical conjectures up to bigger and bigger sizes is always a favorite pastime, but again it doesn't seem to hold broad interest for the mathematical community (unless a counterexample were found).
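To be concrete about what that pastime looks like, here's a toy sketch using the Collatz conjecture as an arbitrary example (my pick, purely for illustration). One classic shortcut: a trajectory can stop as soon as it falls below its starting point, because every smaller value was already checked.

```python
def verify_collatz_below(limit):
    """Check that every n below limit eventually reaches 1 under the
    Collatz map (halve if even, else 3n+1)."""
    for start in range(2, limit):
        n = start
        while n >= start:          # loops forever only if Collatz fails here
            n = n // 2 if n % 2 == 0 else 3 * n + 1

verify_collatz_below(10**6)
print("Collatz verified for all n below 10^6")
```

Scaling that loop up by many orders of magnitude is exactly what big hardware buys you; the catch, as above, is that nobody much cares unless the loop fails to terminate.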
If I had an $8B research grant I'd probably just use it to "research" the effects of true high-speed Internet on local communities. :) It's a sum too huge for even the most bloated and inefficient hardware development project. (Sun spends about $2B/year on R&D.) $8B might answer some interesting questions about poker theory, but the return wouldn't justify the investment. Developing an AI isn't something we can just throw $8B at and start work on--- it's a project that requires time and insight, I think.