Dec 24, 2016 12:00
I agree that some of the arguments against AI panic are dodgy, but that makes sense given that the author (like me) finds the entire AI-panic belief set to be ludicrously wrongheaded. I've read both the article and the rebuttal, and to me there are two key points to the argument:
1) "I live in California, which has the highest poverty rate in the United States, even though it's home to Silicon Valley. I see my rich industry doing nothing to improve the lives of everyday people and indigent people around us.

But if you're committed to the idea of superintelligence, AI research is the most important thing you could do on the planet right now. It's more important than politics, malaria, starving children, war, global warming, anything you can think of."
This is (from my PoV) crucial, and it doesn't even mention the fact that climate change could end or seriously damage industrial civilization and/or result in hideous resource wars. We know climate change is a very real and fairly immediate threat, so worrying instead about a threat that is both merely potential and at minimum 20 years in the future seems to go well beyond foolish. I'd love to see more tech folks getting into sustainable energy and practical geoengineering rather than worrying about events that are guaranteed not to happen if climate change goes as badly as some researchers believe.
2) "There's a parlor trick, too, where by multiplying such astronomical numbers by tiny probabilities, you can convince yourself that you need to do some weird stuff."
This, to me, is the other key point. I view AI panic much the way I view discussions of (to me at least) utter nonsense like philosophical zombies. Part of the problem comes down to the fact that even a fair number of exceedingly intelligent and well-educated people seem to believe that just because something can be imagined and may be possible (but is far from proven to be possible), it's worth spending time worrying about or seriously considering, rather than treating it as a fun intellectual game with no relation to the physical world.
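To make the parlor trick concrete, here is a minimal back-of-envelope sketch in Python; every number in it is invented for illustration and comes from neither the talk nor the rebuttal:

# Illustrative only: how multiplying huge payoffs by tiny probabilities
# lets a speculative cause "win" a naive expected-value comparison.
# All figures below are made up for the example.
causes = {
    # name: (probability the work matters, lives claimed to be at stake)
    "malaria nets":       (0.9, 1e6),     # well understood, near-certain impact
    "climate mitigation": (0.5, 1e9),     # large, fairly well-evidenced stakes
    "AI risk research":   (1e-10, 1e20),  # tiny odds, astronomically large claimed payoff
}

for name, (p, lives) in causes.items():
    print(f"{name:20s} expected lives saved: {p * lives:,.0f}")

The speculative line "wins" (1e10 versus 9e5 and 5e8) only because someone picked a big enough payoff number, which is exactly the move the quoted passage is objecting to.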
There's also the other argument, which I've mentioned before and didn't see in the original piece: AI panic only makes sense with respect to hard take-off singularities, which to me seem vanishingly unlikely without the prior development of Drexlerian self-replicating nanotech (which may be impossible, for reasons ranging from thermodynamics to the prior existence of hungry and fast-evolving biological replicators, and which is clearly not easy). Without that, unless you believe (with little or no evidence) that greatly increased intelligence requires nothing more than drastically improved software, going from human x 1 to human x 1,000, or even to human x 1.2, would require improved hardware that must be ordered, shipped, and installed, one piece at a time, for each iterative improvement. In some cases, computer-designed hardware would also need to be manufactured, meaning each iteration could take anywhere from a week to a year or more, which seems to be plenty of time to notice and correct problems.
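To put rough numbers on that last point, here is a small Python sketch; the 20% gain per generation and the cycle times are my own illustrative assumptions, not figures from the original piece:

import math

# Assumption (mine, illustrative): each self-improvement cycle multiplies
# effective intelligence by a fixed factor, and each cycle is gated on
# ordering, shipping, and installing new hardware.
gain_per_iteration = 1.2   # 20% improvement per hardware generation
target_multiple = 1000     # human x 1,000

# Iterations needed so that gain_per_iteration ** n >= target_multiple.
iterations = math.ceil(math.log(target_multiple) / math.log(gain_per_iteration))

for weeks_per_cycle in (1, 8, 52):  # optimistic week, ~2 months, a full year
    years = iterations * weeks_per_cycle / 52
    print(f"{iterations} iterations at {weeks_per_cycle} wk each: ~{years:.1f} years")

Even the absurdly optimistic one-week cycle gives 38 externally visible procurement steps spread over most of a year, which is the "plenty of time to notice and correct problems" point.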