heron61 December 18 2015, 12:23:38 UTC
Should AI Be Open?
The more I read about fears of superintelligent AIs, the less convinced I am that we have anything to worry about (at least in terms of AIs doing their own thing, to our detriment). The basic arguments really only make sense in the event of a hard take-off singularity. If going from human-level AI to 1.5x human-level AI, and then to 2x human-level AI, takes months or years, then there's time to work with the system, solve problems, and prevent anything from going drastically wrong if it reaches impressive levels of superintelligence ( ... )

cartesiandaemon December 18 2015, 14:00:42 UTC
That's about how I feel.

Although I also wonder if someone forgot to carry a 1 somewhere, and "superhuman" is a red herring: if you look at algorithms for things like credit scoring, financial trading, and Amazon pricing, they're clearly nowhere near general AI and don't have any self-awareness, yet they have more and more power in society and are increasingly hard to do without...

andrewducker December 18 2015, 18:07:41 UTC
Oh yes, "AI" is changing society in many deep ways, even without being the slightest bit conscious.

woodpijn December 18 2015, 14:11:00 UTC
I have a philosophical objection to superhuman "conscious" AI ever being possible; but I could be wrong about that, and if I am wrong and it is possible, then I'm fairly convinced by Scott's arguments in favour of hard takeoff (e.g. how long evolution took to get from cows to early hominids versus from early hominids to us; how long it took to build a human-speed vehicle versus a 2x-human-speed one).

andrewducker December 18 2015, 18:13:26 UTC
I'm somewhat sceptical. But I also know that people have a tendency to load more and more responsibility onto systems while assuming that things will be ok.

I'm almost more nervous that we'll do it by accident, and only then realise that we're totally dependent, without having planned any of it.

heron61 December 18 2015, 22:01:08 UTC
Sure, but that's not what the AI-fearmongers are on about. They are all about making AI benevolent so that when the superintelligent AI-God erupts out of Google's forehead or whatever, it won't turn us all into paperclips or mindless but eternally happy people living in life-support pods.

If superintelligent AI is possible (which I believe is likely, though obviously not certain), I see no reason to expect anything other than a long, difficult process, as opposed to a human-level AI suddenly and mysteriously bootstrapping itself into godhood. It seems to me that if the latter were possible, the many human-level humans who have been working on AI would already have managed to create a human-level AI.

Dependency on software is already a problem - the stock market being an obvious example. However, it's at least as much a human problem as a software one.
