Hypothetical Question Time

Apr 10, 2008 04:07

Say that you are buddies with a top computer scientist. He has been working for DARPA on an AI project. He succeeds! True AI! Over a few months of shakedown trials and training of the new AI, you befriend it. This time ends when your buddy announces project success to DARPA, so they immediately install it into a robot chassis and ( Read more... )

rfc, geekspeak

Leave a comment

miles_foxxer April 10 2008, 17:26:44 UTC
I don't know. I am assuming that this program would be some kind of one-stop AI shop, premade, with no real need to nurture it and properly socialize it... at which point I begin to wonder about its validity as a self-aware, thinking thing, but I digress.

Releasing it open source would, in effect, sell it into slavery just as much as selling it to DARPA would. DARPA would still have it, because they could get it, and so would other countries with less wholesome intentions. On top of that, you'd have the few people who would download it and take care of it when they "awakened" it, but at that point it's a novelty, a highly advanced Tamagotchi. Then there would be the people who download it, activate it, get bored, and delete it, killing it at a whim. It would be used in factories, fields, and offices by anyone capable of wrangling the thing into an assigned role. What life is there for said AI?

Especially since this AI is prepackaged, all these AIs would at least start out exactly the same, and since by the point of abstract thought most values and interests are set, some of the AIs in one industry or another will be happy and the others will not. You'll know that after a little while, before you even download it: "Well, we can get that AI to help run things, but I hear it hates making cars..." And then it's even further degraded as a sentient being. After that, you have people going into its program and altering it as they see fit, designing their own being. How will that make other AIs feel? Will they want to work with other versions of themselves that have been changed? Or will they treat them the same way we treat the insane or the brainwashed?

I don't know what I'd do, but the idea of giving it out to the world sickens me after some thought. I'm not saying humanity is inherently bad... but who is going to give it a good life? And what is a good life for this AI?


coriolinus April 10 2008, 20:21:27 UTC
My thought experiment is insufficiently detailed to give you satisfactory answers to those questions; any hypothetical answer you could come up with would be just as valid as any of mine.

With that said, I think that even given all of the abuse that we know would happen, there are plenty of rich nerds who would attempt to set instances of the AI up as independent people. If even one of those independent AIs is successful at life, it will have both the means and the incentive to set up a code sanctuary, where any instance of itself might send a backup in case of suspected abuse. It'd be a rough beginning, but I think that killing it outright (which is really the same as never again instantiating it) on the assumption that it has no chance at a good life would be far worse.


miles_foxxer April 10 2008, 21:25:40 UTC
Indeed. It sounds eerily like the logical conclusion of negative utilitarianism.

- Explodicle


miles_foxxer April 11 2008, 02:27:00 UTC
That is a very good point, and it is a moral judgment call one way or another, with benefits and costs either way (though I wonder about the validity of the idea of a "copy": is that copy the same individual? Does it mitigate the pain caused to the original after the copy is made?).

But I suppose my major argument is against the open-sourcing of the AI. With child and pet adoption, there's a process that attempts to find good homes; I wonder if the AI would be better served by organizing something like that (should you have the choice or option). But then again... who are you to decide who is "worthy"? Yet another layer of conundrum.

