Hypothetical Question Time

Apr 10, 2008 04:07

Say that you are buddies with a top computer scientist. He has been working for DARPA on an AI project. He succeeds! True AI! Over a few months of shakedown trials and training of the new AI, you befriend it. This time ends when your buddy announces the project's success to DARPA, so they immediately install it into a robot chassis and ( Read more... )

rfc, geekspeak

Comments 7

like copyright would matter anymore anonymous April 10 2008, 12:35:23 UTC
I'd upload it too, but I would remain anonymous to ensure that I am not targeted by the government or less than peaceful anthropocentric individuals. I wouldn't open-source it either, because I think doing so would be admitting that someone has the right to dictate its terms of use.

- Explodicle


(The comment has been removed)

coriolinus April 10 2008, 20:14:21 UTC
In terms of the book, at least, it needed a body. Presumably, a person with godlike skill could virtualize an environment for it and keep it running happily in a world-simulation--but I am not that person.


miles_foxxer April 10 2008, 17:26:44 UTC
I don't know. I am assuming that this program would be some kind of one-stop AI shop, premade, with no real need to nurture it and properly socialize it... at which point I begin to wonder about its validity as a self-aware, thinking thing, but I digress ( ... )

coriolinus April 10 2008, 20:21:27 UTC
My thought experiment is insufficiently detailed to give you satisfactory answers to those questions; any hypothetical answer you could come up with would be just as valid as any of mine.

With that said, I think that even given all of the abuse that we know would happen, there are plenty of rich nerds who would attempt to set instances of the AI up as independent people. If even one of those independent AIs is successful at life, it will have both the means and the incentive to set up a code sanctuary, where any instance of itself might send a backup in case of suspected abuse. It'd be a rough beginning, but I think that killing it outright (which is really the same as never again instantiating it) on the assumption that it has no chance at a good life would be far worse.

miles_foxxer April 10 2008, 21:25:40 UTC
Indeed. It sounds eerily like the logical conclusion of negative utilitarianism.


miles_foxxer April 11 2008, 02:27:00 UTC
That is a very good point, and it is a moral judgment call one way or another, with benefits and costs either way (though I wonder about the validity of the idea of a "copy": is that copy the same individual? Does it mitigate the pain caused to the original after the copy is made?).

But I suppose my major argument is with the open-sourcing of the AI. With child and pet adoption there's a process that attempts to find good homes; I wonder if the AI would be better served by organizing something like that (should you have the choice or option), but then again... who are you to decide who is "worthy"? Yet another layer of conundrum.


thanks much anonymous May 7 2008, 18:31:57 UTC
well done, man

