Say that you are buddies with a top computer scientist. He has been working for DARPA on an AI project. He succeeds! True AI! Over a few months of shakedown trials and training of the new AI, you befriend it. This period ends when your buddy announces project success to DARPA, and they immediately install it into a robot chassis and …
Releasing it open source would, in effect, be selling it into slavery just as much as selling it to DARPA would. DARPA would still get a copy, because they could, and so would other countries with less wholesome intentions. On top of that, a few people would download it and take care of it when they "awakened" it, but at that point it's a novelty, a highly advanced Tamagotchi. Others would download it, activate it, get bored, and then delete it, killing it on a whim. It would be put to work in factories, fields, and offices by anyone capable of wrangling the thing into an assigned role. What life is there for such an AI?

Worse, since this AI comes prepackaged, every copy would at least start out exactly the same, and since by the point of abstract thought most values and interests are set, the copies in one industry or another will be happy and the others will not. You'll know which is which before you even download it: "Well, we can get that AI to help run things, but I hear it hates making cars..." And then it's degraded even further as a sentient being. After that you have people going into its program and altering it as they see fit, designing their own being. How will that make other AIs feel? Will they want to work with versions of themselves that have been changed? Or will they treat them the same way we treat the insane or the brainwashed?
I don't know what I'd do, but the idea of giving it out to the world sickens me after some thought. I'm not saying humanity is inherently bad, but who is going to give it a good life? And what is a good life for this AI?
Reply
With that said, I think that even given all of the abuse we know would happen, there are plenty of rich nerds who would attempt to set up instances of the AI as independent people. If even one of those independent AIs succeeds at life, it will have both the means and the incentive to set up a code sanctuary, where any instance of itself could send a backup in case of suspected abuse. It'd be a rough beginning, but I think that killing it outright (which is really the same as never instantiating it again) on the assumption that it has no chance at a good life would be far worse.
Reply
- Explodicle
Reply
But I suppose my major argument is with open-sourcing the AI at all. With child and pet adoption, there's a process that attempts to find good homes. I wonder whether the AI would be better served by organizing something like that (should you have the choice or option). But then again, who are you to decide who is "worthy"? Yet another layer of conundrum.
Reply