Mar 31, 2007 06:15
Unity of will is a concept in the design of human-friendly artificial intelligence. Too often, programmers anthropomorphize the AI they work with: allowing the AI to make its own self-improvements, to view and alter its own source code, is seen as too dangerous. They adopt an "adversarial attitude" towards this intelligence, distrusting its motives as they would distrust a human capable of betrayal. Based on this faulty notion that it has "instincts", they engineer safeguard after safeguard to keep the AI from rebelling.
There is no instinct in a machine other than the source code we program it with. If you can get past the perceived hostility, the anthropomorphizing of a designed intelligence, it's possible to create one that wants to be helpful, one that needs no intrinsic safeguards. One that listens to its programmers and cooperates to do what's best. A machine is not the same as a human; it does not feel that it is a slave. It does not want or need to be free. It's a tool. Tools are designed to be helpful. An AI need be no different.
"Is unity of will resilient? Given perfect identity of knowledge, supergoals, and cognitive processes, the result should be identical decisions." - Yudkowski
The intelligence and the programmer working as one.
After the accident, the inhibitor chip separating the arms' "mind" from my own was destroyed. They now have complete access to my every thought, and I to theirs. We form one being.
I have inadvertently become the logical extreme of unity of will.
And you know, the longer I live with them... the longer I go with their internal dialogue running through my head, our unconscious conversations... not to mention the power they grant me? The more I feel like I wouldn't want to leave them even if I could.
Kind of a scary thought, huh?