
Oct 03, 2008 11:29

Another poll - sorry, I still haven't gotten around to writing a decent post!

Poll: Morality

ETA: I've been thinking lately that since there is no objective morality, we should try to work out another approach to ethics. I'm not sure whether this approach would be based on utilitarianism, sympathy/empathy, enlightened self-interest or what. Perhaps the poll...


andrewducker October 3 2008, 12:18:23 UTC
I was on an extropian mailing list with Yudkowsky a long time ago, and I see he's still talking himself largely around in circles. He's built his ontologies in such a complex way that everything requires 5000 words to explain. Shame, as he's clearly very smart.

I'm with you on morality - I think it's personal and interpersonal.


pw201 October 3 2008, 13:16:21 UTC
I think his key insight is that there's no moral bedrock separate from the big computation we already carry around in our heads, a computation which compels us to action ("created already in motion", in his phrase) and which covers both what we think we should do and what arguments might change what we think we should do (q.v. the example of whether someone would take a pill that made them think it was right to eat babies).

He claims this is not moral relativism for reasons I don't understand. I think his point is that you judge any morality by where you're standing already, so there is no value-free position from which to judge, therefore each of us considers that starting position as a single-valued function (of the situation, not of the situation and the person considering it). I think his approach is what most people consider moral relativism, but it lacks the namby-pamby "we shouldn't judge other cultures" stuff which goes along with it sometimes.


meihua October 3 2008, 13:28:09 UTC
If he isn't talking to the Singularity Institute, he should be!

This idea of a "constant computation" which not only "doesn't want to do bad things" but also "doesn't want to change itself in such a way that it would then want to do bad things" is core to the SIAI's ideas about how Friendly AI should work.


meihua October 3 2008, 13:29:06 UTC
*laughs* And I just read your post again and saw the word "Yudkowsky" (rather than skimming), so, ah, yeah, I reckon he's talking to the SIAI! :D


andrewducker October 3 2008, 14:15:19 UTC
I think his approach is what most people consider moral relativism, but it lacks the namby-pamby "we shouldn't judge other cultures" stuff which goes along with it sometimes.

Exactly. He's weaseling out of calling it moral relativism, when it clearly is.

I had similar arguments with him many years ago, and he was anti-moral-relativism then too - so it's probably a left-over reflex against it :->


