On Thursday, in a second attempt at exploring other departments' seminar series
[1], I snuck into a cognitive science department's
talk that proved to be absolutely amazing to me.
(Pay attention, triphicus.) The speaker was
Dr. Paul Bloom from Yale. (Apparently, this was his second talk of four. I almost cried that I missed his first talk, "Bodies and Souls". His next talk is titled "Religion Is Natural".)
This particular talk was titled, "Moral Circles". Were I to have titled it, I would have called it "Moral Radii" because the discussion was really on how far removed someone or something has to be from you before you consider them morally irrelevant.
Bloom is interested in three related questions:
- What moral circle are we born with, if any?
- What causes the circle to contract?
- What causes the circle to expand?
In this entry, I'll review his talk regarding the first question. I'll cover the remaining two questions in my next entries, so consider this the start of a three-part series or lecture review.
Oddly enough, Bloom began his talk with the Parable of the Good Samaritan, pointing out that in Jesus' tale, two interesting points are present:
- Group Membership
- Physical Proximity
These two points are key to any social study of human morality.
He explained that the Parable is a normative (what a linguist would call "prescriptive") morality, whereas what he (Bloom) studies is "descriptive" -- not what should be there but rather what is there.
Bloom was not the first to use the term "moral circle". (I forget who was.) But he suggested that there may actually be separate circles. He proposed that we have one circle enclosing who/what not to harm and a second and distinct circle enclosing who/what to help. All of his studies have tested for this perhaps subtle distinction.
Bloom's studies focus on babies and young children, in an attempt to study humans before they can be taught such things as morality.
Previous studies have shown that the pain of others is aversive to babies (and also to certain other animals). This test was done by observing how babies cry more when hearing other babies crying in pain. (Now, for you experimentalists or argumentative folk, for the rest of this entry, for the sake of time, I will skip most of the controls done. In this one, for example, the control group was played recordings of their own voice crying, and the babies did not respond the same way.)
However, aversion to pain in others is not, by itself, logically linked to morality. People may simply dislike the sound, or there may be some other such explanation.
On the other hand, it has also been shown that the hearing of pain often leads to concern and action and that more empathy in humans leads to more help given by humans. Just as when we thirst, it is a biological sign to us that we should take the action of drinking water, so may our own felt empathy be a biological sign to us that we should take the action of helping the one empathized with.
Altruism has also been studied in young children. 18-month-olds showed altruism to complete strangers in many studies -- as did chimps, but not dogs. (Sorry, dogs_n_rodents. Although, to be fair, I thought the dog experiments were flawed.)
Bloom himself studies babies' moral evaluations of other agents based on the actions of those agents.
He presented video examples -- that were absolutely hilarious -- of his Helper/Hinderer Tests.
In an example of this type of test, there was a wooden ramp. Attempting to ascend this ramp was a little solid object with eyeballs drawn on it -- say, a sphere. (This was controlled by a stick through a slit in the ramp.) The poor little sphere would struggle to climb the ramp as a baby watched. Then, one of two things would occur. Either another eyed solid -- say, a pyramid -- would come and help push the poor sphere up the ramp, and the sphere would dance in excitement upon reaching the top; or another solid -- say, a cube -- would attack the poor sphere and beat him back down to the bottom of the ramp. The babies were later allowed to choose between pairs of the objects. At 10 months, they overwhelmingly chose the helper over the hinderer; at 6 months, they unanimously did. The test had other varieties, such as animal puppets helping or hindering each other at other tasks, giving similar results. They also looked at tests comparing giving/sharing with taking/hoarding. Giving puppets were always greatly preferred over taking puppets.
These tests were also done with younger babies using the "looking test", which assumes that babies look at things they prefer for longer periods of time.
This brought up the question as to whether the babies were responding positively to the helper or negatively to the hinderer -- or both.
When the babies were allowed to choose between the helper and the helped character, or between the hinderer and the hindered character, they chose the helper over the neutral character and the neutral character over the hinderer.
Interestingly, however, 3-month-olds (using the looking test, since they cannot reach for objects) showed no preference for the helper over the neutral character, perhaps indicating that the moral idea of "Do not harm others!" is programmed in earlier than the moral idea of "Do help others!"
Note that all of these experiments involved third parties; that is, these were not tests of whether the baby him- or herself preferred being helped or hindered.
Currently, Bloom is continuing his studies to test more complicated scenarios involving four or more characters in order to probe the babies' ideas of moral justice, revenge, etc. How will the babies respond to a fourth agent rewarding or punishing behavior, for example?
Setting aside the fact that watching the babies' facial expressions during this presentation was extremely entertaining, the information we can gain from it is fascinating. It certainly does seem to be the case that humans are born with morality programmed in at a certain level.
What causes this programmed morality to change? Stay tuned for more.
(As an aside: I heard a speaker recently talking about how C.S. Lewis was opposed to the idea of social psychology, feeling that man should not be studied. I think this may be one rare case where I disagree with Lewis. I wish I could have discussed this matter with him, but alas, he went and died. As I see it, if one believes in a God who created an idea of morality within each of us, why should we fear studying ourselves with science to see if it is there?)
(So, anyhow, I am definitely going to the next of these talks, which concerns the difference between psychopathic killers and normal killers.)