Ok, I've put this off for a while--I wonder why? The Critical Review (henceforth CR) seminar was full of smart people and was pretty much non-stop thought about important things. I expected that it would all spill over into a giant post immediately when I came back. But I guess instead I faced some burnout, or maybe just made the appropriate
Firstly, I’d like to get an idea of your conception of human nature. Personally, although I accept people aren’t perfectly rational or selfish (myriad of motivations), I think that they are generally rational and broadly self-interested (for example, I’d say that lack of information is a bigger cause of not-rational actions than irrational intentions…stymied by ignorance rather than irrational by design).
Secondly, in defense of the market, I’m going to argue that a) the market does a better job than you give it credit for - that an imperfect market can still be the best option. See all points above about skepticism about gov. actions. b) (and this is more relevant to your peak/end point) even if people don’t do a perfect job of assessing/evaluating their experiences, who else is going to do it? See central planning debate and how a technocrat has no access to the information needed to determine what people want. So, even if we are thrown off, how can anyone else step in to correct the situation?
Finally, again on the peak/end point, my intuition (I’ve done little cog sci reading) is that, if the discrepancy between the mean of the experience and the peak/end average is great, people would learn pretty quickly to distrust the peak/end average. In general, I distrust cog sci experiments because they often fail to take into account how marvelously humans learn/evolve/adapt.

For example, there’s a whole line of new anti-choice criticism that argues that humans lose utility when they are faced with a superfluity of choices. A bunch of experiments show that people are less likely to make decisions, and/or to be satisfied with their decisions, when faced with many choices rather than a few. However (among several other problems), these experiments fail to take into account how humans adapt and learn to screen out these choices. When I first went to buy shampoo I literally felt a wave of vertigo in the supermarket - so many choices and nothing to base a decision on. I had no means of evaluating the products. A couple months later I had no problem: after a couple tries I settled on a shampoo that I liked. Similarly, I am now a Starbucks expert, whereas 6 months ago I was bewildered by the menu - this is slightly different, since there’s more to distinguish the products, but it’s still that scary amount of choice.

Anyway, my whole point with this account is that I distrust cog sci experiments that are abstracted from actual existence because they often fail to take change-over-time into account.
re: your final notes
Hot - that’s about how I’d evaluate my experience: enjoyable and beneficial insofar as it’s exposed me to some ideas that I hadn’t encountered before and given me a couple new ways of thinking about things.
Reply
Firstly, I'd like to get an idea of your conception of human nature. Personally, although I accept people aren't perfectly rational or selfish (myriad of motivations), I think that they are generally rational and broadly self-interested (for example, I'd say that lack of information is a bigger cause of not-rational actions than irrational intentions...stymied by ignorance rather than irrational by design).
My conception of human nature...hmmm.
Humanity is the product of millions of years of natural selection. It is very good, relative to its competitors, at surviving and breeding in a natural environment. It has also proved to be incredibly adaptable, and continues to survive and breed in all sorts of unnatural situations as well.
As a result of this selection, humanity has several drives and several mental systems approximating "reward" and "punishment." Pleasure and pain are the results of some stimuli--those stimuli that would help with survival and reproduction in the wild.
There are also probably some things that we feel compelled to do, but which offer no real pleasure or pain as consequences.
And there are many cases where we think we will feel rewarded if we undertake a particular action, but actually end up not being rewarded.
(What I'm thinking of here is the "hedonic treadmill"--wherever you are, you will feel unsatisfied and want something more. This makes sense from an evolutionary standpoint: nobody ever got anywhere just sitting around and feeling good. We are good survivors, and crappy hedonists.)
Also, these systems of reward and punishment are not necessarily reducible to a common unit of measurement. I've got some urges, but whether I count one more highly than another depends on all sorts of things: my environment, how the question is framed, whether I'm depressed when I'm thinking about it....
[Oh, and don't forget that many of these urges and expectations are the result of successful cooperation strategies among individuals. People that share my genes are especially worth taking care of, from the perspective of a gene being selected for. Broader social interaction with non-kin groups can also be handy in many cases. Altruism, even, can pay off, even if it never does so in any personal and psychological sense.]
So, in sum: if we are taking a utilitarian moral system, then we are not naturally good at maximizing utility. In fact, the concept of utility does not actually refer to anything real in human psychology. There is no such thing as a "util" in my head. So the theory is founded on an approximation which is inaccessible to us.
Reply
Smart people, working in the abstract, have come up with methods (logic, statistics, computational algorithms) and once thought people's minds worked the way those methods did. Then psychologists discovered, with robust empirical evidence, that they don't. They called the first bit rationality; the second is our messy jumble of neurons that does, in many cases, "good enough," but is by no means the best anyone or anything can do. The nice term for it is "bounded rationality"--what is rational given a limited amount of time and computing power? We are pretty good within our natural bounds. But we can do better by getting beyond those bounds--investigating something collaboratively and carefully over a long period of time. Physics, for example, was not the creation of one Neanderthal. Neither was Austrian economics.
I forgot where I'm going.
I guess just that it looks like, in the AE model and in your model of human nature, individuals overcome their bounded rationality and their behavior converges on something really quite rational. My response is:
(a) There is no guarantee of that convergence. In fact, if it existed, why would you need a market? Or is the market a product of that convergence--one step in some eschatological coming of a long-run utopian equilibrium?
(b) When there is no convergence, you might have to do some things that violate exit in order to make things better.
(c) In general, people are so bad at judging their own "utility" that it's pretty hopeless. Great illustrative study: the comparative happiness of paraplegics and lottery winners. Have I mentioned it before? They are about the same. Happiness depends on intentional focus, not circumstances, but there's no reason why people should ever learn this naturally, since circumstances are what keep you alive and breeding in the wild. So again, naturally crappy hedonists.
I feel like all this posting today has been sort of scatter-brained and meandering, and I have no real comprehension at any particular point of the overall structure of what I'm saying. Again, sorry.
Reply
I don't think I've made any actual claims about how well the market does versus government interventions. I'm arguing that the market is imperfect, which you/Friedman/AE/CR-dogma sometimes sound like you're admitting, but sometimes rabidly attack with (sometimes a priori) arguments about how people will react to certain situations (i.e., pseudo- or long-run-rationally).
Reply
Reply
I think the "pointing out" is being done in other posts. At the time of this writing (before I've read your other comments...probably unwise) I think that you and Friedman and others have something in common in that while you're willing to say "markets are imperfect" all over the place, you then have models of human behavior in which the equilibrium is still just about perfect--people converging on what's best for them in the long run. And since the market is always correcting toward this utopian long-run thing, everything is ok.
To me, "markets are imperfect" means something more--that there are structural problems with markets that would make the long run equilibrium imperfect. I believe that this is very likely. If it is true, then other alternatives ought to be considered carefully.
Reply
Reply
I guess my problem is that I don't see quite what the argument is against non-free-market systems if there's no implicit guarantee of optimality in the free-market system. Noticing imperfections seems to leave things as fundamental to the system as legal rights, etc., up for discussion and with room for improvement. And once those are up for discussion, smaller government actions seem possible too.
Am I right in thinking that your reaction to this is "Well, often government actions cause a lot of harm"?
Reply
So, I am for experimentalism even though I think that gov actions cause a lot of harm.
This might depend on too great a faith in our ability to learn from mistakes etc., but I don't see any other way of determining what is better/worse....
Reply
Ok, so this was where I really disagreed with Hayek. Sure, the people on the ground have access to information that is not available to the technocrat in the sky.
The reverse, however, is also true. Do you disagree?
A good (Seb-speak) system will use both to the extent that they are useful.
I feel like whenever we talk about this you lock yourself into a false free-market/totalitarian dichotomy that gives you this all-or-nothing feeling about technocratic rule (why should the benevolent super-AI choose to be a totalitarian when other methods work better?). The free-market system, as posited by laissez-faire Austrian economics, is a technocratic proposal about how to organize an economy, just like any other. It depends on one particular view of human nature. If that view gets challenged empirically and an alternative system is predicted to be more useful in solving social problems, where's the rub? Do you think that any alternative will necessarily discount individual preferences, if they are significant?
But aside from all this, I thought Hayek and von Mises' central planner arguments in the CR reading were silly. They artificially constrained the types of thinking a central planner could do [insisting that planners work with aggregates], and then assumed the only alternative is the "price mechanism".
First, it seems like the price mechanism operates, at least to some extent, in all sorts of impure market systems--welfare states, economies with a few nationalized industries, etc. What I'd like to see is some sort of analysis of how the price mechanism operates under these conditions. The Hayek argument seemed to be saying that prices would somehow "convey information"--what information is conveyed in an impure market?
Second, I think that if Hayek were a little more creative about the techniques available to a central planner he wouldn't be so confident. Theoretically (and given the utilitarian framework we're assuming despite its shadiness), you could have a sort of voting system in which individual preferences were compiled without loss of information and then run a combinatorial optimization algorithm to figure out appropriate distribution. This way, the central planner could also take into account information that he/she/it had special access to. I don't see a rebuttal to this sort of thing coming from Hayek--I don't think he had a good understanding of the tools available.
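To make the idea concrete, here's a minimal sketch of the kind of thing I mean. Everything in it is invented for illustration: three hypothetical people report cardinal utilities over three indivisible goods, and a brute-force "planner" searches all assignments for the one that maximizes total reported utility. This assumes honest reporting and commensurable utilities, which are exactly the sorts of assumptions a real debate would have to examine:

```python
from itertools import permutations

# Hypothetical stated utilities: person -> {good: utility}.
# Names and numbers are purely illustrative placeholders.
preferences = {
    "alice": {"bread": 3, "wool": 1, "fuel": 2},
    "bob":   {"bread": 1, "wool": 4, "fuel": 2},
    "carol": {"bread": 2, "wool": 2, "fuel": 5},
}

def plan(preferences):
    """Toy central planner: assign one good per person so as to
    maximize the sum of stated utilities, by exhaustive search."""
    people = list(preferences)
    goods = ["bread", "wool", "fuel"]
    best_value, best_assignment = -1, None
    for ordering in permutations(goods):
        value = sum(preferences[p][g] for p, g in zip(people, ordering))
        if value > best_value:
            best_value = value
            best_assignment = dict(zip(people, ordering))
    return best_assignment, best_value

assignment, total = plan(preferences)
```

Brute force over permutations blows up exponentially, of course, but that's beside the point: assignment problems like this have well-studied polynomial-time solutions (e.g., the Hungarian algorithm), and integer programming handles far messier versions. The interesting objections are about the inputs (can preferences be honestly elicited and meaningfully aggregated?), not about whether the optimization step is possible.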
Reply
Ok, I clearly haven't expressed the awesomeness that is the peak/end rule well enough.
This is the sort of experiment that they use to test this: (I've heard that they've done this under various other paradigms and with all the obvious variations as well--whenever somebody in the cog sci community comes up with a result, the first thing that happens is ten other people jump on it and raise objections. I mentioned this particular rule because I have seen Daniel Kahneman, who is a god, say himself that it's a pretty robust phenomenon...)
A subject is brought in and is given two stimuli:
Stimulus A: Hand is placed in 34 degree (read: painfully cold) water for ten minutes.
Stimulus B: Hand is placed in 34 degree water for ten minutes. Then, without removing the hand, the temperature of the water is increased by three degrees (so now it is 37 degrees, i.e., oh-so-slightly-less painfully cold), and the hand remains for three minutes.
A is supposed to be less painful, overall, than B--B means more total time with a hand in painful water. But a peak-end rule wouldn't take duration into account; it would average the less painful end in with the peak. So a vote for B is a vote for peak-end.
The subject goes away, dries their hands, etc. Then they are asked to choose one of the two situations to repeat. They choose B (substantially more often than half the time).
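To spell out the arithmetic, here's a tiny sketch of why the two measures disagree. The pain intensities are invented placeholders (call it 1.0 per minute in the colder water, 0.8 per minute in the slightly warmer water); only the shape of the comparison matters:

```python
cold, warmer = 1.0, 0.8  # illustrative per-minute pain intensities

stim_a = [cold] * 10                 # ten minutes of painfully cold water
stim_b = [cold] * 10 + [warmer] * 3  # same, plus three milder minutes

def total_pain(stim):
    # Objective measure: pain integrated over the whole duration.
    return sum(stim)

def peak_end(stim):
    # Peak-end heuristic: remembered pain tracks the average of the
    # worst moment and the final moment, ignoring duration entirely.
    return (max(stim) + stim[-1]) / 2

# total_pain:  A ~ 10.0, B ~ 12.4  -> A is objectively less painful
# peak_end:    A ~ 1.0,  B ~ 0.9   -> B is *remembered* as less painful
```

So B contains strictly more pain than A, yet the duration-blind peak-end average ranks B better, which is exactly the choice subjects make.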
So here's the deal: It's not like it happens to be that people think about the peak and end when they make judgments about the past, but if they thought it was bad they would change how they think.
What happens is that people actually remember an experience as better or worse based on the peak-end rule. They can (and presumably do) go their entire lives, in many cases, not realizing this. It just happens to be the effortless, automatic heuristic we are built to use.
How would you be able to correct for this, even over time, if your very perception of past pain depends on this mechanism?
And even if you did hear about the peak-end rule and tried to apply it to your life, I think it would be incredibly hard. Overriding these intuitive heuristics is a lot of work; in all the standard tests of logical reasoning or induction from experience that people bungle so often, experimenters have run the same tests on trained logicians and statisticians and often gotten similar results. It takes a real conscious effort to beat your own associative pre-monkey brain mechanism and actually work through deductive thought. I don't see any reason why it would be different for recollections of pain.
On the last day, when we were halfway out the door, Friedman thought he could beat this with the comeback, "Well, what if what really matters in life is how you look back on it--I want the best memories I can get; that's how I maximize my happiness."
My only response to this is that maybe this is so, but it seems like an ad hoc moral theory. Really, what does retrospective utilitarianism have going for it? And if you adopted it, it would carry substantial policy recommendations: you would want to give people long, boring stretches of life punctuated by a few big parties, or something like that.
Reply
How much text can I throw at you?
This is the last point:
I get very confused when Friedman says he "distrusts systematic studies" and you say you "distrust cog sci experiments." What does this mean? If you want to say that you're not going to take every "study" at face value, that's great. But I'm not sure there's really any alternative that works better. If you're right about people adjusting to be comfortable with more choices over time, then cog sci experiments, not any sort of anecdotal "common sense," are going to put the nails in that coffin.
I'm not familiar with the choice literature you're referring to, but are you sure you're interpreting those claims correctly? Good scientists rarely make claims beyond what their experiments can show--otherwise they get skewered by colleagues. It sounds like the literature was right about your shampoo experience--you felt a wave of vertigo. Great. So what's your objection? Did the cognitive scientists themselves make the extrapolation that more choice will cause people to live their entire lives in vertigo, thus threatening your libertarian agenda? Or was that an implication you, or some readership-slutty science journalist, read into it?
Similarly, the peak/end guys aren't coming out with sweeping political conclusions. They are studying human cognition and perception. They can be challenged on scientific grounds, as you did above. The arguments about taking peak/end into account in politics are separate from the conclusions about the existence of the peak-end rule itself.
What's amazing to me is that many people seem to get the feeling that an empirical study will threaten a dearly held moral or political claim and so fight tooth and nail against the scientists who are working on studying that claim. This is what the intelligent design people are doing: they see Darwinism as a threat to their idea of moral order, so they dig in their heels and fight the science--you just have to distrust those evolutionary biologists when it looks like their conclusions could unravel God's word.
I feel like this is the free-marketeer reaction against the homo economicus--sure, they let him become fallible, etc., but he remains "generally rational," and his thoughts or at least his actions conform in the long run to those of a perfectly informed, rational agent. And anyone who says different is a heretic using bad science who we ought to just "distrust" away.
Reply