Critical Critical Review Review

Jul 27, 2005 22:27

Ok, I've put this off for a while--I wonder why? The Critical Review (henceforth CR) seminar was full of smart people and was pretty much non-stop thought about important things. I expected that it would all spill over into a giant post immediately when I came back. But I guess instead I faced some burnout, or maybe just made the appropriate…

milton friedman, critical review

Humanity (2) paulhope August 3 2005, 16:02:53 UTC
Meanwhile, we are smart. Well, smarter than lions and tigers and bears. But not that smart. Some things we do about as well as anyone can figure out how to do them (causal reasoning, for example, is a task people perform about as well as the best normative models). Other things we evolved shortcuts for. We simply didn't have the computing power or the proper hardware with which to take an integral over "pleasure"--"pleasure" doesn't mean anything, anyway. So we developed the peak-end rule. It's faster. Similarly, rather than considering all available hypotheses about the world and checking them for empirical confirmation, we pick one and stick to it until long after it would have been discredited by any sort of normatively-operating induction robot.
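(A toy sketch of the difference, assuming you could score "pleasure" moment by moment--the numbers here are made up purely for illustration. The "integral" sums over every moment; the peak-end shortcut only looks at the most intense moment and the last one, and ignores duration entirely.)

    # Toy illustration with hypothetical "pleasure" scores per moment.
    long_mild = [3, 3, 3, 3, 3, 3, 3, 3]   # long, consistently pleasant experience
    short_intense = [1, 9, 2]               # short, one great peak, flat ending

    def total_pleasure(moments):
        # The "integral": add up pleasure over every moment.
        return sum(moments)

    def peak_end(moments):
        # The shortcut: average the most intense moment and the final moment.
        return (max(moments) + moments[-1]) / 2

    for name, exp in [("long_mild", long_mild), ("short_intense", short_intense)]:
        print(name, "integral:", total_pleasure(exp), "peak-end:", peak_end(exp))
    # The integral favors long_mild (24 vs 12), but peak-end favors short_intense
    # (5.5 vs 3.0): duration gets ignored, which is the whole point of the shortcut.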

Smart people, working in the abstract, have come up with methods (logic, statistics, computational algorithms) and once thought that people actually worked like they did. Then psychologists discovered, with robust empirical evidence, that they don't. They called the first bit rationality; the second is our messy jumble of neurons that does, in many cases, "good enough," but is by no means the best anyone or anything can do. The nice term for it is "bounded rationality"--what is rational given a limited amount of time and computing power? We are pretty good within our natural bounds. But we can do better by getting beyond those bounds--investigating something collaboratively and carefully over a long period of time. Physics, for example, was not the creation of one Neanderthal. Neither was Austrian Economics.

I forgot where I'm going.

I guess just that it looks like, in AE and in your models of human nature, individuals overcome their bounded rationality and their behavior converges on something really quite rational. My response is:
(a) There is no guarantee of that convergence. In fact, if it existed, why would you need a market? Or is the market a product of that convergence--one step in some eschatological coming of a long-run utopian equilibrium?
(b) When there is no convergence, you might have to do some things that violate exit in order to make things better.
(c) In general, people are so bad at judging their own "utility" that it's pretty hopeless. Great illustrative study: the comparative happiness of paraplegics and lottery winners. Have I mentioned it before? They are about the same. Happiness depends on intentional focus, not circumstances, but there's no reason why people should ever learn this naturally, since circumstances are what keep you alive and breeding in the wild. So again, naturally crappy hedonists.

I feel like all this posting today has been sort of scatter-brained and meandering, and I have no real comprehension at any particular point of the overall structure of what I'm saying. Again, sorry.


