...because people don't understand statistics.

Dec 06, 2012 09:12

Prompted by one of my commenters, let's cut to the chase of The Black Swan, since that's necessary before one can tackle Antifragile. In fact, one of the great reliefs of reading Antifragile is that it assumes you've read or understood the previous two books; I am so sick of not finding properly advanced reading material that builds upon rather than repeats!

Arthur summarizes the point of The Black Swan this way: "The technical reason is that you can't tell - even given a somewhat large sample - if there isn't an unseen mode many standard deviations away. Assigning a probability of o(1/N) for X=x is foolish for sample sizes O(N)."

One can never rule out, from a finite amount of data, anything that was (or should have been) in the prior. The rarer an event, the less the data can do to reduce its probability. It is a mistake to fit the model to the data without retaining the prior and without keeping probability mass on rare events, even though most of them will never happen. One should assume that, over a similar span of time, something more extreme than any so-far observed event is likely to happen.*
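To make that concrete, here is a minimal sketch in Python; the mixture weights, threshold, and sample size are invented purely for illustration. A process that hides a far-out mode behind a probability of 1/1000 will, in a sample of 300, usually show no trace of it, so a model fit only to the observed data assigns it no probability at all, even though the chance of it firing over the next similar window is around a quarter.

```python
import random

# Toy model (all numbers invented for illustration): a process that is
# standard normal 99.9% of the time, with an unseen mode many standard
# deviations away the other 0.1% of the time.
P_RARE = 0.001

def draw():
    if random.random() < P_RARE:
        return random.gauss(25, 1)   # the far-out mode
    return random.gauss(0, 1)        # the regime you actually observe

N = 300             # sample size, well below 1 / P_RARE
TRIALS = 10_000

looks_gaussian = 0
for _ in range(TRIALS):
    sample = [draw() for _ in range(N)]
    if max(sample) < 10:             # no trace of the rare mode
        looks_gaussian += 1

# Yet over a comparable future window, the mode is quite likely to fire:
p_future = 1 - (1 - P_RARE) ** N     # roughly 26% per 300 draws

print(f"samples with no hint of the mode: {looks_gaussian / TRIALS:.0%}")
print(f"chance of an extreme draw in the next {N} observations: {p_future:.0%}")
```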

People make this mistake all the time, and it causes a lot of damage when the thing that Can't Happen happens; the problem grows bigger over time. Some of them have an incentive to make the mistake because they don't have to pay for the damage, and some of them do have to pay but make the mistake anyway.

So far, so good, but there's a huge difference between people's naive prediction algorithms being wrong and prediction being impossible or useless. Predictions are hard, especially about the future. Prediction is also both vital and inevitable: one must have some probability distribution over events, whether or not one wants to admit it or say it out loud.

How probable a "Black Swan" is matters a great deal, not just whether one is possible. Taleb's answer is more or less "more": such events happen more often than anyone is willing to admit. Fine, perhaps that is mostly true, but it doesn't tell you the right course of action. How much more likely? Of what form? Taleb's actions guard against, or profit wonderfully from, certain types of possible rare events, but break when faced with other events that seem even rarer, leaving him even worse off than everyone else; he admits this when he says that things that are Antifragile are only Antifragile up to a point.

Betting that the stock market will collapse is a strange bet, because the events that collapse it most effectively also mean you won't get paid your winnings, or won't have any way to spend them. Taleb, as it turns out, won such a bet and made a lot of money, but he doesn't have any way to usefully spend it. He gets more utility out of having been right than out of having the money; in fact, in Antifragile he illustrates this with stories of other people in the same position. At that point, how good were the odds you really got, and does it matter whether you had a "good bet" in terms of dollar Alpha/EV? It seems very easy for Taleb to gain utility by making bad bets on volatility, and just as easy for him to lose utility by making good ones.
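A toy calculation of that last point, with numbers invented for the sketch: a volatility bet can be a "good bet" in dollar EV and still lose expected utility, if it pays off precisely in the states where a dollar is nearly worthless.

```python
# Invented numbers: a bet that pays 30x the stake if a crash happens.
p_crash = 0.05
stake = 1.0
payout = 30.0

# In dollars this is clearly a good bet:
dollar_ev = p_crash * payout - stake           # +0.50 per dollar staked

# Assumed state-dependent value of a dollar: nearly worthless in the
# crash states where the bet actually pays out.
value = {"normal": 1.0, "crash": 0.02}

utility_ev = p_crash * payout * value["crash"] - stake * value["normal"]

print(f"dollar EV:  {dollar_ev:+.2f}")         # +0.50 (good in dollars)
print(f"utility EV: {utility_ev:+.2f}")        # -0.97 (bad in utility)
```

Swap the state values around and the reverse holds just as easily: a dollar-losing bet that pays off when money is most useful, or that buys the pleasure of having been right, gains utility.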

This makes him, of course, just like everyone else he's spent three books criticizing: they make bets that are bad in dollar terms but, given their situations in life, are actually good bets. Sometimes they do it on purpose, sometimes not. Those whom he praises are often doing the same thing.

* Note that power law distributions are common, and often very good at probabilistic prediction of the events Taleb calls Black Swans. Explicit examples from his books include 9/11 and the recent large earthquake in Japan; both were, as Nate Silver shows in The Signal and the Noise: Why Most Predictions Fail but Some Don't, right on the otherwise expected distribution lines. Taleb calls the war that destroyed his home in Lebanon a Black Swan, but it was the eighth time the place had been turned to rubble. This was the Middle East, after all.
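For the curious, here is a sketch of what "right on the otherwise expected distribution lines" can mean in practice, using the Gutenberg-Richter relation for earthquake frequencies that Silver discusses. The b-value and the rate anchor below are illustrative assumptions, not fitted numbers.

```python
# Gutenberg-Richter: the number of quakes of magnitude >= m falls off
# log-linearly in m. Extrapolating the line fitted to common quakes
# assigns rare "Black Swan" quakes a small but very real frequency.
B_VALUE = 1.0         # assumed slope; real-world estimates are near 1
RATE_M6 = 150.0       # assumed anchor: ~150 quakes of M >= 6 per year

def annual_rate(m):
    """Quakes per year of magnitude >= m implied by the straight line."""
    return RATE_M6 * 10 ** (-B_VALUE * (m - 6.0))

for m in (7.0, 8.0, 9.0):
    r = annual_rate(m)
    print(f"M >= {m}: ~{r:.2f} per year (one every ~{1 / r:.0f} years)")
```

A thin-tailed fit to the same small quakes would call a magnitude 9 essentially impossible; the power law keeps it firmly on the menu.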