Dec 03, 2010 01:08
So, advocates of imprecise probabilities say 1) that in the complete absence of evidence about some proposition, we should assign a set of probabilities covering [0,1] to that proposition, and 2) that we should update our probabilities upon learning some evidence by conditionalizing upon each member of our set of probabilities.
But if you think both of these claims are true, you get disaster. First, if you have a (convex) set of probabilities covering the whole interval [0,1], then guess what, you automatically get a set covering [0,1] as the result of conditionalizing upon any piece of evidence whatsoever. But it gets worse! Suppose you sympathize with some sort of strict coherence or regularity condition, and say that total ignorance should be modeled by a set of probabilities spanning the open interval (0,1). Again, though, any update by conditionalization will leave you with (0,1). This is because for any 0 &lt; c &lt; 1, the ignorance set contains a probability function p such that p(H|E) = c, so the conditional probabilities sweep out the whole open interval no matter what evidence E you learn. Total ignorance is inescapable: no amount of evidence ever narrows the set.
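To see the inertia concretely, here is a minimal sketch (my own illustration, not anyone's library): represent each probability function in the ignorance set as a joint distribution over the four cells H&amp;E, H&amp;&not;E, &not;H&amp;E, &not;H&amp;&not;E. For any target value c in (0,1), the set contains a joint whose conditional probability P(H|E) is exactly c, so conditionalizing the whole set on E just gives back (0,1).

```python
def conditionalize_on_E(joint):
    """Return P(H | E) = P(H & E) / P(E) for a joint distribution
    given as a dict over the four cells."""
    p_e = joint["HE"] + joint["-HE"]  # P(E)
    return joint["HE"] / p_e

def ignorance_member(c, q=0.5):
    """A member of the ignorance set whose posterior P(H | E) equals c.
    q = P(E) is arbitrary in (0, 1); the remaining mass is split
    evenly over the not-E cells.  Names here are hypothetical."""
    return {
        "HE":   c * q,          # P(H & E)
        "-HE":  (1 - c) * q,    # P(~H & E)
        "H-E":  0.5 * (1 - q),  # P(H & ~E)
        "-H-E": 0.5 * (1 - q),  # P(~H & ~E)
    }

# For every target c strictly between 0 and 1, the set contains a
# probability function whose posterior is exactly c -- so the set of
# posteriors is still all of (0,1) after the update.
for c in [0.001, 0.25, 0.5, 0.75, 0.999]:
    assert abs(conditionalize_on_E(ignorance_member(c)) - c) < 1e-12
```

The point of the sketch is that "total ignorance" as a set is closed under conditionalization: the update acts on each member separately, and for any desired posterior there was already a member engineered to deliver it.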