(Untitled)

Nov 17, 2010 19:28

I half-heartedly competed in this Kaggle contest (http://kaggle.com/chess?viewtype=results), a competition to predict chess match outcomes based on observed past matches. ( ... )

Comments (7)

gustavolacerda November 19 2010, 05:55:30 UTC
Cool!

gustavolacerda November 19 2010, 05:59:55 UTC
I would think that the natural loss function here would come from a proper scoring rule, since that by definition incentivizes honest reporting of subjective probabilities.
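To unpack "proper": under a true probability q, the expected score is minimized by reporting exactly q. A minimal numeric sketch in Python (the value q = 0.7 and the grid of candidate reports are illustrative assumptions), using the Brier and log scores, two standard proper rules:

    import numpy as np

    # True subjective probability of the event (illustrative value).
    q = 0.7

    # Candidate reported probabilities on a grid.
    reports = np.linspace(0.01, 0.99, 99)

    # Expected Brier score under truth q: q*(1-p)^2 + (1-q)*p^2.
    brier = q * (1 - reports) ** 2 + (1 - q) * reports ** 2

    # Expected log score under truth q: -[q*log(p) + (1-q)*log(1-p)].
    log_score = -(q * np.log(reports) + (1 - q) * np.log(1 - reports))

    print(reports[np.argmin(brier)])      # 0.7: honesty minimizes the Brier score
    print(reports[np.argmin(log_score)])  # 0.7: honesty minimizes the log score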

random_walker November 19 2010, 07:08:36 UTC
I like the concept of a proper scoring rule (note that quadratic loss is one). However, how often is one actually interested in extracting a subjective probability? In this case the object of interest seems to be the outcome rather than the probability.

Overall I suspect that the loss function doesn't matter much: you either find the "structure" or you don't. However, it is a curiosity that in statistics, fixing the model for the structure (the probability) leaves no freedom in modeling the loss (which is the negative log-likelihood). What happens if I want to fit a Poisson model, but I'm interested in minimizing predictive squared error? Is that impossible? Trivial? Meaningless? I'm not sure.
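One way to make the question concrete: keep the Poisson mean model but swap the objective. A sketch on assumed toy data (the slope 0.8 and the whole data-generating setup are made up for illustration), contrasting the Poisson maximum-likelihood fit with the least-squares fit of the same mean model:

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Toy Poisson regression with a log link and a single slope b.
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 2, 200)
    y = rng.poisson(np.exp(0.8 * x))

    # Objective 1: negative Poisson log-likelihood (the "structural" loss),
    # dropping the log(y!) term, which does not depend on b.
    def neg_loglik(b):
        return np.sum(np.exp(b * x) - y * b * x)

    # Objective 2: predictive squared error, with the same Poisson mean model.
    def sq_error(b):
        return np.sum((y - np.exp(b * x)) ** 2)

    b_mle = minimize_scalar(neg_loglik, bounds=(0.0, 2.0), method="bounded").x
    b_lsq = minimize_scalar(sq_error, bounds=(0.0, 2.0), method="bounded").x
    print(b_mle, b_lsq)  # close but not equal: the choice of loss does matter

So it is neither impossible nor meaningless; it just isn't maximum likelihood anymore, and the two fits coincide only in special cases.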

gustavolacerda November 19 2010, 16:10:40 UTC
If I were the one asking you to predict these things, I would want your best subjective probability... especially if other agents might come to different conclusions.

<< modeling the loss (which is the negative log-likelihood) >>

I'm not aware of this loss function. But surely you can define other loss functions, no?

What is curious to me is that the proper scoring rule is not unique.
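One consequence of that non-uniqueness, sketched with invented numbers: two proper scoring rules can disagree about which of two forecasters did better on realized outcomes, because they penalize confident misses differently.

    import numpy as np

    # Twelve events that all occurred (outcome = 1); the forecasts are invented.
    y = np.ones(12)
    a = np.full(12, 0.7)                 # forecaster A: uniformly hedged
    b = np.array([0.99] * 11 + [0.001])  # forecaster B: confident, one bad miss

    def brier(p):
        return np.sum((p - y) ** 2)

    def log_score(p):
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    print(brier(a), brier(b))          # ~1.08 vs ~1.00: Brier prefers B
    print(log_score(a), log_score(b))  # ~4.28 vs ~7.02: log score prefers A

Both rules are proper, yet they rank the same two forecast records differently, so "scores well under a proper rule" is not a single criterion.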

random_walker November 19 2010, 18:19:58 UTC
I think you are aware of that loss function; after all, the MLE is exactly what minimizes the negative log-likelihood loss. Anyway, my point in bringing it up is perhaps a bit involved; we can talk about it IRL.

Since the proper scoring rule is not unique, perhaps this suggests that a subjective probability does not encapsulate all of one's (un)certainty.

I agree, it seems that one would want more than just a classification when doing a meta-analysis, but how much more...? I don't like subjective probability, but on the other hand it seems to be useful. Then again, there are methods like boosting which do meta-analysis without confidences/subjective probabilities. On the fourth hand, boosting seems very brittle to label noise (http://www.phillong.info/publications/LS10_potential.pdf); see the toy check below.
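A toy check of that brittleness, sketched with sklearn's AdaBoost (the dataset, noise rate, and all settings here are assumptions for illustration, not the construction from the linked Long & Servedio paper): flip a fraction of training labels and watch test accuracy.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic binary classification data.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Train with 0% and then 20% of training labels flipped at random.
    for noise in (0.0, 0.2):
        flip = np.random.default_rng(0).random(len(y_tr)) < noise
        y_noisy = np.where(flip, 1 - y_tr, y_tr)
        clf = AdaBoostClassifier(n_estimators=200, random_state=0)
        clf.fit(X_tr, y_noisy)
        print(noise, clf.score(X_te, y_te))  # accuracy typically drops with noise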

It is interesting.

suicide_sam_e November 25 2010, 21:16:55 UTC
So, you've begun the makings of a fantasy chess league? (as fits my current understanding of fantasy football leagues)

random_walker November 27 2010, 02:03:47 UTC
Rather the opposite.
