"In machine learning, this is [Occam’s razor] often taken to mean that, given two classifiers with the same training error, the simpler of the two will likely have the lowest test error. Purported proofs of this claim appear regularly in the literature, but in fact there are many counter examples to it, and the “no free lunch” theorems imply it
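
To make the point concrete, here is a minimal sketch (assuming NumPy and scikit-learn are available; the dataset and the particular models are illustrative choices, not taken from the quoted paper) showing that equal training error leaves the ordering of test errors undetermined: two classifiers can both fit the training set perfectly and still generalize differently.

```python
# Two classifiers that both drive training error to zero, yet differ on test error.
# Illustrative only: make_moons data, 1-nearest-neighbor vs. an unpruned decision tree.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Noisy two-class data, split evenly into train and test.
X, y = make_moons(n_samples=600, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

models = {
    "1-NN": KNeighborsClassifier(n_neighbors=1),        # memorizes the training points
    "full tree": DecisionTreeClassifier(random_state=0) # grows until every leaf is pure
}

for name, clf in models.items():
    clf.fit(X_train, y_train)
    train_err = 1 - clf.score(X_train, y_train)  # both come out at (essentially) zero
    test_err = 1 - clf.score(X_test, y_test)     # these generally differ
    print(f"{name:10s} train error = {train_err:.3f}  test error = {test_err:.3f}")
```

Which of the two ends up with the lower test error here depends on the noise level and the particular split, not on which model looks simpler, which is exactly the gap the quote is pointing at.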