Jan 26, 2011 13:23
I don't know if any of you are familiar with Amazon's Mechanical Turk program, but basically you perform "human intelligence tasks", like categorizing search terms or picking images that fulfill a prompt, for small amounts of money. Your "work" can be accepted or rejected depending on quality (rejections are rare unless you're just randomly clicking).
Sooo I'm doing one that involves deciding whether a posted recipe is "elegant", "simple", "healthy", or "kid-friendly". Each term has a given set of parameters (i.e., "simple" should be fewer than 9 ingredients and 6 steps), and you just mark yes or no. So I get to a dish that is, essentially, ice cream with espresso on top. I mark "no" for kid-friendly...because who in their right mind gives espresso to children? It gets rejected. (They match multiple users' answers, and majority rules.)
Now, technically, under their parameters, the recipe WAS "kid-friendly" because it had fewer than 9 ingredients and a fast prep time. However, given that these are supposed to be HUMAN INTELLIGENCE TASKS, I used my power of HUMAN INTELLIGENCE to determine that a recipe that is essentially "sugar topped with caffeine" is not kid-friendly. I mean...if all they needed to determine kid-friendliness was brevity, why couldn't they just write a script to judge that?
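Seriously, a rule that boils down to "short ingredient list plus fast prep" is something a few lines of code could check. Here's a rough Python sketch of what I mean (the field names and the 15-minute cutoff are just my guesses, since they only ever said "fast prep time"):

```python
# Rough sketch of the "kid-friendly" rule as they apparently defined it:
# brevity only, nothing about what's actually in the recipe.
# The recipe fields and the 15-minute cutoff are made-up assumptions.

def is_kid_friendly(recipe):
    """Their apparent rule: short ingredient list plus fast prep."""
    return len(recipe["ingredients"]) < 9 and recipe["prep_minutes"] <= 15

# The dish in question: ice cream with espresso poured over it.
affogato = {
    "ingredients": ["vanilla ice cream", "espresso"],
    "prep_minutes": 5,
}

print(is_kid_friendly(affogato))  # True -- no human intelligence required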
lame, rant