
I like your thinking, but there's a middle ground before full automation: when humans are incentivized, one way or another, to provide biased reviews. This might be via straightforward employment of people in lower-cost places (e.g. via Mechanical Turk) or other incentives. For example, note how a proportion of Amazon reviews are gamed and unreliable.

At the moment, the only tasks (that I can think of) that come close to 'time-consuming enough not to scale, but not quite annoying enough to put off committed individuals' are the various forms of CAPTCHA - which is unsurprising, given that we're discussing a form of Completely Automated Public Turing test to tell Computers and Humans Apart. (And of course, there are CAPTCHA-solving farms.)

But would people invest time in a review system that required them to complete a form of CAPTCHA regularly?
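As a rough sketch of what such a gate might look like server-side (all names here are hypothetical, and a real deployment would issue an actual CAPTCHA rather than the stand-in token echoed below):

```python
# Hypothetical sketch: a review system that only accepts a submission
# accompanied by the answer to a fresh, single-use challenge.
# ReviewGate and its methods are illustrative names, not a real API.
import secrets


class ReviewGate:
    """Accept a review only when the submitter has solved a fresh challenge."""

    def __init__(self):
        self._pending = {}   # challenge_id -> expected answer
        self.accepted = []   # reviews that passed the gate

    def issue_challenge(self):
        # In practice this would be an image/audio CAPTCHA served to the
        # user; here the "answer" is a stand-in token the client echoes back.
        challenge_id = secrets.token_hex(8)
        answer = secrets.token_hex(4)
        self._pending[challenge_id] = answer
        return challenge_id, answer

    def submit_review(self, challenge_id, answer, review_text):
        # Challenges are single-use: popping means a solved answer
        # cannot be replayed across many reviews.
        expected = self._pending.pop(challenge_id, None)
        if expected is None or answer != expected:
            return False
        self.accepted.append(review_text)
        return True
```

The single-use pop is the point: even a CAPTCHA-solving farm has to pay the per-review time cost, which is exactly the 'time-consuming enough not to scale' property in question.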


