Hacker News | georgeutsin's comments

Those skills are typically built through training anyway, but MTurk doesn't seem to have the necessary training pipeline.


How do you deal with respondents who just created their account?


Technical and demographic user metrics are federated across the platform (we supply tasks to other online labor markets, essentially where bots and farms typically attack), so we can leverage priors and shared similarity weights. Behavioral user metrics aren't the only way to evaluate a respondent.
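The comment doesn't spell out the mechanics, but one common way to combine a cross-platform prior with observed behavior is a Beta-Bernoulli trust score. This is a hypothetical sketch, not the platform's actual system; the class name, prior pseudo-counts, and update rule are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class RespondentTrust:
    """Beta-distributed trust score seeded from a cross-platform prior."""
    alpha: float  # prior "good responses" pseudo-count
    beta: float   # prior "bad responses" pseudo-count

    def update(self, passed: bool) -> None:
        # Fold in one observed quality-check outcome.
        if passed:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def score(self) -> float:
        # Posterior mean probability of a good response.
        return self.alpha / (self.alpha + self.beta)

# A brand-new account starts from the federated prior rather than from zero history.
new_user = RespondentTrust(alpha=8.0, beta=2.0)  # prior from similar cohorts
new_user.update(passed=False)
print(round(new_user.score, 3))  # 8 / 11 ≈ 0.727
```

The point of the sketch is that a fresh account inherits a non-trivial prior from similar cohorts, so it can be scored before it accumulates any on-platform behavior.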


Even with higher pay, there aren't any real repercussions for doing bad work. Sure, there are different "tiers" of turkers, but realistically anyone can just create a new account once their rating drops low enough.


The problem with quality control is that there are a lot of edge cases; even known "high accuracy" turkers sometimes exercise bad judgement, which means every piece of data needs to be validated anyway, whether by the researchers themselves or by another paid contractor.

My undergrad thesis was to build https://tagbull.com, where we tried to have turkers validate the work of other turkers by breaking each label into sub-tasks and requiring multi-turker consensus on those before moving forward.

The main issue we ran into is that the incentive system is incredibly misaligned with the responsibility the turkers carry. It's very difficult to build trust with a crowd of people who haven't signed contracts and who face virtually no repercussions for doing bad work, whether intentional or unintentional.
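The sub-task consensus step described above can be sketched as a simple vote-threshold check. This is a minimal illustration, not TagBull's actual implementation; the vote minimum and agreement threshold are invented parameters:

```python
from collections import Counter

def consensus(answers, min_votes=3, threshold=0.7):
    """Accept a sub-task label only when enough workers agree.

    answers: list of labels, one per worker, for a single sub-task.
    Returns the winning label, or None if consensus was not reached.
    """
    if len(answers) < min_votes:
        return None  # not enough independent judgements yet
    label, votes = Counter(answers).most_common(1)[0]
    return label if votes / len(answers) >= threshold else None

print(consensus(["cat", "cat", "cat", "dog"]))  # "cat" (75% agreement)
print(consensus(["cat", "dog", "bird"]))        # None (no majority)
```

Only sub-tasks that clear the threshold move forward; everything else is re-queued for more workers, which is where the misaligned incentives bite, since each extra vote costs money.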



Looking forward to using this tool! Are there plans to make this open source?


It is based on an existing open source project:

https://github.com/pdfcpu/pdfcpu


Here’s the open source repo: https://github.com/jufabeck2202/localpdfmerger


This sounds like an echo chamber


Yes. To address this, I have an FAQ entry for it (https://linklonk.com/about): "Is it a filter bubble? On LinkLonk you pay attention to those whom you choose to pay attention to. In a sense, LinkLonk is a filter bubble.

A filter bubble is a problem when a system chooses content to show to you without giving you clear control or an explanation of how it came up with these recommendations.

On LinkLonk the ranking mechanism is transparent and easy to understand. LinkLonk does not try to guess what you would like; what you see is controlled by your explicit ratings. For example, when you see a recommendation from other users, LinkLonk explains which links you have in common with them."
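LinkLonk's actual ranking isn't published in this thread, but the "links you have in common" explanation can be illustrated with a toy overlap computation. All names and the data layout here are hypothetical:

```python
def shared_links(my_ratings, their_ratings):
    """Links both users rated positively, shown as the 'why' behind a recommendation.

    Each ratings dict maps a link to True (upvoted) or False (downvoted).
    """
    return sorted(
        link for link, liked in my_ratings.items()
        if liked and their_ratings.get(link)
    )

me = {"a.com": True, "b.com": True, "c.com": False}
them = {"a.com": True, "b.com": True, "d.com": True}
print(shared_links(me, them))  # ['a.com', 'b.com']
```

Because the overlap is computed only from explicit ratings, the explanation shown to the user is a direct readout of their own choices rather than an inferred preference model.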

Also, I think "echo chamber" (compared to "filter bubble") is often used to describe a dynamic that emerges in groups - when members of a group reinforce who is in and who is out of the group. I think LinkLonk avoids this problem by not having the concept of a group. Every user decides for themselves who they want to hear from. There is no boundary to reinforce - no echo chamber walls to erect.

I don't know for sure whether it would become a harmful echo chamber or a useful tool that helps you find high signal-to-noise information. I'd like to give it a try to find out.

