2M “subscribed” users to a subreddit translates to roughly 10%, or 200,000, daily viewers, with about 1% of those commenting and 0.1% posting: 2,000 comments and 200 posts per day. Some subs will be more or less, but those are the magnitudes we’re talking about.
The equivalent situation, assuming mods review every comment, would be a subreddit with 1 billion users. If mods are only reviewing reports, which is some small percentage of comments, then adjust accordingly.
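The funnel above is just a few multiplications; a quick sketch with the same assumed rates:

```python
# Back-of-envelope funnel for a 2M-subscriber subreddit.
# The 10% / 1% / 0.1% rates are the rough assumptions from the comment above.
subscribers = 2_000_000
daily_viewers = int(subscribers * 0.10)   # ~10% visit on a given day
comments = int(daily_viewers * 0.01)      # ~1% of viewers comment
posts = int(daily_viewers * 0.001)        # ~0.1% of viewers post

print(daily_viewers, comments, posts)     # 200000 2000 200
```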
Moderation workflow starts with user reports on posts/comments. At scale you can set a report threshold as well, which dramatically reduces the number of items to review.
Obviously, scale up the number of reviewers.
Obviously, add the standard escalation mechanisms: warnings, temporary/permanent bans, etc.
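The report-threshold idea can be sketched in a few lines. This is a minimal illustration, not any platform's actual pipeline; the threshold value and names are made up:

```python
# Minimal sketch of a report-threshold review queue.
# REPORT_THRESHOLD is a hypothetical cutoff; real systems tune it per community.
from collections import Counter

REPORT_THRESHOLD = 3

reports = Counter()   # item_id -> number of user reports so far
review_queue = []     # only items that cross the threshold reach humans

def report(item_id):
    reports[item_id] += 1
    # Enqueue exactly once, at the moment the item crosses the threshold.
    if reports[item_id] == REPORT_THRESHOLD:
        review_queue.append(item_id)

for item in ["post_1", "post_1", "post_2", "post_1", "post_1"]:
    report(item)

print(review_queue)  # ['post_1'] -- post_2 never reached 3 reports
```

The point of the threshold is that moderators only ever see the queue, not the full firehose of reports.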
...
Very well put, it's not even close to the same ballpark. It would also be like moderating a 1-billion-user subreddit where nearly every single post is off topic and comes from different users, whereas on subreddits a few super users often produce a huge percentage of the content.
This is a very efficient moderation force. I'd be happy to get away with 10s of thousands of moderators per billion users. That's still under 100 per million, i.e. under 1 per 10,000.
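A quick sanity check on those ratios, taking 50,000 as an assumed midpoint for "10s of thousands":

```python
# Sanity check on the moderator-to-user ratios quoted above.
moderators = 50_000            # assumed midpoint of "10s of thousands"
users = 1_000_000_000

per_million = moderators / (users / 1_000_000)
print(per_million)             # 50.0
print(per_million < 100)       # True: under 100 per million,
                               # which is the same as under 1 per 10,000
```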
Try multiplying your numbers by a few thousand; billions of people use these platforms daily. Reddit user interaction is also very different in regard to celebrities.
Sure, but bias is going to be a factor with any system of evaluation. It happens with human moderators, it happens with automated systems, it happens in corporate performance reviews and it happens among 12 of a defendant’s peers convened for a criminal trial. The existence of bias doesn’t pertain to whether or not a given system is scalable.
It is relevant to the objective existence of a problem though, as well as the magnitude of the importance of the problem (importance may vary substantially per instance of censorship).
My experience:
- yes, there is inherent bias, accidents, etc.
- an escalation system can provide a double check against petty judgments. Back pressure can be created through the possibility of greater penalties.
- obviously there needs to be a bias toward credible accounts (older, more active, verified IRL, etc.) to avoid throwaways.
Isn't Reddit the counterexample?
A dozen of us moderated a controversial sub with 2M users, no problem.