Sure, but bias is going to be a factor with any system of evaluation. It happens with human moderators, it happens with automated systems, it happens in corporate performance reviews, and it happens among the 12 of a defendant’s peers convened for a criminal trial. The existence of bias doesn’t bear on whether a given system is scalable.
It is relevant, though, to whether a problem objectively exists and to how serious it is (the severity may vary substantially from one instance of censorship to the next).
My experience:
- yes, there is inherent bias, accidental error, etc.
- an escalation system can provide a double check against petty judgments. Back pressure can be created through the possibility of greater penalties.
- obviously there needs to be a bias toward credible accounts (older, more active, verified IRL, etc.) to avoid throwaways.
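To make the idea concrete, here's a minimal sketch of what that scheme could look like: reports weighted by account credibility, with borderline cases escalated for a second review rather than removed outright. All names, weights, and thresholds are illustrative assumptions, not any platform's actual policy.

```python
# Hypothetical sketch: credibility-weighted reports plus an escalation
# tier. Thresholds and scoring are made-up assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int        # older accounts weigh more
    posts: int           # activity level
    verified_irl: bool   # real-identity verification

def credibility(acct: Account) -> float:
    """Crude credibility score in [0, 3]; throwaways score near 0."""
    score = min(acct.age_days / 365, 1.0)       # up to 1.0 for account age
    score += min(acct.posts / 1000, 1.0)        # up to 1.0 for activity
    score += 1.0 if acct.verified_irl else 0.0  # bonus for IRL verification
    return score

def review(reporters: list[Account], remove_at: float = 4.0,
           escalate_at: float = 2.0) -> str:
    """Sum credibility-weighted reports; borderline totals go to a
    second reviewer (the double check) instead of straight to removal."""
    total = sum(credibility(a) for a in reporters)
    if total >= remove_at:
        return "remove"
    if total >= escalate_at:
        return "escalate"
    return "keep"

# A pile-on of ten brand-new throwaway accounts carries almost no weight:
throwaways = [Account(age_days=1, posts=0, verified_irl=False)] * 10
print(review(throwaways))  # prints "keep"

# Two established, verified accounts carry far more:
veterans = [Account(age_days=2000, posts=5000, verified_irl=True)] * 2
print(review(veterans))    # prints "remove"
```

The point of the `escalate` tier is exactly the back-pressure mechanism above: a single petty moderator judgment can't finalize a removal in the borderline band.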
No problem, except for the people who may have been censored due to moderator bias or error.