Hacker News

This is one of those studies that presents evidence confirming what many people already know. The majority of the bad content comes from a small number of very toxic and very active users (and bots). This creates the illusion that a large number of people overall are toxic, and only those who are in deep already recognize the truth.

It is also why moderation is so effective. You only have to ban a small number of bad actors to create a rather nice online space.

And of course, this is why for-profit platforms are loath to properly moderate. A social network that bans the trolls would be like a casino banning the whales or a bar banning the alcoholics.





It also explains why large platforms can be so toxic. If there were a sport with 1000 players, you would need 100 referees, not 1. At scale, all you can really do is implement algorithmic solutions, which are much coarser and can be seriously frustrating for good-faith creators (e.g. YouTube demonetization).

Arbitrators are good! They can be unfair or get things wrong, but they are absolutely essential. It boggles my mind how we decided we needed to re-learn human governance from scratch when it comes to the internet. Obviously the rules will be different, but arbitrators are practically universal in human institutions.


The stakes are much lower on social media. If a referee makes a bad call then I might lose the game so it's worth paying for sufficient and competent officials. But when I see offensive content on social media I just block it and move on with no harm done. As a user the value of increased governance is virtually zero.

> But when I see offensive content on social media I just block it and move on with no harm done.

You may be in a minority here. Most people react to harmful content when they see it. And that reaction is counted as engagement, which further perpetuates and strengthens the signal.


Content isn't harmful.

CSAM is harmful.

Algorithms and perverse incentives are currently boosting that signal, however. Take, for instance, this story from The Atlantic: https://www.theatlantic.com/ideas/2025/12/american-anti-semi...

"Last week, the Yale Youth Poll released its fall survey, which found that “younger voters are more likely to hold antisemitic views than older voters.” When asked to choose whether Jews have had a positive, neutral, or negative impact on the United States, just 8 percent of respondents said “negative.” But among 18-to-22-year-olds, that number was 18 percent. Twenty-seven percent of 18-to-22-year-olds strongly or somewhat agreed that “Jews in the United States have too much power,” compared with 16 percent overall and just 11 percent of those over 65."

It's easy to get exposed to extreme content on instagram, X, YT and elsewhere. Incendiary content leads to more engagement. The algorithms ain't alright.


Any site with UGC should include posting frequency next to the name of posters, each time they appear on pages. If a post is someone's 500th for that day, that provides a lot of valuable context.

Ratio of posts:replies, average message length, and average message energy (negative, combative, inflammatory, etc) provide decent signal and would be nice to see too. Most trolls fall into distinct patterns across those.
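A minimal sketch of those per-account signals, in Python. All names here are hypothetical, and the keyword list is a crude stand-in for a real sentiment or toxicity model:

```python
from dataclasses import dataclass

# Naive keyword list standing in for a real sentiment/toxicity model (assumption).
NEGATIVE_WORDS = {"idiot", "stupid", "pathetic", "hate", "moron"}

@dataclass
class Post:
    text: str
    is_reply: bool

def troll_signals(posts: list[Post], days: float) -> dict:
    """Compute the rough per-account signals described above."""
    replies = sum(1 for p in posts if p.is_reply)
    top_level = len(posts) - replies
    total_words = sum(len(p.text.split()) for p in posts)
    negative_hits = sum(
        1 for p in posts for w in p.text.lower().split() if w in NEGATIVE_WORDS
    )
    return {
        "posts_per_day": len(posts) / days,
        "post_reply_ratio": top_level / max(replies, 1),
        "avg_words_per_post": total_words / max(len(posts), 1),
        "negativity": negative_hits / max(total_words, 1),
    }
```

A site could surface these numbers next to the username; none of them is conclusive alone, but the combination (very high frequency, very short posts, high negativity) matches the patterns described.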

I’m of the belief that HN would benefit from showing a user’s up-votes and down-votes, and perhaps even the posts they occurred on. Also limit down-votes per day, or at least make karma points pay for them. There is definitely an “uneven and subjective” distribution of down-votes, and it would be healthy to add some transparency.

> I’m of the belief that HN would benefit from showing a user’s up-votes and down-votes

Posting history of any account is already visible. That's already a lot of transparency.

Having individual account vote histories visible won't add any benefit, but will increase ad hominem attacks within discussions, and will also increase hive-mind type voting.

Hacker News is just a discussion forum. This isn't real life. The focus should be on the discussion more than the people; there's not much in the way of profiles. It's weird to ask more of a discussion forum than of a government election process, which actually does matter.

> Also limit down votes per day

Posts can already only be downvoted for a limited duration. You also can't downvote until you have 500 points.

If you require spending points downvoting, it simply further incentivizes point harvesting or spending all day on HN so you can have more power, which only drives non-genuine activity.

I think personally once you reach 1000 points a star should be displayed instead of the score.


Why wouldn’t vote history help? If I see a DV from a chronic DV’er, that has no value. I’m not going to adjust my opinion or behavior because of some asshole. Along the same lines, if I say something less-than-flattering about, say, Google, but then get DV’ed by some Google fanboy/fangirl… again, that feedback has no meaning.

Context matters. Sources matter.

As for incentives… yeah, if you’re a troll-hole it does. But publishing your DV count also exposes you.

There should be more transparency.


A simple display of posts in the last X minutes/days is a great heuristic of authenticity, without me having to dig through post histories or feed them into an LLM. LLM integration that I can control, e.g. options I can enable or disable like "Block posts with logical fallacies and inform the poster of the block without notifying me", would be awesome.
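That recent-post-count display could be sketched as a simple trailing-window count. This assumes each account's post timestamps (epoch seconds) are kept in ascending order; the function name is hypothetical:

```python
from bisect import bisect_left

def posts_in_window(timestamps: list[float], now: float, window_s: float) -> int:
    """Count posts in the trailing window of `window_s` seconds.

    timestamps: ascending epoch-second post times for one account.
    """
    # Everything at or after (now - window_s) is inside the window.
    return len(timestamps) - bisect_left(timestamps, now - window_s)
```

Binary search keeps this O(log n) per lookup, so a site could cheaply render the count next to every username on the page.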

Looking through post histories manually is time consuming and ultimately wouldn't work if new account creation is free. Once you start spending more time on your haters than they spend on you, you've lost.


One of the best things platforms started doing is showing an account's country of origin. Telegram started doing this this year, using the user's phone number country code when they cold DM you. When I see a random DM from my country, I respond. When I see it's from Nigeria, Russia, the USA, etc., I ignore it.

It's almost 100% effective at highlighting scammers and bots. IMO all social media should show a little flag next to usernames showing where the comment is coming from.


Yes, but as soon as scammers find their current methods ineffective they will swap to VPN and find a way to get "in country" phone numbers.

There is a fundamental problem with large scale anonymous (non-verified) online interaction. Particularly in a system where engagement is valued. Even verified isn't much better if it's large scale and you push for engagement.

There are always outliers in the world. In their community they are well known as outliers, and most communities don't have anyone that extreme.

Online every outlier is now your neighbor. And to others that "normalizes" outlier behaviors. It pushes everyone to the poles. Either encouraged by more extreme versions of people like them, or repelled by more extreme versions of people they oppose.

And that's before you get to the intentional propaganda.


In-country phone numbers are quite hard to get, since they have to be activated with ID. Sure, scammers could start using stolen IDs, but that's already a barrier to entry. And you are limited in how many phone numbers you can register this way.

Presumably with further tie ins to government services, one would be able to view all the phone numbers registered in their name to spot fraud and deactivate the numbers they don't own.


It is very much like crime in general. The vast majority of crimes committed each year are by a tiny minority of people. Criminals often have a rap sheet as long as your arm, while a huge percentage of the population has never had a run-in with the law except for a few traffic or parking tickets.

While crime is definitely a major problem, especially in big cities, it only takes a few news stories to convince some people that almost everyone is out to get them.


One danger is that the visibility of toxic people can actually create large numbers of genuinely toxic people. For example, when mainstream influencers or politicians endorse racist views, even indirectly, it can give others permission to start saying the same things. That then pushes the other side further toward their own extreme. And so on.

> this is why for-profit platforms are loath to properly moderate

They measure the wrong things. Instead of measuring intangibles like project outcomes or user sentiment they measure engagement by time spent on site. It's the Howard Stern show problem on a "hyper scale."

> A social network

Given your points we should probably just properly call them "anti-social networks."


> And of course, this is why for-profit platforms are loath to properly moderate. A social network that bans the trolls would be like a casino banning the whales or a bar banning the alcoholics.

How so? It's not like Facebook charges you to post there.


The idea, I think, is that other users engaging with them drives views. Without trolls to dunk on or general posts to get mad about, why go on Twitter?

I've always wondered who these people are, like demographically.

We hold (or I do at least) certain stereotypes of what type of person they must be, but I'm sure I'm wrong and it'd be lovely to know how wrong I am.


Paid trolls from lower-income countries, or Israel's Unit 8200.

Interesting idea. Any numbers to back it up?

It doesn't take a lot of pee to spoil the soup.

Furthermore, this illustrates well how that handful of trolls erodes the mutual trust that makes modern civilization function. People start to get the impression that everybody is awful, and act accordingly. If this is allowed to continue to spiral, the consequences will be dire.


