Ask HN: More types of flagging?
132 points by pg on Dec 20, 2009 | 166 comments
Right now to flag something is to recommend that it be killed. I'm thinking of adding additional types of flags that are more specific, and less draconian. The goal is to create things between a flag and a downvote, or maybe even to replace downvotes in some cases. The two most obvious options are to allow comments to be flagged as uncivil, and frivolous. The numbers of these flags (but not who issued them) would be displayed to whoever posted the comment in question. So this would be a way to tell someone to stop being a jerk, or posting dumb stuff, without having to write a comment to say so.

Any opinions about the types of flags there should be, and how they should work?



The only oddity for a feature like that is that it's a bit like "judging" their comment.

I don't know how to explain it, but I'd almost rather the flag options be worded as "Sorry, this comment seems: uncivil" or "Sorry, this comment seems: off-topic" to lighten it a bit. If they were a friend in real life, I'd take them aside and say "hey man, that comment was great, but you might want to soften the tone. It kind of came across as rude."

Downvotes end up being seen as judgments of your ideas, and I would love to be able to provide better feedback. It would be great if this could somehow give the user a nudge that they wouldn't feel defensive about.

It may seem like a minor point, but I feel a little guidance would go a long way in helping users get the sort of social cues they already get in a real-life conversation.

Edit: Instead of calling it out as flagging, I'd be okay with a downvote system that simply/optionally let you specify a reason to provide (with 3-4 common nicely-worded options). I don't care about upvote reasons at all.


What if the incivility flag actually included a message, so you could literally say what you're suggesting? What if it was, in effect, an anonymous comment that had no reply link and was only visible to the submitter of the parent comment?


I don't mind having the option to click one of a small number of specific tiny phrases describing my downvote: "spam", "offtopic", "bad title", "dupe". But free-form responses? I want to subtly (or unsubtly) correct people, not start up a relationship with them. Indeed, given that a lot of bad posts are made by trolls, a relationship longer than one mouse click is the one thing you don't want to start.

I don't want to waste valuable time figuring out exactly how sure I am that a particular title is bad, or how polite I should be, or whether or not I should take advantage of this high-bandwidth feedback mechanism to start some kind of conversation.

Learn the design lesson of Twitter: More degrees of freedom doesn't necessarily help.


I think free-form response might be a bad idea as well. One of the main problems is people being mean-spirited - now we're going to give these same people the option to anonymously write whatever they want to someone, without even the (modest) fear of being down-voted for their comment?

I think this is a GREAT idea if it uses pre-selected phrases to convey specific / useful info to the author.


But there wouldn't be a relationship, since the flags would be anonymous, and I presume filling in a comment would not be necessary for flagging.

Given those two prerequisites, I think it's a great idea.


I like this 'private feedback' idea for more than just incivility: it would work well for tiny correction suggestions that the author could helpfully use (esp. in the first 2 re-edit hours) but that would otherwise pollute the thread with minutiae.


That would work great for my part-- I'd assume moderators/admins could see it, too.

I'd hate for people to abuse the anonymous nature of it sometimes and be rude-- we haven't had any "private" messaging so far on HN, so this would open a little of that world for moderators to think about.


I'd assume moderators/admins could see it, too.

Yes, it would definitely be a good idea, if private comments attached to flags are implemented, to say explicitly that moderators/curators are allowed to view all of them.


I like the idea of a small (<150 character) anonymous comment alongside the flag.

A few other thoughts:

Would it be helpful to have one-word positive and negative flags like: offtopic, uncivil, helpful, insightful, inspiring, trolling, etc.? That way, positive as well as negative behavior could be reinforced.

Would it be helpful to have a delay in how soon the flag shows up? So much online discussion gets aggravated: people get angry at being downvoted, flagged, etc., and then they quickly flame back. If there was a delay in when the flags showed up, would it decrease people's tendency to post "why did I just get downmodded" an hour later?


That is a really good idea, but I suggest making sure that the commenter gets notified of that private reply (instead of having to check it manually).


I don't much like the idea of private messaging, but I really don't want to be explicitly notified of anything that has been typed by someone who I have not specifically designated as a friend.

I have enough spam and distraction in my life.


If I did this, I'd make it work like comments work now. In fact, these messages would literally be comments, with some extra flag set.


I take it that these comments would only appear to the poster? This means new users won't see these moderating comments/suggestions, and hence may miss out on this education until it happens to them. How hard would it be to add a flag to each user's profile so that they see these comments until their karma reaches a certain level (50?), and then they get the option to turn them off?


How about automatically hiding all child messages from the original post's page but show them on the message's page itself. As a naive observer trying to learn the norms, one could click on the message and see dissent.


Replacing downvotes with a flagging system or similar might be a good idea: I've been noticing that people tend to downvote other users not because their comments are "uncivil" or "frivolous", but because they express an opinion different from theirs.

I've seen many perfectly-civilized comments being downvoted, presumably, because they are controversial, though I have seen others that deserve downvotes because of their nature.

The way I see it, replacing downvotes with a comment flagging system would probably reduce the number of rude comments, because they'd probably be killed soon enough. However, it eliminates the penalty of being rude (a drop in karma), and some visible form of penalization must be put in place for it to be effective in deterring users from being rude. A workaround might be that for every killed comment (due to numerous flags) a user gets -5 or -10 karma.

Things would behave differently with posts, but these are just my first thoughts (when it comes to commenting).


I've been noticing that people tend to downvote other users not because their comments are "uncivil" or "frivolous", but because they express an opinion different from theirs.

I bookmarked this HN comment

http://news.ycombinator.com/item?id=117171

a while ago, because it expressed the view of pg in an earlier era of HN that "it's ok to use the up and down arrows to express agreement. Obviously the uparrows aren't only for applauding politeness, so it seems reasonable that the downarrows aren't only for booing rudeness."

But it's interesting to see pg opening up today's discussion of new, more nuanced, flags, and that would change the core meaning of a downvote. I'm all for learning a new system of flagging and voting if that helps encourage a thoughtful community.


I sometimes downvote things that seem mistaken. I think most users do. This didn't use to be a problem when there were only a couple thousand users who often knew one another personally. The problem is voting to agree/disagree combined with a larger and probably less thoughtful audience. That's when it starts to feel like a mob.


Using downvotes as a way of expressing disagreement is, I believe, unfair to a commenter if he/she is courteously stating his/her point of view or idea, mainly because downvoting inflicts a penalty on the user's karma - which many users, I surmise, care about.

I don't think users should be penalized for siding with one aspect of a discussion just because other users disagree and proceed to downvote based on their own personal opinion. It's not even the best way of determining majority (which I believe is, partly, the purpose of the system), since not every user makes use of the feature.

Some sort of hybrid might be the most comprehensive solution. Probably something along the lines of upvotes raise karma (compensating good input), downvotes don't affect karma (their purpose becomes, then, to provide insight on the general opinion of a subject), and flags affect karma negatively for rude or uncivilized comments.


Strictly speaking a vote does affect someone's karma. But it doesn't cause their karma to be net decreased as a result of posting a comment unless you downvote the comment below 1, which most people don't do lightly. I think of the up and downvoting as a collaboration to determine how much additional karma someone should get for posting a new comment.


which most people don't do lightly

Perhaps this is no longer true?

Regardless, the fact that downvoting is just as easy as upvoting (and has no undo) strikes me as odd.

I suggest downvoting cost the voter something, not just the votee. 1:1 seems simple enough. It would also give greater pause to newer users than veterans. However, an undo (perhaps timed like comment editing) would be more desirable in this case.


I agree, it should cost the voter 1:1 to downvote a comment, but only one that's already below 0.


So the original purpose of binding comment scores to karma was to _supplement_ the overall karma of a user? Not to be confused with one of the core ideas behind the whole system?

Regardless of the original purpose, I think that fomenting conversation should be one of the most prominent ideas behind the whole karma deal. The comments and the conversations are what make HN different (and, of course, demographics, which consequently means that the comments of this given demographic, self-targeted, are what makes HN great), and I know many concur with that statement.


Yes to the first question: the better comments you make, the more points you get. I'm not sure what you mean by the second.


What I mean by the second part is that comments shouldn't be a mere supplement to karma, but one of the most prominent factors affecting it.


My voting behavior is pretty closely related to the comment score: if I think the score is "right", I usually leave it alone, but if it's too high I'll downvote for disagreement. I think there's a big difference between downvoting a comment at >2 and downvoting a comment at 1 or below.

I also routinely upvote comments I disagree with but find well-reasoned nonetheless. Especially if they are unusually difficult for me to refute.


There is clear statistical evidence that many (perhaps most) users view scores this way. On the whole that is probably good, because it tends to prevent scores from going to extremes in either direction.


I wonder if there's a statistical test for mob voting... a mob vote is a vote reinforcing an already extreme score, a mob voter is someone whose votes are usually mob votes?
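One way to make that definition concrete is a sketch like the following (Python; the extremeness threshold and majority cutoff are assumptions, not anything HN implements):

```python
# A "mob vote" reinforces an already-extreme score in the same direction;
# a "mob voter" is someone whose votes are mostly mob votes.
EXTREME = 5  # assumed threshold for "already extreme"

def is_mob_vote(score_before, direction):
    """direction: +1 for an upvote, -1 for a downvote."""
    return abs(score_before) >= EXTREME and (score_before > 0) == (direction > 0)

def is_mob_voter(votes, cutoff=0.5):
    """votes: list of (score_before, direction) pairs for one user."""
    if not votes:
        return False
    mob = sum(is_mob_vote(s, d) for s, d in votes)
    return mob / len(votes) > cutoff
```

A real test would also want to control for the fact that good comments legitimately attract many same-direction votes.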


How about having tags, and then letting the community dynamically decide which ones are standard? I think this would work because the community is fairly observant of the rules. Plus you could shape behavior by suggesting certain tags when the user wants to add a new tag to a submission. On top of covering the use cases you detail, it would:

1) Help with search if it is ever implemented

2) Allow users to quickly discern the subject of articles on the front page without ever opening them (especially helpful for ambiguous titles)

3) Maybe in the future allow users to subscribe/unsubscribe to certain tags if the front page starts getting too busy


Hmm, interesting. The problem though is that these tags are supposed to be private, so there would be no force to make customs converge.


Have good tags (common, not blacklisted, above a threshold) not be private, and pre-seed the list of tags. Have a karma threshold to see tags? Would talking about an article's tag be verboten in the article's comments, or just deserve a 'meta' tag? I could see an article's comments being a force for custom convergence.


I think m0th87 was suggesting general, public tagging, which always sounds like a good idea but is really hard to implement well.


The number one kind of alternate flag I'd like is 'review headline' rather than 'kill'. There are many stories that would be OK, if they didn't have a misleading headline.


Or "review link" for that matter. It isn't all that uncommon to have an interesting story that links to a page that is just repeating content from a better and more complete source.


The solution is just to flag these and post the better link, since you can't change the link after the fact (which makes sense). I'm sometimes guilty of this, though sometimes I do think there is editorial content which adds to the original story.


Moderators can update the headline and link; fixing in place can be better than another submission on the same topic, because it aggregates upvotes and conversation in a single place.


Given how rarely it happens (though it does happen) and the volume of links, I suspect there just aren't enough moderators. It might therefore be better to give the community the tools to enforce this.


Okay, so why not have something that submits "better link and headline" and that can be voted up? Pass a threshold and the original headline and link get swapped out.


I haven't really found "misleading headlines" to be much of a problem. I don't really see a correlation between adjusting article titles and improving the discourse on HN.


I think you should add this as an option, and only for the karmarific users. This way, less than 10% of the site would actually use this, and since these are likely to be the oldest and community invested users, the flagging would always lag behind the movement of the site userbase, leading to a stabilizing force in community feel.


This concept needs more exploration. The delay/lag seems very promising on cursory inspection.


How about letting those with big karma attach tags to their downvote? Others could agree by downvoting the tag. This allows a slightly more organic approach where the best tags emerge over time. Maybe anyone over (lower) karma can propose a tag but the tag only applies if a few agree on it.


I prefer to use my brain for absorbing news here, not for mastering popularity games. With more ranking options, HN would start to look like work.


For handling individual comments, I think the better system would be to have two-dimensional comment rating. One dimension, upvote/downvote, is "valuable contribution/not-valuable contribution". A second dimension, agree/disagree, allows expressing support or dissent without the connotation of reward/censure that upvote/downvote has.

Agree/disagree would only be tallied inline at the comment for reference -- there's no persistent reward for simply saying things many people agree with, nor penalty for saying unpopular things.

Then, it's OK for downvotes to serve the role this "more flags" idea does. Downvotes then unambiguously mean: uncivil; frivolous; factually wrong; repetitive; unwanted. (And, moderators could focus on highly-downvoted comments as much as 'flags'.)

(Previous comment, with more backlinks, on this idea: http://news.ycombinator.com/item?id=721853 )

Another related idea: never show a comment at 'maximum negative' score to a person who hasn't yet voted. Show it as '-3' instead (no matter how many net downvotes it's received). Then, there's always a motivation for adding your own independent judgement. (Once they vote, the true-but-truncated-to-range score can be shown.)

The poster could see the 'true' score, so they know if they've really touched a nerve.
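The display rule described above is simple to state precisely; a minimal sketch (the function name and `floor` default are assumptions):

```python
def displayed_score(true_score, viewer_has_voted, floor=-3):
    """Never reveal to a non-voter that a comment has hit maximum negative
    score: clamp the displayed value at `floor` so there's still a reason
    to add your own independent judgement. Voters (and the poster) see
    the true score."""
    if viewer_has_voted:
        return true_score
    return max(true_score, floor)
```

So a comment at a true -9 shows as -3 to anyone who hasn't voted yet.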


I dislike agree/disagree voting overall. We all dislike groupthink, and we all appreciate constructive informative posts, and we all like differing viewpoints. So why have a moderation system that records or values agreement/disagreement at all?

I think gojomo is 100% right that people should be rewarded for constructive informative civil contributions, rather than because the moderator happens to agree with you.

An agree/disagree arrow could be left simply as a honeypot, so that people don't use the other descriptive ratings as a substitute for agreement/disagreement.


My theory is that a separate agree/disagree fights groupthink because it moves popularity/agreement out of the reward/penalty dimension.

People want to register their opinion -- and sometimes that opinion is just yes/no. It's good for a site to offer a low-effort, low-visual-pollution way to capture that -- single click votes/favorites/likes work well for that. (It's better than lots of 'me too' or 'I disagree' or 'my thoughts exactly' micro-comments.)

But, if those signals are mixed with a sense of righteousness/transgression -- which is inevitable with leaderboards and display rules whereby 'high-rated' comments move up, and 'low-rated' comments fade from view -- then people may withhold unpopular but important viewpoints, or be tempted to race to be the first to post a banal but crowd-pleasing viewpoint.

My theory could be wrong. Some people might care so much about agreement that they obsess over that score, and knowing exactly how many people agree/disagree would then cause even more synchronization-of-publicly-expressed-views. But I think this crowd is sophisticated enough to draw (and make use of) the distinction between a bad comment and a controversial minority viewpoint.

And I would like to be in a place where someone who advances a controversial minority viewpoint, but does so in an articulate, thought-provoking, civil manner, could be on the leaderboard even if every individual post of theirs has more net disagreement than agreement.

(Which brings up a related point: it would be interesting to report an agree/disagree axis as two totals, not just the net difference. '101 agree, 100 disagree' is more meaningful than a net score of '+1'. A sparkline bar graph or tick-series could work really well for this, though it might not need to appear on every comment, or appear until requested.)


Ranking +101/-100 comments higher than +1/-0 comments is something that they implemented at reddit a couple months ago. They use an algorithm called a "Wilson Score confidence interval for a Bernoulli parameter."

Here are some relevant links:

http://blog.reddit.com/2009/10/reddits-new-comment-sorting-s...

http://www.evanmiller.org/how-not-to-sort-by-average-rating....

http://blog.linkibol.com/post/How-to-Build-a-Popularity-Algo...

Digg also has a way to rank comments by "controversy", so the comments that are most contested by up and down votes are placed at the top: http://about.digg.com/blog/new-comments-system-released
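The Wilson lower bound mentioned above is a short formula; a sketch in Python (this is the standard formula from the Evan Miller article linked above, not reddit's actual code):

```python
from math import sqrt

def wilson_lower_bound(ups, downs, z=1.96):
    """Lower bound of the Wilson score confidence interval for the true
    upvote fraction, at roughly 95% confidence for z=1.96. Items with
    many votes and a slim positive margin outrank items with one vote."""
    n = ups + downs
    if n == 0:
        return 0.0
    phat = ups / n
    return ((phat + z * z / (2 * n)
             - z * sqrt((phat * (1 - phat) + z * z / (4 * n)) / n))
            / (1 + z * z / n))
```

With this sort, a +101/-100 comment (about 0.43) ranks above a lone +1/-0 comment (about 0.21), which is exactly the behavior the parent comment describes.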


The problem is that not everyone thinks as you do, and will vote based on agreement/disagreement. We need a clear separation of these types of votes.


Agreed. I saw I got minimally downvoted for what I thought was a comment that really added to the conversation. It's disheartening to be downvoted simply because somebody doesn't agree with you.


That happens all the time. Don't fret about it, just ignore it and apply your votes the way you think things should work.

Over time those few downvotes are almost always balanced by the rest of HN.

Except for 'controversial' subjects, where anything can happen.

If you want to avoid being downvoted completely the easiest way is to avoid such subjects, specifically anything mentioning Ayn Rand, the glory of Capitalism, religion and emacs ;)


as opposed to the groupthink that comes with the standard one dimensional moderation scheme?


How about the voting staying as it is, but a facebook-esque "Like" thing put on the comments. This way, people can "like" a comment without having to resort to voting. Their interpretation of voting stays the same.

People are familiar with the "like" concept, so such a move would be intuitive to a lot of users. It would also ease the difficulty in understanding the voting system.


How would such a 'like' be different from an upvote?


A "like" represents agreement. An upvote represents an informative contribution that should be highlighted...in my opinion.


I don't think of 'like' and 'agree' as synonyms -- in fact to me 'like' more closely means 'informative contribution that should be highlighted' than 'agree'. So I support the idea of 1-click expressions but think they need to be labeled/presented carefully to encourage desired meanings.


i also agree with this idea, as i also brought it up about 6 months ago (relevant link: http://news.ycombinator.com/item?id=651295).


If you're going to have flagging options on comments for uncivil or frivolous comments, you should probably raise the moderation floor to -1 or 0, because it doesn't seem fair to let people be downvoted below that point for disagreement alone.


Yes, that is a related problem. The floor used to be -8, but as the number of users increased (and maybe the median user got meaner) there started to be lots of comparatively earnest posts that got downvoted to -8. So I raised the cutoff to -4, but that still seems unsatisfactory. Voting still feels like it's being done by a mob.

I tried not displaying points on comments a while ago, in order to solve this problem, but users complained that this made HN harder to read because they couldn't pick out the good comments. I only gave it two days' trial though. Maybe I should have given it more time. Or maybe there is some other solution.

Anyone have any ideas?


The feedback of the voting system is still essential, it's just that you can't really trust the voters anymore. It's gotten ridiculous to watch the votes swing from extreme to extreme on one comment based on the follow-up discussion where people openly advocate for more or less points for the OP.

The only way to fix this I can see is either some form of meta-moderation where you hold people accountable for their votes (ala slashdot), or the system I'd personally rather see, where rather than a full-democracy the users pick whose votes they see on comments. I'd have my pool of 20-30 users whom I trust and I could see their scores vs the throng's.

Just as you hand-picked some users as moderators who you feel represent what you want HN to be, let us pick who we want to serve as our content filters.


I agree with you about the voting swings. I think that, more than the extreme numbers, may be why voting is starting to seem such a mob thing lately.

It's a very interesting idea to have users see only the votes of other users they choose. One problem though is that it could easily lead to leaking who voted for what, which would make a lot of users (including me) uncomfortable. E.g. your pool is your 7 coworkers, and user x. One day you know all your coworkers are on a plane, and that you're therefore seeing user x's votes. Another problem is that it could be a lot more expensive to generate pages.


You could fix the deanonymization problem this way: when you pick a trusted user, you can see not only his votes, but all his trusted users' votes, too. Maybe weigh the votes differently depending on how "distant" they are from you.
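A sketch of that distance-weighting idea (Python; the data structures, decay factor, and depth limit are all assumptions for illustration):

```python
from collections import deque

def weighted_score(viewer, trust, votes, decay=0.5, max_depth=3):
    """Score a comment using only votes reachable through the viewer's
    trust graph, weighting each vote by decay**distance. `trust` maps
    user -> list of directly trusted users; `votes` maps user -> +1/-1
    vote on the comment. Unreachable voters contribute nothing, which
    also blurs exactly whose votes you are seeing."""
    dist = {viewer: 0}
    queue = deque([viewer])
    while queue:  # breadth-first search out to max_depth
        u = queue.popleft()
        if dist[u] >= max_depth:
            continue
        for v in trust.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return sum(decay ** dist[u] * votes[u]
               for u in votes if u in dist and u != viewer)
```

For example, with `trust = {'me': ['a'], 'a': ['b']}` a vote from `a` counts 0.5 and a vote from `b` counts 0.25, so the viewer can't tell which single trusted user cast which vote.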


This is a great idea.

My original thinking was to suggest enforcing a minimum number of trusted users and then reporting a weighted average of your trusted users' ratings plus the mass's rating, but your solution is conceptually nicer, since it adds the same noise to the vote while extending the idea of valuing a particular user's judgement.

The big technical issue will be computing these "trusted ratings" for every comment. The trust graph could be done asynchronously so that wouldn't be an issue.


Hmm. Good idea.


During the experiment of not displaying points on comments, was the voting less volatile, with smaller voting swings? If it was, then hiding the points until a few minutes or hours after the comment is posted (i.e. until the volatility becomes reasonable) could replicate the effect. If not, then maybe the voting swings are inherent as a statistical property of any sufficiently large community, in which case you may want to display a confidence interval of the voting score instead of the score itself.


A time-delay on down-votes perhaps? More of a psychological effect than a technical solution.

Not showing any points above 5 (or 10 or whatever), while still having an internal counter might also help.

A trust system maybe. "I trust this user. So anything he put down as interesting I want marked as interesting."


It's a bandwagon problem. When people see a large number (positive or negative), they want to push it larger. To snap people out of this heuristic and make them actually judge the comment, I'd suggest making people do a small amount of "work" if the number is already large in either direction (and they're trying to push it larger.) Perhaps make them enter a CAPTCHA of lg(abs(score)) characters?
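The proposed friction rule is easy to state as code; a sketch (Python; the function name and the "no CAPTCHA below |score| = 2" cutoff are assumptions):

```python
from math import log2

def captcha_length(score_before, direction):
    """Number of CAPTCHA characters (lg of the absolute score, as proposed
    above) a voter must solve before pushing an already-large score further
    in the same direction. Returns 0 when no CAPTCHA is needed, i.e. for
    votes against the trend or on near-zero scores."""
    reinforcing = score_before != 0 and (score_before > 0) == (direction > 0)
    if not reinforcing or abs(score_before) < 2:
        return 0
    return int(log2(abs(score_before)))
```

So upvoting a +8 comment costs a 3-character CAPTCHA, while downvoting it stays free.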


You could try not acknowledging downvotes on a new item until the number of downvotes reaches a threshold. For example, the first downvote on an item would have no impact on the score or the placement of the item on the page. If the threshold is 2, then the item would be affected after 2 downvotes. At that point the normal result of downvoting would occur. This would require two people to independently interpret the quality of an item. (Though I am only considering the case of an item being downvoted.)

Another idea I have is to require a reason for the downvote. Reasons could include: disagreement, uncivil, or offtopic. The score could be affected according to the reason. This could make downvoting more thoughtful and scores could be impacted less for disagreement.

Last idea: instead of hiding all scores, only hide the score for items with a score less than or equal to 1. These scores have less relevance, and I think people will be less critical of comments if they don't see a negative score attached.
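A minimal sketch combining the first two ideas above, the downvote threshold and reason-weighted downvotes (the reason names and weights are illustrative assumptions only):

```python
# Downvotes with reasons; "disagreement" costs nothing, per the proposal
# that scores be impacted less (here, not at all) for mere disagreement.
REASON_WEIGHT = {'disagreement': 0, 'offtopic': -1, 'uncivil': -2}
THRESHOLD = 2  # independent downvotes required before any take effect

def effective_score(base, downvotes):
    """base: score from upvotes; downvotes: list of reason strings,
    one per downvote. Below THRESHOLD, downvotes are not acknowledged."""
    if len(downvotes) < THRESHOLD:
        return base
    return base + sum(REASON_WEIGHT.get(r, -1) for r in downvotes)
```

A single "uncivil" downvote leaves a +5 comment untouched; two of them (or one plus "offtopic") start counting.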


Allow people to turn on seeing the points if they want to, but give more weight to votes by people who aren't choosing to see the points?


Do comments follow the same list algorithm as news posts, using time & points? This seems to make sense for news which is contextualized with time, but comments... why not remove the point display on them, but have them still listed by highest (invisible) point totals.

People would really need to evaluate each comment accordingly, knowing that each vote could potentially change a parent's position on the page. This could entice people to use votes to steer the conversation towards the intellectually gratifying comments they want further discussed, as opposed to just things they casually 'agree' with.


Voting still feels like it's being done by a mob.

There will always be a majority opinion, which gets expressed through votes. As HN gets above a certain threshold of active users, the majority opinion will be indistinguishable from 'mob voting', unless users can be trained to not vote when the vote is already high in 'their' direction.

I don't think a 'honey pot' agree/disagree voting axis will work, unless it is visible to everyone. Otherwise, these users will express their opinion with the current up/down votes.


Have you thought about showing the aggregate number of flags too? This might be interesting if you have multiple types of flags.

Another possibility is to make the flag threshold higher but make the flags non-anonymous. (That might create a hostile environment, though, e.g. tit-for-tat retaliation, but I would hope this community is mature enough for a frank discussion.)


Reddit's solution of hiding a thread once the top post went below a certain threshold seems to work pretty well. Then you have to go through two actions to downvote.


Comment flags are a great idea - there's a good chance they would cut down on "why am I being downvoted" types of posts, which would in turn reduce noise.

Are you thinking of adding more flags to stories too?


I should probably add a way to flag a story specifically as a dupe, and maybe also for having had its title reworded too egregiously. Those are common problems people currently have to use comments for. Can anyone think of others?


I pretty much only flag posts for:

dupe, terrible-title, spam-site, not-even-arguably-hacker-related


Flag as meta? A lot of people don't care for the meta-discussion.


I would really like to see this. I'm inclined to say it shouldn't have any relation to karma or killing, because there's nothing inherently wrong with meta-discussion. But the front page would be infinitely more pleasant to me if I could see "100 comments (90 meta)" instead of guessing whether there were really 100 things to say about the article, or whether after the first 10 someone just hit a nerve with a downvote or a pointless "this isn't HN" snark and started something big and ugly.

I'd go so far as including a "meta" checkbox next to the reply button, defaulting to the parent's value, which it would be considered polite to set truthfully, but maybe that's just me.


Along the same lines, I'd like to see a throttle that limits how many stories a user can submit from the same domain.

There are a handful of users that seem to only submit from 1 or 2 sites that (I presume) they are affiliated with. This is just more noise that makes skimming the /new page and promoting good stories less efficient than it could be.

Regarding titles... What about a title/subtitle option? Keep whatever title the actual article has, and a slightly smaller subtitle where the submitter can editorialize.


I dunno. I tend to submit from the same domains simply because there aren't very many people submitting from those domains, and they can have interesting articles. On the other hand, I don't submit very often, so maybe some sort of statistical time check would work.


Looking at your submissions, there seems to be enough time between submissions and variety that I don't think you'd get "caught" by what I am proposing.


I wasn't just thinking of me.


I flag occasionally for spam or off-topic HN-wise. I have been tempted for those that hijack other stories without much of anything in the way of commentary, but HN'ers tend to quickly post a link to the original.


I agree comments are a great idea. But I don't think all comments are necessarily negative. I can think of times where I wanted to comment directly to the poster privately. So maybe there is an even more generalized solution for providing feedback/responses.

Flags are a good idea too. The HN feature set is still minimal and polished compared to other similar sites. So the community is really sensitive to any changes. From seeing previous feature experiments I think the only way to really see if it works is to implement it and get feedback. I remember some previous features that seemed like they would be great(!) when they were written up and first implemented. Then after a few hours or days were frustrating. Like the orange vs greys karma change.


I think this is a good idea and long overdue.

Ideally the system will train new users. It has been difficult in the past for me to figure out exactly what downvotes meant.

Over the last few months I'm seeing a rise in ad-hominem attacks. No matter what the subject of the article, technology, startups, history, whatever -- it seems like within minutes somebody has called the author's reputation or the news source into question. Sometimes the information is pertinent, but almost always it's just an effort to look superior and/or take easy shots at people. If I remember correctly, we've even had posts about "check out my startup" where the founders were trashed, not to mention the nastiness with Dennis a while back.

I don't think downvoting is sinking in with the people who are doing this. As much as most people don't understand ad-hominem attacks, it'd still be good to have that flag, because it's the same old discussion every time.

EDIT: I meant Dustin, not Dennis. Sorry about that Dustin.


Yeah, I've noticed a decline in the tone of comments too. The site seems to be getting nastier. I occasionally find myself thinking lately that I just don't want to be here anymore. But I am determined not to let HN go down without a fight.

If anyone has any more general suggestions for solving the nastiness problem, besides these new types of flag, I'm all ears.


I occasionally find myself thinking lately that I just don't want to be here anymore.

I had no idea. AFAIC, this is the best site on the internet. It is the home many of us have been seeking for years. I talk about things here that I have no one to discuss in person, and I imagine there are many others like me.

I have also noticed that things run in cycles and we seem to be in a bit of a trough lately. My usual response has been either to submit lots of stuff I really like, or just quit and come back the next day. That often works wonders.

Whether we like it or not, it is the nature of this site for people to show others how smart they are. That's just the way we are and this is the perfect place to "show off" (for a lot of us, it may be the only place). Many of us never really fit in, didn't play sports, weren't "cool", and didn't get noticed by the ladies as much as we would have liked. But we had passion for other things, exactly the things discussed here at hn.

So the nerdosterone runs heavy at times. Accompanied unfortunately by occasional meanness.

But I am determined not to let HN go down without a fight.

You won't need a fight. Just a little of what we do best: some good old fashioned problem solving.

A couple of ideas:

- Stop the meanness before it hits the database with a front-end "meanness" filter. When you click "reply", scan the text for cuss words and frequent phrases and give a warning: "Your comment has triggered the meanness filter because of the phrase 'xxx'. Please take this opportunity to reword it. Say what you want, but please be civil." They could modify it or submit the original anyway, but would have to wait a "cooloff" of x seconds or minutes. This could easily be gamed, but that's not the point. It would actively establish the expected tone, especially for new people.

- Weigh votes by karma. The top 100 get 3 points per vote. The next 1000 get 2 points per vote. Everyone else gets the usual one point per vote. The idea is that those who have the most invested in the community would be more able to slow down out-of-control groupthink.
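The front-end filter could be sketched roughly like this (the word list, warning text, and function names are placeholders for illustration, not anything HN actually does):

```python
# Hypothetical front-end "meanness" filter: scan a comment for flagged
# phrases before it is saved, and ask the commenter to reconsider.

FLAGGED_PHRASES = ["idiot", "moron", "shut up"]  # placeholder word list


def meanness_check(comment: str) -> list:
    """Return the flagged phrases found in the comment, if any."""
    lowered = comment.lower()
    return [p for p in FLAGGED_PHRASES if p in lowered]


def submit(comment: str) -> str:
    """Warn instead of posting when the filter triggers; the user could
    still reword, or resubmit the original after a cooloff period."""
    hits = meanness_check(comment)
    if hits:
        return ('Your comment has triggered the meanness filter because of '
                'the phrase "%s". Please take this opportunity to reword it.'
                % hits[0])
    return "posted"
```

As the comment notes, this is trivially gamed; the point is to establish the expected tone, not to be airtight.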


Unfortunately I'm pretty sure mean-spiritedness depends too much on semantics to catch with a filter. Dumb jerks are not the problem. The type who curse a lot get downmodded into oblivion and eventually banned. The problem is smart jerks. And unfortunately these are not uncommon in our world.


Maybe a 'jerk' flag? Something that wouldn't affect individual comments, but would aggregate. A user with a high number of jerk flags could be called into scrutiny and possibly be dealt with if he was abusing the site.

Another thought... What if we could filter out users whose comments we didn't want to read or felt were offensive or a waste of time? This would reduce the noise for individual users without anything so draconian or subjective as flagging plus moderation.

Another less pleasant thought is more aggressive banning policies. I remember being part of older BB-esque forums that managed to maintain strong communities by enlisting members as moderators and enforcing strict and clearly promulgated discussion policies. It worked because it was transparent and because the enforcers were active members of the community. They had reputations and an interest in keeping the quality of discussions high, and were bound by the same rules that applied to everyone else. If need be, their rulings could be appealed if they were abusing their authority. I always felt that that system worked.


Create an expanded group of users whose judgement you trust. If they flag a comment for being uncivil, then everyone who up-voted the comment has their voting power reduced (from then on, their up-votes would count less than everyone else's). I think this would help with the problem of highly up-voted, mean comments.


Generally, I agree, but I do find, for example, that as programming.reddit.com got worse the swearing went up, and as HN has gotten worse the swearing has gone up. Now the swearing is not always mean-spirited, and probably the mean-spirited swearing gets down-modded on HN. But swearing could just trigger a filter that says "swear words are used to get attention in important situations. Do you really think this is that important?".

Even if you can find some effective filters to use, most may eventually be worked around by commenters as you are implying. But they could give you more breathing room to find long-term solutions, and may help in that effort by increasing your awareness of what exactly the problems are.


AFAIC, this is the best site on the internet. It is the home many of us have been seeking for years. I talk about things here that I have no one to talk to in person and I imagine there are many others like me.

Hear. Hear. HN is also my favorite general access site. (I am a member of an organization that has members-only email lists that are very, very good, but I still find things here that I share with them.)

The guidelines mention "anything that gratifies one's intellectual curiosity" as being on topic here, and I am deeply curious, having been an online moderator in various places since 1992, about how to keep an open online community civil and informative. My friendly advice to pg is to stay in the fight and don't give up. It may take a heavier hand in moderation of both comments and persons, aided by some of the crowdsourcing technical means being discussed in this thread, but HN has participants who cherish what HN is and who will be glad to help.


Would you consider forking the entire site into multiple HNs? e.g. where one site encourages civility and another encourages nastiness?

Along a similar idea, I once started work (and have since abandoned it) on a social news site where the idea was to auto-partition all the users into Dunbar's Number-sized buckets based on their commenting style. You could still see everything if you wanted, but you'd be placed into a "swarm" of people whom you'd get to know better than the general population. So assholes would end up in asshole swarms, spammers in spam swarms, etc.


What did it mean in practice to be in a swarm? What could you do with people in your swarm that you couldn't do with people in other swarms?


The first year or two on HN, when the community was smaller and tighter knit, I found that when I recognized other usernames, I tended to feel an emotional attachment to them. The userbase was smaller, and thus it was easier to say, "Oh, it's nostrademons. I like him. I'll reply to his comment". I felt that familiarity facilitated a sense of cordiality and camaraderie in our discussions. I think that was largely a function of group size.

I've lost that sense of camaraderie here in the past year. I find myself contributing less to the discussions, and I certainly don't submit nearly as much as I used to. I still lurk a lot, but I don't recognize a lot of the current users and I have no sense of personal attachment to most people here aside from the friends I've made over the last couple of years. I imagine it's the same with a lot of other users.

If users could somehow be herded or segmented into smaller groups, the sense of familiarity, community, and camaraderie might be able to continue in the discussions while the site scales. I don't know if it's necessarily a function of what limits, permissions, or abilities would be placed on individuals within the smaller groups, however. In meatspace, an individual's actions and interactions change dramatically depending on group size. I suspect that similar forces work online, too.

The question is, how do you segment the community into smaller groups or swarms? I actually think that this can be achieved using recommendations. I have a hypothesis that I want to test. My hypothesis is that if reddit had succeeded in their recommendation engine, there would not have been a need for Hacker News. Why? Because those of us early reddit readers who were interested in hacking and founding would have remained somewhat clustered together automatically via article recommendations. We wouldn't have gotten irritated at the trolls and left. We would still be reading and commenting together on similar articles. All the while, the site could grow, and the users that were heavily interested in politics, religion/atheism, zombies, etc... would have been ushered together via common recommendations as well. I'm looking forward to testing this hypothesis with http://newsley.com if and when I can get enough users to start doing recommendations.


> when I recognized other usernames, I tended to feel an emotional attachment to them

I think this is the source of all politeness/nastiness (both here on HN and in real life). It's hard to be gratuitously rude to someone you know personally... it's hard to even be passive aggressive.

Now that HN is bigger, it is by default more anonymous. It is less a group of friends sharing links as it is a public forum. Where insulting someone used to be equivalent to saying it to their face, it's now more akin to heckling from the stands.

I too have been coming here less and less. I came to HN for the discussion... the links were only secondary to me. Often the best discussions came out of mediocre links. But now I feel like all I see is catfights between minor programming stars, people debating whether relational databases are really dead, and the latest scandal involving rackspace/twitter/zygna/facebook. All of that stuff is destructive. It's all just churning the muck of the internet... and more importantly, doesn't make you a better person for having read it.

We used to laugh whenever the front page would fill with Erlang articles. Even when things were 'that bad' ... they weren't really that bad because the discussion was still insightful. I'm really not trying to sound like one of those 'back in the good old days' people, but I'm starting to become skeptical about the value of coming here.


On another thread, I thought it might be a good idea to keep around who voted for what. You use a recommendations system (perhaps from the Netflix contest) to determine how much you will like other comments, and then filter based on that. This might result in something similar to swarms; the idea is that your "swarm," the group of people where you like their comments and you like theirs, is the only source of intelligent conversation.
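A toy version of that filtering idea (the data shapes and scoring are invented; a real recommender, Netflix-contest style, would use something more sophisticated like matrix factorization): score each user by how often their past votes agreed with yours, then rank comments by the scores of the people who upvoted them.

```python
# Hypothetical vote-agreement scoring. votes[user][item] is +1 for an
# upvote or -1 for a downvote.

def agreement(votes, me, other):
    """Agreement in [-1, 1] between two users over their shared votes;
    0 when they have never voted on the same item."""
    shared = set(votes.get(me, {})) & set(votes.get(other, {}))
    if not shared:
        return 0.0
    same = sum(1 for i in shared if votes[me][i] == votes[other][i])
    return (2 * same - len(shared)) / len(shared)


def comment_score(votes, me, upvoters):
    """Weight a comment by how much 'me' tends to agree with its upvoters;
    comments scoring high come from my implicit 'swarm'."""
    return sum(agreement(votes, me, u) for u in upvoters)
```

Under this scheme, comments upvoted mostly by people who vote opposite to you sink in your personal view, which is close to the swarm effect described above.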


We intended to make online communities (swarms) work more like real-life communities.

A user could belong to multiple swarms. People who don't feel like they belong in a swarm could leave (or be forcibly voted out by the group) and join another. When swarms get too big (above, say 150-200) they could splinter into smaller ones.

In practice, each swarm would have their own stories and comment threads. e.g. Swarm 'A' hates TechCrunch stories and always votes them down, Swarm 'B' feels that the people in Swarm 'A' are uncivil, Swarm 'C' just loves to make jokes about everything, Swarm 'D' is extremely priggish and has very strict rules about what types of stories should be posted.

A user could choose to experience the raw unfiltered site, or just stick to swarms where he or she mostly knows everyone else. We intended to use voting and tagging statistics to suggest swarms in which an unattached user might feel like they belong.

Dunbar's Number (http://en.wikipedia.org/wiki/Dunbar%27s_number) was the inspiration.


The first thing to determine: are there "nasty users", or simply nasty posts, which could be posted by otherwise-kind people?

Treating nasty users is a simple and well-known problem: find out the user's disposition before you release their commenting privilege. Usually you'd put all the user's writings into a moderation queue, and only post them up (and retroactively make those moderated posts appear) if their average moderation score exceeded some threshold. The moderation could be done by users, of course; in fact, it could simply be part of the normal comment-thread interface, with an icon beside the post to represent that voting on it affects whether the user (and their posts) stay or go.
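A minimal sketch of that moderation queue (the threshold, class name, and data shapes are invented for illustration):

```python
# Hypothetical moderation queue: a new user's posts stay hidden until the
# average moderation score across their queued posts clears a threshold,
# at which point the posts would appear retroactively.

THRESHOLD = 0.5  # placeholder release threshold


class PendingUser:
    def __init__(self):
        self.scores = []  # one crowd-moderation score per queued post

    def add_post_score(self, score):
        self.scores.append(score)

    def released(self):
        """True once the average moderation score exceeds the threshold."""
        if not self.scores:
            return False
        return sum(self.scores) / len(self.scores) > THRESHOLD
```

The moderation votes themselves could come from the normal comment-thread interface, as suggested, so no separate review workflow is needed.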

Treating nasty posts by good users might be similar to a problem of catching illicit activity on a bank account—simply asking the user "Did you really mean to post this? It doesn't seem like something you'd say." It might also be similar to a social bookmarking matchmaking service—except that instead of putting people with high pairwise right-handed Whuffie together, you'd be keeping people with low pairwise left-handed Whuffie apart, by somehow making it harder for them to confront one another.


I think a "Did you mean this?" flag would be useful. Sometimes I find I make a post that could use a little rewording that would make it clearer or less confrontational. I am prone to calling OP's crap (not comments). Perhaps I should apply the moose turd pie principle: Crap...but tasty.


Good thought. I bet many of the more nasty-prone posters are less likely to revisit their posts to see how they were received, and to consider editing or adding to their posts if flagged as too snarky (or whatever).

Having a way to offer some means of adjustment would help distinguish between the accidental and willful nastiness.


YES


It would be good to have hide/show thread buttons like on reddit. That way if you see a useless thread carrying on, you can just click [-] and move on to the next comment. This isn't a direct solution to the problem at hand, really, but I think it would help, as comment sections are getting longer and a bit harder to follow and sift for the good stuff.


Show/hide functionality would be awesome.

I actually strongly considered peeking at the html and implementing the necessary JavaScript myself - I wonder if a contribution of this type would be accepted?


If I were creating my own ratings for my own personal view of HN, I'd like to discount votes of people who vote opposite of me: if I downvote a comment that they upvote, then I'd prefer to take their other opinions less into account.

Or, since I'm not actually all that active on HN, I'd prefer to be seeing a view that weighted more heavily votes of people who were not voting opposite to those of people whose opinion I respect, such as pg. And probably some others that I'd be aware of, if I were on HN more.

For me to take the next step and suggest that this then becomes the default view that everyone sees seems strangely egotistical. Not for me, I mean, but that you might seem egotistical if you were giving more weight to votes from people who voted similarly to how you voted. I'd probably like the site more if you did that though :)


That is a very interesting way to use voting to customise your viewpoint. Perhaps click-throughs to articles could also be a factor: people who click through to the same articles as you would have more effect on the pages you see.

You'd end up with a graph representation where each user is a node and two nodes are connected if they share a similar voting relationship.

Everything that you can vote on keeps a list of the people who upvoted it vs. those who downvoted it. Every day the lists of voters are cleared (to reduce load), and the people who voted the same way have their relationships to each other strengthened. The submitter of the item you voted on has their relationship with each voter increased by a larger amount still.

You then take your relationship to the submitter or commenter into account when comments are ordered for you, perhaps via the shades of grey styling HN has at present.
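A toy version of that daily relationship pass (the increments and data shapes are arbitrary placeholders):

```python
from collections import defaultdict
from itertools import combinations

# relationship[(a, b)] accumulates affinity between a pair of users,
# keyed in sorted order so the edge is undirected.
relationship = defaultdict(float)


def key(a, b):
    return tuple(sorted((a, b)))


def daily_pass(items):
    """items: list of (submitter, upvoters) gathered since the last pass.
    Strengthen edges between co-voters, and more strongly between each
    voter and the submitter; per-item vote lists can then be discarded
    to reduce load."""
    for submitter, upvoters in items:
        for a, b in combinations(upvoters, 2):
            relationship[key(a, b)] += 1.0       # co-voters
        for v in upvoters:
            relationship[key(submitter, v)] += 2.0  # voter-submitter, larger
```

Comment ordering (or the shade-of-grey styling) could then read `relationship[key(viewer, commenter)]` as a weight.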


Rather than resort to more sophistication in terms of voting and/or flagging systems, I am inclined to think the best hack would be a social one: Make the link to the guidelines more prominent. Why not place it in the bar at the top of every page, instead of in a tiny font at the bottom?


Indeed, why not make bopping by the guidelines a threshold requirement for a first comment, and put the link on every comment submission form?


Honestly, I think people don't realize that nastiness makes their comment less effective. Most people don't take you seriously if you insult them, even if the insult is followed by a valid point.

How about something like this.... Create a set of general rules to follow for making an effective/polite post. Things like "calling someone a name brings more attention to the insult than your point." When you have a dozen of these or so, put a random one next to the "reply" button. This way people are constantly reminded of how to create polite posts.


Another thought if you're still thinking about this and monitoring this discussion - give people a limited number of "show stopper" flags intended to draw the attention of moderators. They only get say 1 or 2 over the course of a month - much like dating sites to indicate importance (e.g. plentyoffish with their 2 roses).

These could be used either to reward or punish depending on how they're designed but people would use them sparingly if they're given after a certain criteria is reached (e.g. age of account, karma points, etc.). The purpose would be to cut down the noise that moderators/you have to deal with. Thus there could even be karma points and a red flag/green flag score.

The red flags might only be visible when you click in to their profile for example so as not to draw attention to their presumably highly offensive postings while the green flags are visible highlighting extraordinary comments.

PS I hope you figure out a solution to this as I'm sure I'm not the only one who was alarmed by your comment "I occasionally find myself thinking lately that I just don't want to be here anymore."


Instead of trying to solve the problem with more software, have you thought about trying a people approach?

I know there are multiple moderators around; maybe it would be worthwhile to say who they are - and then when someone sees a race to the right margin, they can step in and gently remind that the thread is getting out of hand. I think the problems we have here are that people care too much, I don't think that anyone intends to be nasty.

While some comment threads have gotten a little ugly; I still think this place is head and shoulders over just about everywhere for community - personally I'm thankful that you've put this place together, ugly threads and all.

It seems like HN is transitioning from a community to a culture, and that's always going to be painful. It's easy to notice the bad things going on, but there have been a ton of really interesting things going on as well, it's just not as easy to pick those out because while everyone knows when someone is being an asshole, not everyone thinks the same things are interesting.


I agree that it is definitely a nastiness problem, so if you were going to add one of these two types of flags, I'd recommend 'uncivil' over 'frivolous'.

I think the right consequence for someone who accumulates too many uncivil flags is to have the server start to throttle their IP. This allows it to be a gradual, linear and temporary disincentive to poor behavior, rather than a binary response. Just dropping the connection is also de-escalating relative to alternatives like sending a message only to that user. I could certainly see people absolutely exploding when they see that they've been flagged as uncivil. Add a message on top of it, and things will definitely get out of hand at least some of the time. But if the connection starts to just get dropped, they maybe keep refreshing, maybe go somewhere else.
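One way to make the throttle gradual, linear, and temporary as described (all constants and names here are invented): weight each recent uncivil flag by its age, letting it decay away, and delay responses in proportion.

```python
import time

DELAY_PER_FLAG = 2.0   # seconds of delay per fresh flag (placeholder)
HALF_LIFE = 86400.0    # a flag's weight halves every day (placeholder)


def effective_flags(flag_times, now):
    """Sum of uncivil flags, each exponentially decayed by its age, so the
    penalty is temporary rather than permanent."""
    return sum(0.5 ** ((now - t) / HALF_LIFE) for t in flag_times)


def throttle_delay(flag_times, now=None):
    """How long to stall this user's next request, linear in their
    decayed flag count."""
    now = time.time() if now is None else now
    return DELAY_PER_FLAG * effective_flags(flag_times, now)
```

A well-behaved stretch naturally shrinks the delay back toward zero, matching the "gradual, linear and temporary" goal.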


Democracy does not work. A random fluctuation in tone will be picked up by other users and the entire tone will move in that direction. Even the flagging would go the same way. You have to create a sort of elitist system (like wikipedia), where the flagging is controlled by the model citizens, and people who are not flagged for a while then become model citizens, further propagating the model citizen mode. This way, a large change in the demographic of the site would not change the demographics of the elite, and after a while, the elite would swing things back to go their way, as they have more power than the masses.


More moderation (sucks, I know :(). I've seen enough communities to know you can't really code around this sort of thing.

On the other hand I do think there are a minority of people using this more aggressive tone (or rather causing it to spread). So that makes it a little easier to deal with - having a jerk flag is a good idea; then you could start to limit the site for those with a high jerk rating (i.e. longer cool-off times, delays between their comments, and, when they submit a comment, a prompt asking "Are you sure you want to post this?" (hint, hint)).


One thing I believe holds well is that nasty comments are often short. Smart assholes often go for clever comments that skirt the edge of useful and obnoxious.

So how about a karma-moderated length minimum. New commenters (less than ~300 karma, maybe) have to post several sentences, which has some chance of disincentivizing quick, brutal snark.

Longer comments can still be nasty of course, but it's going to require a lot more involvement for the user while making them a bigger target for moderators.


Most nasty comments seem to go unpunished. If someone says something clearly nasty that doesn't add to the conversation, why don't you personally kill it?

Lots of communities have tried to push moderation into the hands of users via flagging, and it never seems to work. I think the best and quickest solution to the problem is to use trusted but invisible moderators.


One reason I don't kill mean comments is that I don't want to be censoring anyone. If someone says something super nasty I might ban them, so there at least won't be any more, but I rarely kill the comment I banned them for unless it is spam or a very clear troll.

There's another more practical reason it's hard to kill mean comments: they often have replies by the time I see them. It would mess up the conversation if they got taken out then.


Consider having a penalty box for a certain period of time once a flag threshold is reached - e.g. a few hours (where the offense is clearly marked, visible to the user alone, with a link to the TOS) to shame someone into being more civil. If it's decorum that you're trying to change, then I'd keep the flag simple - e.g. an "ad hominem" flag. Part of the attraction for me to HN is its elegance and simplicity in great and relevant stories percolating to the top in addition to the unique and remarkable community.

Once an individual has a given number of penalty boxes, a moderator considers whether to ban them. The danger of doing stuff like this is that subjects that interest this community inevitably also spill over into the political realm and as others have pointed out the use of up votes and downvotes are ambiguous and further, open to abuse.

If you want to publicly shame someone, then once a member is on probation after a certain number of penalties/flags, add a dot or something relatively benign in color to their name.

Edit - I'd add that if it were up to me, I'd try to control personal attacks above all else given the community is pretty good at calling people out on comments that add little to no value.


By all means please leave the nasty comments in. If nothing else, they should serve as an object lesson to other posters.

With a huge crowd, the system needs to train the users by providing cues and publicly "punishing" (through greyed-out text and negative scores) ill behavior. The more people you have, the more important this feedback system becomes.


Except that nasty comments are sometimes voted up very highly, which doesn't help anyone.


One reason I don't kill mean comments is that I don't want to be censoring anyone.

Then you need to bring onto your team some group of curators/moderators whose number 1 task is to keep the site free of nastiness on a nonpartisan, equal opportunity basis.


I agree that this is possibly the best general solution, and perhaps even the only solution, but it is certainly not "quick" when there are thousands of comments and only one moderator.

The only way to keep human moderation workable is to keep the work of each individual moderator small and scale out tree-style. Perhaps that will be the ultimate shape of the Web: a set of top level news sites, aggregating the best commentary from a larger set of second-tier news sites, aggregating the best commentary from a larger set of tertiary news sites...


Hacker News has many moderators.


I am sure it does.


I like this idea of being able to hide comments and stories from submitters with accounts fewer than N days old, where N is a setting that each user can tweak.


This can omit important discussion, as I do remember a couple of items discussing either an article or a new site where the author specifically created an account so they could contribute to the discussion.

Account age has no bearing on the intelligence of the owner... there's a specific group of smart people (6 degrees of PG) that have been here a long time, and the rest have joined as it came up on their radar.


Do you think there is some size threshold beyond which a site cannot remain thoughtful?

If you do, maybe "fight" partly means actively limiting the growth of the site.


Maybe we can also get a nice flag for posts tediously and semi-hysterically grinding the same political axe over and over, and another for supposed discussions of science that, you know, don't actually contain any science.

Whatever. Flagged or not, one doubts the EchoChamber of Commerce News crew on what used to be "Hacker News" will do much more than upvote this kind of behavior.

After all, intellectual honesty is hardly as important as the bottom line.


I hadn't noticed the site getting nastier (and since everyone seems to agree that it has, I'm now regretting a recent nasty comment!)

Regarding flags on comments (not stories), I suspect it might have more of an impact to add just one new type rather than several at once. That would send a clear message about what we're all being asked to focus on. Along those lines, I'd say "uncivil" is more important than "frivolous". In my observation, over the last few months, HN users have gotten pretty good at using downvotes to regulate the merely frivolous.

If someone's comment has been flagged as uncivil it might be helpful to inform them of this with a call to action such as "Please be more polite" and a link to a fuller explanation of what they are being asked to do. I'm quite sure that many (though not all) people just don't realize how their comment might come across.

Perhaps comments flagged as uncivil could remain editable for a longer period. It's true that this would distort the history of conversations a bit, but maybe in a good way. If I find out that I made a comment that people found rude, I'd like the option (after my two editable hours have expired) of editing my comment for civility and having it marked as such. After all, if one is found "guilty" by a jury of one's peers, it's good to have a way to make amends in addition to the negative feedback. Besides, if people started noticing comments that had been "edited for civility", social proof might nudge their own behavior in that direction.


> Yeah, I've noticed a decline in the tone of comments too. The site seems to be getting nastier. I occasionally find myself thinking lately that I just don't want to be here anymore.

Could the community help monitor the general tone of the site? It seems like keeping things civil is a hard task for you and the top users.

Could there be an automatic survey sent out each week or month to a random sample set of users to measure their overall satisfaction with the site and tone? It could even be just one question ("How would you rate the discourse on HN lately? Highly Civil, Civil, or Uncivil").

That way there would be a good objective gauge to see trends in tone and to monitor the impact of any changes like flags.

As the site continues to grow it gets more challenging to preserve the level of discussion. Having some long term measure of that might be helpful and make it easier to know when a new feature needs to be implemented.


For the love of everything that is good, please either block techcrunch (very unlikely to happen) or put a hard cap on how many stories can be submitted within 24 hours from one domain (I suggest 2).

Sometimes reading HN feels like reading the TC RSS feed. If I wanted to read every single story from TC, I would subscribe to their RSS feed.


TC is extremely relevant to start-ups. Blocking it would be pretty ... silly.


I am not saying all of TC is irrelevant. We have our share of irrelevant stuff from other sites too. But on average we get 3-4 stories from TC posted here on HN, stuff like Twitter going down (every freaking time; how often does that happen?), Facebook going down for 5 minutes, Rackspace going down (which in itself is relevant news, but from TC the story is that the internet went down because TC is hosted on Rackspace).

If this much crap were posted from any other website, it would have been blocked without any debate whatsoever. There is a reason people subscribe to RSS feeds; why would I come to HN to see one blog hogging the whole site?

Which is why I suggest either a hard cap on number of subs per domain per day or give users the option to block out certain domain so that they don't see it in their HN.

I don't think it is an unreasonable or very difficult request to implement.


Sadly, while still relevant, Techcrunch is slowly becoming less intellectually satisfying. There's been more drama than needed as of late in my opinion.


This idea didn't work for Slashdot at all.


Sarcasm? :-) Slashdot recently passed 100k stories and its moderation system has remained largely unchanged over that time (with meta-moderation being the only key addition, as I recall). The moderation system has a lot to its credit: http://meta.slashdot.org/story/09/12/11/1615202/Slashdot-Tur...


How so? I still admire Slashdot's moderation system. (unless there's a woosh sound over my head right now).

The thing that killed Slashdot for me was the redesign - I'm not usually a design snob, but it's way too visually heavy for me. I think the moderation system is great, though - particularly the ability to set weights for different types of comments (e.g. -1 all Funny posts if memes and one-liners bother you).


Yes, in the short term, personalizable comment weighting seems like it would improve the signal / noise ratio, but in the long term, it seems less productive as it does little to foster the community that it seems pg wants. Does being able to ignore huge swaths of comments dissuade trolls? How would slashdot have turned out if Funny defaulted to be a -3 score?


What went wrong for them?


I think what went wrong was that they publicly displayed the score for whichever axis was most significant. Here, the only score that is displayed to all is the only positive one we have: "this comment adds to the discussion."

The reason I think this is what went wrong for them was that they had flags like "funny", for things that obviously don't add to the discussion, and too many choices for the other positive votes. IMHO, having one meaning for upvote and a handful of negative flags would be ideal if the flags were only visible to the submitter (to educate noobs/reprimand those who should know better) and whatever process is responsible for threshing the chaff.


This is a good point. "Uncivil" is more valuable than "Offtopic" and "Troll".


The Uncivil tag should prove a fun exercise in reading tone and sarcasm from text. Someone needs to invent Tone Markup Language--and not just emoticons.


The "Troll" tag on Slashdot comes with some sense of achievement. "Uncivil" seems much more refined, and less likely to reward poor behaviour.


How about adding a free form tagging field to the flag process? Don't show all tags on a post, just a given subset (most common and not blacklisted).


Not much. I'm personally a huge fan of Slashdot's moderation system - one where there is both a minimum and maximum "karma" on a post, and the context of this rating is also recorded (e.g. +5 Insightful).

There's certainly room to improve on this model - I for one would like to see a "-1 Uncivil". Either way, where Reddit and Digg have repeatedly struggled with designing a sustainable commenting system, Slashdot's has stayed relatively constant for years, and the moderation quality has also been similarly consistent. This is not a trivial accomplishment.

IMHO the problem with "raw" upvotes and downvotes is that it is very difficult to circumvent the mob mentality, and that this system absolutely fails to scale with the population of the community. IMHO the Slashdot system works far better for large communities than what the competition seems to have going.


Slashdot used to be my goto location for tech news. If I go far enough back, I used to read and enjoy the comments.

The beginning of the decline was the growth, or maybe how they dealt with the growth. Meta-moderation was the nail in the coffin. The tags were never fine-grained enough, at least for me. I quickly found out that my taste and what I appreciated in a comment were not the same as the masses'. Then I found reddit and HN :), where I still feel at home and often enjoy the commentary as much as the stories.

To summarize, looking back, I think what killed it for me was the decrease in quality of stories and comments even as the overall number of stories increased.


These attempts at explaining what went wrong are likely all wrong themselves if the primary negative effect on on-line discussions is the size of the audience. The only thing we can really conclude is that it wasn't enough to turn the inevitable tide.


I somehow get the sense that the quality of the discussion correlates with how long an article lives on the front page. A year or two ago, articles could hang on the front page for a day or two. Now, the volume of new articles pushes interesting discussions like this one off the front page within hours. I pretty rarely visit page 2.

I used to spend 30-35 minutes writing and editing a comment. It doesn't seem worth it any more when the article is going to drop off the front page in 8-12 hours. When I thought people might read it for a day or two, it felt like it was worth the investment.

It seems like the current speed of article churn has increased the pace of HN, and has inadvertently encouraged shorter, shoot from the hip type comments rather than encourage slower more thoughtful discussion.


One bug I did notice (slightly off topic but this seems an appropriate place) is that people can work around the cool off delay for replying by clicking the "link" button on a comment. This has given me a textbox and let me reply to the comment way before the cool off.


I would favor the addition of a rude/inappropriate/trollish vote that would be anonymous in addition to the current upvote/downvote. Perhaps those are three buttons that are downvotes in that category.

This would add a useful dimension to the feedback.

I find myself using the up/down arrow largely for "contributes to the discussion". There is an element of "I agree", but I frequently click up on a comment that I disagree with because it is well-reasoned or brings up a thoughtful angle that is new.

To avoid the mob clicking of these new buttons, they could be shown only to the author of the comment, or very high karma individuals.


I'm likely to be of the lowest caste here but I think a single flag works for me. I tried to think of others and the only two logical ones I could think of were "Spam" and "Duplicate" for posts. I suppose you could flag comments, but again I see two options: "Spam" or "Offensive".

In any case - that is to say regardless of the reason it was flagged - I think it should be killed. Therefore one "flag" link and that's it.


How about allowing the primary post to have extra settings that affect what can/can't happen. Example: Min karma to post: 1000, Min Civility to post: -100 points.

This way if the community sees a mob problem, the community can tweak the settings accordingly. I'm sure you can come up with a good algorithm to maintain a moving set of defaults.

Also, maybe allow readers to filter posts based upon their threshold settings.
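A rough sketch of how such per-thread settings and reader-side thresholds might fit together. All field names and numbers here are invented for illustration:

```python
# Sketch of per-submission posting requirements plus per-reader filtering.
# All names and numbers are invented for illustration, not a real design.

from dataclasses import dataclass

@dataclass
class ThreadSettings:
    min_karma: int = 0        # e.g. 1000 to restrict who may post
    min_civility: int = -100  # e.g. block users below -100 civility points

def may_post(settings, karma, civility):
    """Can a user with this karma/civility post in this thread?"""
    return karma >= settings.min_karma and civility >= settings.min_civility

def visible_posts(posts, reader_threshold):
    """posts: list of (score, text); a reader sees only posts at or
    above their own score threshold."""
    return [text for score, text in posts if score >= reader_threshold]

settings = ThreadSettings(min_karma=1000, min_civility=-100)
print(may_post(settings, karma=1500, civility=-50))  # True
print(may_post(settings, karma=500, civility=0))     # False: karma too low
```

The "moving set of defaults" could then just be a periodic recomputation of `ThreadSettings` from site-wide averages.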


I'm downmodding now not because I think this is a bad contribution but rather because, since this isn't a poll per se, I think that upmodding/downmodding is sort of how we are voting. So don't take it the wrong way :)


No worries. I'm torn on it myself. I don't like the idea of excluding people; however, I was trying to think of a way to make people care about the quality of their contributions, i.e. give the community an incentive to be nice :) It's certainly not that easy.


Maybe this is the place to ask why, relatively frequently, there are comments which seem perfectly reasonable and inoffensive, that are dead.


If it looks like noise, it deserves to go. In other words, if, after reading a comment, I feel that my time was wasted, I should downvote the comment.


Typically, most accounts get banned for one or two really unpleasant comments. Once an account is dead, every comment that person makes in the future will also be insta-deaded, whether it is civil or not.


Should downvoting someone also negatively affect the downvoter's karma? It would help restrict the number of downvotes anyone gives.


Aren't many downvotes efforts to improve the community, and thus worthy of a karma INCREASE, if anything?


I would like to be able to flag a story as not a primary source / not the best source and give an alternative. Not sure if this will help the discussions, but I think a discussion tends to be only as good as the story. Often the most helpful comments are ones that give alternative sources, so it might be good to think about supporting that.


If public meta-commenting is the issue, and assuming the comment isn't an outright troll, it'd be nice to know who took issue with it, and be able to ask why privately. Hence, civil discourse as though we're face-to-face. Anonymous one-word stamps are no more sensitive than unexplained downvotes, and will give rise to the exact same questions.


Have you considered downvotes for submissions? Maybe with a Karma threshold, so that it'll only be used to get irrelevant submissions off the front page.


Flagging seems better for that, because if they're really offtopic they should be killed, not just have fewer points.


I guess the trade-off there is between a purely binary action (kill / don't kill) which depends on a moderator's opinion, versus a softer, more gradual action of slowly pushing a story down and letting more worthy stories rise to the top...


One possible problem with this is that you could downvote every story except your own, of course.


That's easily caught by a script... Also, if downvoting is limited to high-karma people, the chance that they would actually do this is fairly low.
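To make the "easily caught by a script" point concrete, a toy version might flag anyone who downvoted most of a front page while holding a story of their own on it. The thresholds and data shapes here are made up:

```python
# Toy detector for self-serving downvote patterns (invented thresholds
# and data shapes): flag users who downvoted most of the front page
# while one of their own stories was on it.

def suspicious_downvoters(front_page, downvotes_by_user, own_stories,
                          ratio=0.8):
    """front_page: set of story ids; downvotes_by_user: {user: set of ids
    they downvoted}; own_stories: {user: set of ids they submitted}."""
    flagged = []
    for user, voted in downvotes_by_user.items():
        mine = own_stories.get(user, set()) & front_page
        if not mine:
            continue  # no story of their own competing; skip
        others = front_page - mine
        if others and len(voted & others) / len(others) >= ratio:
            flagged.append(user)
    return flagged

front_page = {1, 2, 3, 4, 5}
votes = {"x": {2, 3, 4, 5}, "y": {2}}
owned = {"x": {1}, "y": {3}}
print(suspicious_downvoters(front_page, votes, owned))  # ['x']
```

A real check would presumably look over a longer window than one front page, but the basic shape is the same.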


Gamedev.net has one of the most civil and informative forums on the internet. At some point they had a pretty mean spirited community too, and the owners pulled the site into one of the best sites on the internet. I spent a lot of time on these forums, and I think there are three main things that contribute to very high quality.

- Non-threaded comments. I know this has been discussed to death, but I think the downside of threaded comments (fragmentation of the discussion and incentive for witty one-liners) far outweighs the benefit (having separate unrelated conversations). A non-threaded approach gives huge incentive to bring the community together into a single coherent conversation, gives the reader a great sense of a timeline (which gives further incentive to maintain an intelligent discussion), and discourages fragmentation of the discussion to a point where it isn't interesting to anyone anymore. I know this is a big change, but I would strongly encourage at least giving it a serious consideration - it can make a huge difference if done right.

- Active topics - on sites like Hacker News and Reddit it's very easy to lose track of interesting discussions that are going on because top threads are tightly coupled with the top articles. We can have interesting discussions long after the article has left the front page, and an active topics page would go far towards encouraging intelligent interaction. Basically, every time an article is commented on, this article goes to the top of the list. This way I can see all active discussions at a glance (including the ones I participated in), and continue interesting discussions, even though they're not on the front page.

- User rating system (not article/comment system) - in real life, if I am usually a very interesting, intelligent person, and one day I suddenly act as a jerk, it can very seriously ruin my reputation. On gamedev people rate users, not comments and articles. So, I can give a user a rating - extremely helpful, helpful, not helpful, jerk. The karma is a combination of ratings for a given user, and the strength of a user's vote is proportional to his karma. So, if someone is a jerk one day, it can affect his karma far more significantly than simply downvoting his comment. In addition, jerks have less effect on the system overall. Rating people's comments never made sense to me - it always ends up signaling agreement or disagreement vs. usefulness. If people rate users instead of comments, this problem goes away almost entirely, plus very strongly discourages aggressive behavior.

I think these three things would help immensely to scale the community. These are big changes, but I think they're worth seriously thinking about.
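The user-rating scheme in the last point could be sketched roughly like this. The weights and the karma floor are my own guesses, not gamedev.net's actual formula:

```python
# Rough sketch of a rate-the-user system (weights and floor are invented):
# people rate users, not comments, and a rater's influence scales with
# their own karma, so one bad day from a respected member costs a lot.

RATING_WEIGHTS = {
    "extremely_helpful": 2.0,
    "helpful": 1.0,
    "not_helpful": -1.0,
    "jerk": -3.0,
}

class User:
    def __init__(self, name, karma=1.0):
        self.name = name
        self.karma = karma

def rate(rater, target, rating):
    """Apply a rating; the rater's karma scales its impact on the target."""
    delta = RATING_WEIGHTS[rating] * rater.karma
    target.karma = max(0.0, target.karma + delta)  # floor at zero

alice = User("alice", karma=5.0)
bob = User("bob", karma=2.0)
rate(alice, bob, "helpful")  # bob gains 1.0 * 5.0
print(bob.karma)             # 7.0
rate(alice, bob, "jerk")     # one bad day costs -3.0 * 5.0
print(bob.karma)             # 0.0 (floored at zero)
```

Note how the jerk rating from a high-karma rater wipes out far more than a single comment downvote would, which is exactly the reputational dynamic described above.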


How about giving more people moderation access for a short time period, and then aggressively flagging/deleting a whole bunch of articles/comments.



