
Pfizer’s phase 1 was 100% white, according to a commenter above.


Nonsense. Targeted censorship has exactly the effect of taking the steam out of nonsensical conspiracy theories, because you stop the spread of incorrect ideas.


The root problem is how to decide which ideas are incorrect. Usually this is decided by public debate, where multiple people express their opinions and the issue is settled after everyone has been exposed to the different perspectives. The problem with censorship on social media platforms is that the decision is made very quickly by a small group of people, without understanding the full scope of perspectives on the matter.

This is why:

> you stop the spread of incorrect ideas

is a dangerous method - you do stop the spread, but who says you do it right?


I'd argue that it's rarely been decided by a public debate. It's traditionally been decided by (for example) the small number of companies who are in the business of publishing textbooks. Or the small number of people who teach law at prestigious universities. Or the small number of people who decide which textbooks will be bought in Texas, particularly since Texas is a large enough market so that this decision affects other states as well. Or religious figures, laying down a decision about what doctrine can be taught in churches.

Obviously in the past these small groups of people have made the wrong decision, and sometimes that's had very bad consequences. I am not arguing that this is a good methodology. I am arguing that we can't assume that public debate will generate a better set of decisions.


At some point, even though all of this is subjective, it’s common sense. We can go back and forth on this endlessly, but it’s wholly apparent that false conspiracies that threaten the stability of the best society the world has seen should not be encouraged. So most of us, and the people in power, make that decision, especially given that the company is private and can do what it wants.


But biased police judgement doesn’t...


Yes. The police have had a long history of chalking up deaths in impoverished black neighborhoods to “gang violence” and closing investigations. Topically, Chappelle had a famous stand-up bit about it.


Fights were more common and bullying was rampant, but people seemed lighter and happier, like nothing was pent up. Probably just rose-colored glasses, though.


It’s not enough to exist. You actually have to perform at least comparably, which DDG doesn’t, unfortunately.


I use DDG exclusively, and also run Firefox on my Android phone and my desktop. No Chrome, no Google search anywhere for me.


I’m sorry you were downvoted.

I’ve tried to replace Google with both DDG and Bing. Both of their results are frankly terrible compared to Google’s. Google’s transition to its current hybrid Ask-Jeeves-style question/traditional query search engine, along with its inline answers above the result list, blows its competitors out of the water. Google also does a much better job of letting the user know the freshness of results with its inline last-modified/updated timestamps.

Both of those things have made it impossible for me to switch off of Google without immediately having problems quickly finding relevant, up-to-date information on the web. I do desperately want to switch, though.

I also especially despise AMP. It constantly breaks websites and itself, especially on iOS.


Yeah, I've tried DDG consistently for nearly a year. A good 30% of the time I end up having to run my query on Google anyway.

I will literally pay money for a search engine that's privacy focused and can get closer to Google in terms of performance.


Personally I need to switch from DDG to Google for maybe 0.1% of my searches, but even in your case you've managed to avoid Google 70% of the time, and it's not like adding a !g to do so takes more than a millisecond.


For me, it's 70% of the time that I have to use Google.

Even for not-that-obscure bands, all DDG needs to do is return the Wikipedia page, and it fails to do that.


Right, but running 30% of your searches a second time, after looking through unhelpful results on the first attempt, takes more than a few ms.


I agree, and I'm still very thankful for DDG. Would gladly pay to have someone take me even further.


> A good 30% of the time I end up having to run my query on Google anyway.

A 70% reduction in Google sounds like a win.


Absolutely it is.


I reckon I'd pay for that too; it actually sounds really, really awesome (assuming the performance is there).


How long ago did you try DDG? Years ago I had a similar experience to yours and switched back, but now DDG is much better for me, and I only try !g on < 5% of my searches, often with no luck there either.


DDG is fine for many types of queries. It also doesn't have many of (what I consider) the drawbacks of google's search: hostile ad positioning, AMP, autoscraped meta crap. Pretending you can't compare them is ridiculous.


90% of my search queries, particularly the ones I make from my phone, are really dumb.


I was an Adobe Flex developer years ago. There still isn’t a comparable replacement, right? Last I checked, when I was still developing, you had to do insane things with CSS for even the most basic animations. Is there something better now?


I hate this. I feel like segments of the population that dislike porn will use this as a wedge to demonize the entire medium over time. There is nothing wrong with taking down illegal content as it’s reported, especially if the parent company is highly receptive to doing so, which Pornhub almost certainly is.


The idea that they were highly receptive to extensive moderation is questionable at best.

But one major problem is they allowed anyone to upload AND download. So userA would upload child abuse imagery. userB would download it. PornHub would delete video uploaded by userA. userB would reupload with different video name. userC would download the reupload.

It's playing whack-a-mole with illegal child abuse imagery. Interestingly, PornHub only really started caring when payment processors started looking into things. But better late than never.


>PornHub would delete video uploaded by userA. userB would reupload with different video name

Allowing (re-)upload of prohibited or previously removed content is a fatal flaw, and I find it hard to believe they've been allowing it for so long without either being staggeringly incompetent as an organisation, or wilfully turning a blind eye in the name of profits.

There are various lists of hashes for Child Sexual Abuse Materials which I'm sure they'd have access to, and they could license something like Microsoft PhotoDNA [1] (or implement something similar themselves, it's not like they're lacking the tech talent) which is able to detect image alterations that break simple checksum comparisons and operate on video content.

They don't need to play whack-a-mole, for the most part this should be a solved problem. Obviously if it's new content that hasn't been fingerprinted before you still need manual reporting and moderation, but they could and should be scanning against known CSAM on upload and quarantining it / shadow banning the user until it can be evaluated by a human and passed off to law enforcement.

It's hard to do this at scale given how much content they ingest every day, but they need to bite the bullet and invest some cash and engineering time - what they are doing now simply isn't good enough. Hopefully having the spotlight put on them will force them to do the right thing.

[1] https://en.wikipedia.org/wiki/PhotoDNA
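
For the curious, here's a minimal sketch of what hash-based screening at upload time could look like. To be clear, this is not PhotoDNA's actual API (that's proprietary and licensed); it substitutes the open-source imagehash library as an illustration, and the hash list, threshold, and quarantine helper are hypothetical placeholders:

    # Sketch of perceptual-hash screening at upload time. Assumptions:
    # imagehash's phash stands in for a PhotoDNA-style robust hash, and
    # known_csam_hashes would be populated from an industry hash list
    # (e.g. those maintained by NCMEC or the IWF).
    import imagehash
    from PIL import Image

    known_csam_hashes = set()   # pre-computed hashes of known material
    HAMMING_THRESHOLD = 8       # tolerance for re-encodes/crops (made-up tuning)

    def screen_upload(path):
        """Return True if the upload should be quarantined for human review."""
        h = imagehash.phash(Image.open(path))
        # Perceptual hashes survive resizing, re-compression, and small
        # edits, so a Hamming-distance comparison catches re-uploads that
        # defeat the simple checksum matching mentioned above. For video,
        # you'd sample frames and screen each one the same way.
        return any(h - known <= HAMMING_THRESHOLD for known in known_csam_hashes)

    if screen_upload("incoming_upload.jpg"):
        quarantine_and_flag_for_review("incoming_upload.jpg")  # hypothetical helper

The point of the perceptual (rather than cryptographic) hash is exactly what the comment above describes: a renamed or slightly re-encoded re-upload still lands within the Hamming threshold of the known hash, so the whack-a-mole cycle gets cut off at upload rather than after a report.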


> It's playing whack-a-mole with illegal child abuse imagery.

As opposed to the alternatives? Have a moderator sitting in a chair behind every single person using a computer? Outlaw storing/transmitting any user generated data, and turn the internet into Cable TV 2.0?

Playing whack-a-mole with law breakers is a fact of life in a non-totalitarian society.


As a practical matter, how is this different from copyright enforcement?

Simply disabling downloads is not enough; they would need either an effective DRM system or to kill off third-party downloaders such as youtube-dl. I'm sure the RIAA would love to be able to go after youtube-dl on child porn charges instead of just copyright charges.

They could attempt to implement a content-ID system to detect re-uploads. I suspect this could work better for child porn than copyright, because you will not have adversarial parties trying to claim that you uploaded their child porn.

The difficulty in such an approach is that the mere possession of child porn is a crime, which makes developing and maintaining such a technology far more difficult and expensive (even with the law enforcement support that would be necessary to make such development legal).

The more insidious effect is that putting this requirement on Pornhub also puts requirements on every platform supporting user-generated material. In order to avoid getting caught up in the child-porn minefield it creates, platforms would need to censor all porn and porn-adjacent content.


> As a practical matter, how is this different from copyright enforcement?

Child porn has victims and harms society.


I think the OP meant, how is it different from an enforcement/mitigation standpoint.


I think GP meant that the offenses are different because of the real harm done.


Not more than some totalitarian surveillance law.


Yeah, whack-a-mole is the way to go. The alternative is total surveillance, which is completely unacceptable no matter how bad the crime.


I'd recommend the NYT article that led to this recent pushback: https://www.nytimes.com/2020/12/04/opinion/sunday/pornhub-ra... It's not at all anti-porn, and addresses what you're referring to.

This is just my own two cents independent of the article, but I think for any platform that allows anyone to upload anything, a blacklist system is inevitably doomed to fail once the platform is beyond a certain size. The only feasible option in that case is to whitelist, which is what Pornhub seems to now be doing going forward.


That op-ed is anti-porn though. The only source is Exodus Cry, an evangelical group trying to shut down the entire sex industry.

The author briefly mentions the Internet Watch Foundation's objective stats and then dismisses them when the IWF says it doesn't know why the number is so low compared to Facebook and Twitter. Maybe because it just is?

Why are Mastercard and Visa not "investigating" Facebook and other sites for allowing child abuse uploads in the millions?


Let's try to stay at the higher levels of Graham's hierarchy pyramid.

What's the specific stat or claim you disagree with? If you read the article, you can see both the author and several interview subjects explicitly clarify several times that they're perfectly fine with porn, as long as it's of age and consensual.

What's a source on Facebook having millions of child pornography videos? If they did, I'm sure NYT would report on them and Mastercard and Visa would investigate them, too.


The author of the op-ed mentions the Internet Watch Foundation's stat of 118 CSAM cases over 3 years on PornHub, then says he asked them why it's so low, dismisses their objective stats, and says they couldn't explain. Maybe it's because it IS low?

The Times did an actual investigation into CSAM on Facebook and other big tech, and it's a massive problem. https://www.nytimes.com/interactive/2019/09/28/us/child-sex-...

No, Visa and Mastercard weren't forced by a moral panic to investigate.


>Facebook announced in March plans to encrypt Messenger, which last year was responsible for nearly 12 million of the 18.4 million worldwide reports of child sexual abuse material

This suggests the majority of it was exchanged privately, which means most of it may have been automatically detected by systems matching content hashes. On Pornhub, it's all available to the public rather than in private messages, which makes the problem a lot more evident and visible, and arguably more damaging (e.g. if you were raped as a child, you may find it more damaging if one million people see the video than if ten people do).

If Facebook hadn't taken appropriate measures, Visa and Mastercard certainly should have investigated them if they were in a position to.

Another issue here is Pornhub is directly profiting off of the CSAM and non-consensual porn by plastering ads all over the page displaying the content and encouraging premium account registrations. Facebook isn't directly monetizing that content.

Of course I know Pornhub isn't doing this deliberately, but 1) they're not financially incentivized to take things down too aggressively (due to loss of ad revenue and premium registrations), 2) even if they were incentivized to do it, the problem is too big and too evasive to tackle with just blacklisting, manual reporting, and a small team of moderators. The only workable solution at this point is a whitelist, which sensibly seems to be the approach they're now taking.


From Kristof's article:

>Its site is infested with rape videos.

Not illegal in the US. In fact, there is plenty of consensual porn made to look like women are being assaulted or coaxed into sex.

> came across many videos on Pornhub that were recordings of assaults on unconscious women and girls. The rapists would open the eyelids of the victims and touch their eyeballs to show that they were nonresponsive.

If I come upon an adult being assaulted, including sexually, and record the incident, this is perfectly legal in the US. That said, there is consensual porn that falls into this category, made by professional outfits.

>It monetizes child rapes, revenge pornography, spy cam videos of women showering, racist and misogynist content, and footage of women being asphyxiated in plastic bags.

It monetizes all content, even illegal content which hasn't been reviewed or reported. This is how it works for all sites that allow user generated content, including YouTube.

Racist and misogynist content isn't illegal in the US.

There are plenty of BDSM videos including footage of women being asphyxiated in plastic bags. You can find men and women with hot wax being poured on their genitalia or needles piercing their genitalia. You can find golden showers and scat too. None of that content is illegal in the US.

>Unlike YouTube, Pornhub allows these videos to be downloaded directly from its website.

Really? We all know this isn't true because of youtube-dl. Restricting downloads will not work for the determined. If the browser can see it, it can be downloaded.

>A search for “girls under18” (no space) or “14yo” leads in each case to more than 100,000 videos. Most aren’t of children being assaulted, but too many are.

By the author's own admission, most of the results were not what the search terms implied. For those that have illegal content, report it. There is only so much a site can do to weed out illegal user-generated content. PornHub doesn’t encourage illegal content or turn a blind eye to it. The company uses many of the automated tools that Microsoft and Google developed for fingerprinting pictures and videos of child pornography.

>Depictions of child abuse also appear on mainstream sites like Twitter, Reddit and Facebook.

You can find videos of adults beating children with all manner of household items. You can find videos of vehicles running over people with their twisted, mangled bodies laying in the street. You can find killings of rival gangs in Mexico and Brazil, or state executions in Iran and Iraq. None of this stuff is illegal in the US.

It would be nice if the author focused on the illegal content, rather than the stuff he finds distasteful.


>Not illegal in the US.

Federally, no. It appears to be illegal in 46 states, though. And I personally do think (actual) rape should be illegal to film, distribute, or profit off of. (I'm a major detractor of any kind of censorship, but I believe the three exceptions should be [actual] child pornography, [actual] rape pornography, and the sort of porn where humans or non-human animals are [actually] tortured/killed.)

>In fact, there is plenty of consensual porn made to look like women are being assaulted or coaxed into sex.

Of course, but that's completely different, and is indirectly addressed in the article:

>To be clear, most aren’t of 13-year-olds, but the fact that they’re promoted with that language seems to reflect an effort to attract pedophiles.

>The issue is not pornography but rape. Let’s agree that promoting assaults on children or on anyone without consent is unconscionable. The problem with Bill Cosby or Harvey Weinstein or Jeffrey Epstein was not the sex but the lack of consent — and so it is with Pornhub.

This is also part of the core problem. If some content is reported and it's not clear to a moderator whether it's consensual or non-consensual, it may not get removed even if it turns out it actually was non-consensual. Same with porn of minors: if it's not unambiguously clear that someone in a video is under 18, Pornhub often wouldn't remove the content.

---

>It monetizes all content, even illegal content which hasn't been reviewed or reported. This is how it works for all sites that allow user generated content, including YouTube.

I agree it wasn't appropriate of the reporter to sandwich "racist and misogynist content" between those other far more horrible things. I don't necessarily endorse 100% of the article; just the core point.

YouTube manages to largely avoid this problem by doing something closer to a whitelist approach: if content doesn't clearly fall into the "non-pornographic" category, it's semi-automatically blocked and removed.

>There are plenty of BDSM videos including footage of women being asphyxiated in plastic bags. You can find men and women with hot wax being poured on their genitalia or needles piercing their genitalia. You can find golden showers and scat too. None of that content is illegal in the US.

It is indeed very important to distinguish between things that some may find extreme or disgusting and things that are unethical and should be or are illegal. The key issue is consent vs. non-consent. If a woman has a BDSM fetish and is filmed being asphyxiated, that's completely different from a group of people attacking an unsuspecting woman and asphyxiating her.

The article should've gone to greater lengths to separate this when writing sentences like that, but it also makes itself very clear in other parts that the issue is the non-consensual acts, and specifically the fact that Pornhub is directly profiting off of them and not effective at detecting or removing the vast majority of it.

>There is only so much a site can do to weed out illegal user generated content.

Exactly. As you say, any large platform that allows arbitrary uploads won't be able to be effective at this - hence why a whitelist approach is required here instead of a blacklist, which is sensibly now what they're going to be doing.

>You can find videos of adults beating children with all manner of household items. You can find videos of vehicles running over people with their twisted, mangled bodies laying in the street. You can find killings of rival gangs in Mexico and Brazil, or state executions in Iran and Iraq. None of this stuff is illegal in the US.

Of course, but how is any of that relevant to the sentence you're quoting? Recordings of gore and violence are legal. Child pornography isn't legal. It's a completely different situation if Twitter, Reddit, and Facebook are swarming with gore vs. swarming with child pornography.

>It would be nice if the author focused on the illegal content, rather than the stuff he finds distasteful.

I agree, and it is important to be very precise and explicit when advocating for censorship, so I understand the need for semantics and picking apart the reporter's fuzzy condemnations.

But I don't understand being seemingly dismissive of the core issue discussed in the article - massive amounts of real non-consensual porn and child pornography being directly profited off of while also being ineffectively combatted (and being largely infeasible to properly combat), growing every day, accessible to the world.

It's quite plausible they made and are making tens of millions of dollars directly from that content. This doesn't necessarily mean they encouraged it or were turning a blind eye to it, but I think they were kind of glancing away from it until a big enough fire was lit under their ass by NYT and payment processors. They had a large, financial, perverse incentive (pun intended) to not dedicate a massive amount of resources to the problem until now. And now it appears they are dedicating those resources by implementing this whitelist approach, which is painstaking but is the only way to do it safely.

You're absolutely right to critique the article in the areas where it lacks rigor or slides a bit further down a slippery slope, but there's still a big elephant in the room here regarding the actual situation, independent of the article's faults.

Let's separate the core problem away from the greater cloud of general moral panic. There's a kind of "fallacy fallacy" here (https://en.wikipedia.org/wiki/Argument_from_fallacy); the existence of a moral panic doesn't necessarily mean there isn't a real, harmful problem that originally sparked it. They can contain kernels of reality that need to be addressed - e.g. the "vapes killing people" moral panic containing the actual truth that many bootleg THC vapes contain harmful filler compounds.


I don’t know. I’m feeling the need for large gatekeepers especially in the media space. I have lost faith in the public’s ability to make reasoned decisions for themselves.


I feel like this approach addresses a symptom but not the root problem. How can people regain a shared truth and stop becoming so polarized while still prioritizing freedom of expression?


I feel like a gatekeeper-provided experience will have a moderating influence that will bring people back to a shared narrative over time. Possibly a very long time.


>disgusting

I’ll be the judge of that. Recipe please :)

