
I don't see what else you want. In 2014 it was revealed that Facebook did run experiments to see how much of an impact they could have on people's sentiments. Just check the study; it's public: https://www.pnas.org/content/111/24/8788.full.

The researchers found that Facebook's News Feed does influence sentiment in a contagious manner, which implies it results in behavior change on the platform; otherwise they would have had no way to measure the effect in the first place...

And don't forget to check the Acknowledgments section:

> We thank the Facebook News Feed team, especially Daniel Schafer, for encouragement and support; the Facebook Core Data Science team, especially Cameron Marlow, Moira Burke, and Eytan Bakshy; plus Michael Macy and Mathew Aldridge for their feedback.

It's not like they hide it; that's exactly why advertisers and political parties partner with Facebook in the first place.



It's not that I "want" anything else. It's that Facebook's product is ads, even if they also ran experiments to change sentiment. If I want to make a million people sad, I can't buy that from Facebook (directly). Conversely, if I want to show a million people an ad for my widget, I can absolutely directly buy that from Facebook.


The PNAS paper showed very small effect sizes, to be clear. But, yes.


Yes, that's true. At the same time, the argument of "The Social Dilemma" is that at Facebook's scale you need only a very small effect to influence the crowd in a way that matters. Paraphrasing the documentary (from memory): tuning human behaviour 1% in the direction you want, worldwide, is what Facebook sells and what its customers pay for.
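
For a sense of scale, a quick back-of-the-envelope sketch in Python (the ~3 billion monthly-active-user figure is my own ballpark assumption, not something from the documentary or the thread):

    # A "very small" per-person effect, applied at Facebook scale,
    # still reaches tens of millions of people.
    monthly_active_users = 3_000_000_000  # assumed ballpark, not a sourced figure
    effect_rate = 0.01                    # the documentary's "1%" framing

    people_influenced = int(monthly_active_users * effect_rate)
    print(f"{people_influenced:,} people")  # prints: 30,000,000 people

Even under these rough assumptions, a 1% shift is a population the size of a large country.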


Isn't that the goal of just about all human behavior: to influence other humans' behavior? Is the issue with Facebook that they are too effective at it, or is it that most marketing & advertising behavior is unethical?


I don't understand this hyper-relativism.

I personally don't see my goal, the goal of the projects I work on, the goal of the companies I work for, or anything else I contribute to, as being about manipulating people's behavior without their knowledge and/or consent.

Facebook has been doing completely unethical experiments since forever (are you aware of their role in the Rohingya genocide in Myanmar?). They have been open about a lot of them, bragging about how good they are at manipulating crowds.

And yes, they are crazy effective. And they have the scale. They engage in unethical behavior that is effective and applied at the scale of humanity.


My understanding of the Myanmar incident is that FB didn't swiftly block material used to inflame already existing ethnic tensions? From what I know, the fault there is that they allowed communication in a language that their AI and human reviewers couldn't understand.

Was there more to it than that?



