> Nevertheless, the composition of audiences can still tilt toward demographic groups such as men or younger workers, according to a study published today by researchers at Northeastern University and Upturn, a nonprofit group that focuses on digital inequities.
> One reason for the persistent bias is that Facebook’s modified algorithm appears to rely on proxy characteristics that correlate with age or gender, said Alan Mislove, a Northeastern University professor of computer science and one of the study’s co-authors.
Hypothetically, let's say that the trucking company in the article used "people interested in cars" as a targeting criterion. It would come as no surprise to me if this group was > 80% male.
It may even be another mechanism: Facebook's algorithm may look at the profiles of individuals who clicked through on ads in order to determine who to show the ads to in the future. This is a good way to provide cost-effective advertising. It may also be done in a way such that a small fraction of these ads are still shown to $membersOfProtectedClassX, even in cases where said class is statistically unlikely to click on the ad. What small fraction is necessary to be legally unproblematic?
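A minimal sketch of how that kind of click-feedback loop could skew delivery, with invented click-through rates (this is not Facebook's actual algorithm, just the mechanism guessed at above):

```python
import random

random.seed(0)

# Invented click-through rates for illustration; the article does not
# publish numbers like these.
TRUE_CTR = {"men": 0.030, "women": 0.010}

# Start with even delivery, then re-weight toward whichever segment
# clicked more in the previous round.
weights = {"men": 0.5, "women": 0.5}

for round_num in range(5):
    clicks = {}
    for group, share in weights.items():
        impressions = int(10_000 * share)
        clicks[group] = sum(random.random() < TRUE_CTR[group]
                            for _ in range(impressions))
    total_clicks = sum(clicks.values()) or 1
    # Delivery optimization: next round's share follows observed clicks.
    weights = {g: clicks[g] / total_clicks for g in clicks}
    print(f"round {round_num}:",
          {g: round(w, 2) for g, w in weights.items()})
```

Within a few rounds nearly all delivery goes to the higher-clicking group, even though no demographic field was ever consulted.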
If Joe Schmoe creates a Facebook ad to hire for a bricklaying job (a job which is 98% male), what percentage of those ads must be served to women to be legally compliant?
That's a straw man. The situation of concern is failing to advertise high value or high status products and opportunities to people who would be disadvantaged. So to stay with your analogy, if you have a job to hire a React developer (a job which is right around 90% male), but you don't show it to someone because their browsing history includes Pinterest but not Reddit, then you're discriminating. And that's a problem worth worrying about and trying to address. Likewise ads for vacation timeshares that go to Taylor Swift fans but not to Diddy aficionados.
Yes, it's possible that there may be some collateral damage in the bricklaying recruiting industry. And, sure, maybe that's something that needs some regulatory relief. But mostly I think you're just looking for an example here.
This is beyond stupid. What they're saying is that even if you target people who write "I love programming" on their fb page, it's still discriminatory towards women because men are more likely to write those words.
The clear solution is for Facebook to find the people in the protected demographics and have them force joined into groups until they receive ads at a proper rate. If that isn't sufficient, Facebook can make posts on their behalf until they match with enough ads to bring everything back to parity.
Exactly. If I were going to put up an ad for a babysitter, probably looking to get a high schooler who has some free time and needs some spending money, I'm not hoping the car mechanic who got laid off and is looking to pay rent gets to see that ad.
Likewise if I were to want help in taxidermying a cat, I'm not wanting the butcher or the baker seeing the ad as well. I only want the taxidermist.
> What they're saying is that even if you target people who write "I love programming" on their fb page, it's still discriminatory
That is... not what they're saying. In fact the article doesn't claim to know the targeting mechanism at all (though it turns out that they found a way for some advertisements to circumvent the new restrictions on gender and race targeting). They're just claiming to have found ads that empirically are targeting demographics in ways that Facebook has already agreed not to.
The "I love programming" bit seems to be something you've invented.
"Dolese’s ad, for example, could have reached a predominantly male audience because it featured a man, or because an interest in trucking acts as a proxy for maleness, or both. (A Dolese spokeswoman said the ad targeted categories “that would appeal to someone in this line of work.”) The settlement did not resolve the potential bias from proxies and ad content, but said Facebook would study the issue."
The "I love programming" bit is an example of the kind of proxy they're talking about. Something correlated with gender but not gender itself. The FB algorithm uses thousands of data points that might correlate in different ways. Saying that any of those variables that correlates with gender should be forbidden is crazy.
> Saying that any of those variables that correlates with gender should be forbidden is crazy.
Once again, they are not saying that. You have applied a maximalist interpretation to the article that simply isn't present in the text.
The point of the article is that the end result is still discriminatory, something that Facebook had promised to fix. And they didn't fix it, and that's newsworthy.
Your point seems to be that solving the problem is really hard, so we shouldn't try to solve it, nor talk about whether or not it's being solved by parties who have promised to try to solve it?
Yeah, but that's a self-reinforcing feedback loop. Only showing programming ads to people who write "I love programming" is like only showing religious ads to people who write "I love God". The advertisements do not help outgroups break in and in fact may push them farther away.
To which the response is... So? Progressive social engineering is not the responsibility of advertisers or of Facebook. If I want to hire someone to do artwork for a Dungeons and Dragons campaign book I'm not going to place an ad in Guns & Ammo, despite how underrepresented that demographic is in the tabletop roleplaying industry.
This is silly. By definition, if you're advertising on Facebook, you're selecting for a certain demographic. It's a bit broader than the AARP but by picking any medium you are discriminating against the people who don't follow that medium.
It's also weird to say you shouldn't be able to target certain demographics. An ad that resonates well with seniors might resonate poorly with millennials. You might want to attract both! So you run multiple ads. Or you target youngsters on Instagram and oldsters in the NYT.
This is not something Facebook is in any position to police.
I was curious about the source of Facebook's liability. It turns out their liability stems from the Fair Housing Act (FHA), 42 U.S. Code § 3604(c) (https://www.law.cornell.edu/uscode/text/42/3604), which has been repeatedly interpreted over the years to apply to publishers directly.
Facebook has a duty to screen housing advertisements for discriminatory indications or intent. The question of whether this sort of disparate impact discrimination meets the criteria for 3604(c) is a different matter. (My uninformed guess is that it does not, but Facebook is being attacked on all sides so will probably be judicious regarding when and how hard it pushes back.)
I think the easiest solution would be to disallow ads of those categories on their platform. I'd think the risk of "facebook/instagram is racist" damaging their brand and the cost of federal discrimination lawsuits would outweigh whatever revenue they project.
As an aside, I know it's a faux pas to bring up any observed (and/or presumed) differences between the protected classes, but maybe (just maybe) Facebook's targeting is smart enough to correlate "most likely to care" about things that tend to have skewed demographics without looking at the demographic data itself. Like the example in the article of truck driver ads targeting men: what is Facebook using to determine who they target? And do those data points line up with demographics?
I don't know, but these kinds of systems are tough to introspect from the outside.
Your aside is pretty much dead-on; that's the big ethical issue with bias in ML right now.
For example, ML can do quite a good job of predicting recidivism rates in convicts, and justice systems have been using this to aid in sentencing and parole hearings. Obviously, these ML approaches are not supposed to consider ethnicity. So the factor that ends up having the greatest weight is "did your father / grandfather spend time in prison", which is an extremely effective proxy for "are you not white".
Basically, when your training data is based on a reality already heavily influenced by bias, your models will end up reflecting and perpetuating that bias.
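A toy illustration of that, with made-up numbers: drop the protected column entirely, keep one correlated proxy, and the "blinded" model still splits cleanly along group lines.

```python
import random

random.seed(1)

# Synthetic population; all rates here are invented for illustration.
# The proxy ("relative served prison time") is far more common in
# group A because of historically biased enforcement.
def make_person(group):
    proxy = random.random() < (0.6 if group == "A" else 0.1)
    reoffends = random.random() < (0.35 if proxy else 0.15)
    return {"group": group, "proxy": proxy, "reoffends": reoffends}

people = ([make_person("A") for _ in range(5000)]
          + [make_person("B") for _ in range(5000)])

# "Train" the blinded model: it never sees `group`, only the proxy.
def observed_rate(rows):
    return sum(r["reoffends"] for r in rows) / len(rows)

risk_by_proxy = {
    flag: observed_rate([r for r in people if r["proxy"] == flag])
    for flag in (True, False)
}

# Flag anyone whose proxy-conditional risk exceeds 25%.
def flagged(person):
    return risk_by_proxy[person["proxy"]] > 0.25

for g in ("A", "B"):
    rows = [r for r in people if r["group"] == g]
    share = sum(flagged(r) for r in rows) / len(rows)
    print(f"group {g}: flagged high-risk: {share:.0%}")
```

The model never touches group membership, yet the flagging rates come out roughly 60% vs. 10%, faithfully reproducing the bias baked into the proxy.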
The real problem is that there is an actual racial disparity in recidivism rates, so an algorithm that makes accurate predictions will predict the racial disparity that actually exists. There is no way to solve that without significantly impairing the accuracy of the predictions -- which is to say releasing convicts who we know have an unreasonably high probability of recidivism merely because there were too many other convicts with an unreasonably high probability of recidivism who were the same race.
You can also imagine what happens if you apply this recidivism "adjustment" to gender, which causes a lot of the people advocating it in the case of race to become nervous and defensive.
Especially when you consider fairness to the community at large. Is it fair to black neighborhoods if we send proportionally more expected recidivist drug dealers and rapists back into their communities than we do to white communities?
most of the standard metrics of fairness for machine learning don't just try to equalize proportions of positive/negative labels. they look at error rates.
under these measures of fairness, a perfectly accurate predictor is regarded as perfectly fair, regardless of a disparity in base rates in the two populations.
some of the predictive policing models still fail under these metrics -- they are more prone to make errors on black defendants.
> under these measures of fairness, a perfectly accurate predictor is regarded as perfectly fair, regardless of a disparity in base rates in the two populations.
Unless your predictor is perfectly accurate, the errors will be proportional to the base rate. If you're predicting that more X will do Y then you have more chances to be wrong.
Improving accuracy is the only real way to reduce the error rate. If you can't do that then you're left with malicious nonsense like fudging the base rate, which is just trading false positives for false negatives and not actually making anything better.
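A quick simulation of that tension (all rates invented): one perfectly calibrated predictor, one fixed threshold, two groups with different base rates, and the error rates still come out unequal.

```python
import random

random.seed(2)

# Invented mean risks for two populations. The predictor below is
# perfectly calibrated (it knows each person's true risk) yet still
# produces unequal error rates across the groups.
def simulate(mean_risk, n=100_000):
    fp = negatives = fn = positives = 0
    for _ in range(n):
        # Individual risk clustered around the group mean.
        risk = min(max(random.gauss(mean_risk, 0.15), 0.0), 1.0)
        reoffends = random.random() < risk
        flagged = risk > 0.5  # calibrated predictor, fixed threshold
        if reoffends:
            positives += 1
            fn += not flagged
        else:
            negatives += 1
            fp += flagged
    return fp / negatives, fn / positives

for name, mean in (("group A", 0.45), ("group B", 0.25)):
    fpr, fnr = simulate(mean)
    print(f"{name}: false positive rate {fpr:.0%}, "
          f"false negative rate {fnr:.0%}")
```

The higher-base-rate group gets a higher false positive rate even though every individual is scored identically, which is why "equalize error rates" and "stay calibrated" pull in opposite directions.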
That "smart"ness you describe is almost definitely what the algorithm's doing. And this is all well and good, until you take into consideration the fact that these systems tend to amplify existing biases.
If it spots a correlation, it'll amplify the correlation, regardless of whether it's actually meaningful. There are probably some originally spurious correlations that Facebook has amplified into existence, given how big and all-encompassing it is. It's the same problems that lead to racist AI judges.
I suspect it’s going to become a massive shakedown racket over the next decade; groups will go to tech companies and allege their algorithms are racist/sexist/whatever, and keep complaining until paid to go away.
There was targeted advertising in the '90s. A lot of it used direct mail and profiled consumers based upon their magazine subscriptions. But other things, like grocery store loyalty cards, were used back then too. Television, radio, and print advertising was and is targeted by demographic.
Literally the entire purpose of the magazine industry is targeted advertising. The only reason to publish something like Cosmo or Hustler is to sell ads targeting the types of people who read Cosmo or Hustler.
Advertising has always been discriminatory. Even in the 90s there were multiple billboards, newspapers, magazines, TV and radio stations in the world. Advertisers select what, when and where to advertise based on who they think is listening.
The only thing that has changed is how much we know about the "listener" (or page viewer). The nature of advertising is no different today than it was 100 years ago, only the resolution changed.
I don't see how you could even have ads that aren't discriminatory if "proxy characteristics that correlate with age or gender" is enough to be discriminatory? If you put an ad next to a barbershop, then that ad is targeting men through "proxy characteristics that correlate with gender", because it's next to a service that men are much more likely to visit. Is that illegal discrimination too? Are there officials that somehow calculate the locations where you are allowed to put advertising so that it wouldn't discriminate (close enough to shops that both men and women visit)?
That's why, among other things, Facebook made "a tool so you can search for and view all current housing ads in the US targeted to different places across the country, regardless of whether the ads are shown to you." https://about.fb.com/news/2019/03/protecting-against-discrim... So advertisers can target you, but they can't keep you from seeing an ad if you go looking for it.
If this tool exists and allows ads to be viewed with the same visibility as their normal targeting then why are they bothering with targeted ads anymore?
I believe the idea is that while the tool gives you an interface to find those ads, it doesn't replace the ads that you see during usage of the rest of Facebook.
So you're preferring the form of manipulation that works best on you?
I'd prefer not to be manipulated at all.
In fact I loathe the feeling of being manipulated so much that it extends to all but the least obnoxious brands that try to advertise to me.
I can't be the only one who is allergic to advertising that tries to appropriate things from my social groups or background to sell me their shitty mobile data plan, insurance, or phone.
The most recent example is some random phone company trying to sell me their data plan with some shitty pun on "cum laude" on posters at university. Ugh.
Yes. Rather than being peppered with ads for credit repair services, obscure/irrelevant drugs, or other things that I’m not interested in and therefore waste 100% of my time, I’d rather be marketed to about products/services I’d consider buying (or ads that will entertain me).
You can whitelist what they can target. I mean, if you really care about marketers, you can allow site owners to generate 'tags' that ad networks can use for that specific page, and the content of the tags can depend on the preferences of the user on that specific site.
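A sketch of what that tag mechanism could look like (every name here is hypothetical): the site declares contextual tags for a page, the network matches ads against the tags, and no cross-site user profile is involved.

```python
# The page (or the site, folding in preferences the user shared with
# that site) declares tags; the ad network matches ads against tags
# only, never against a cross-site user profile.

PAGE_TAGS = {
    "/articles/kernel-bypass": ["networking", "linux", "performance"],
}

ADS = [
    {"id": "ad-1", "keywords": {"linux", "devops"}},
    {"id": "ad-2", "keywords": {"fashion", "travel"}},
]

def ads_for(path: str) -> list[str]:
    """Return ad ids whose keywords overlap the page's declared tags."""
    tags = set(PAGE_TAGS.get(path, []))
    return [ad["id"] for ad in ADS if ad["keywords"] & tags]

print(ads_for("/articles/kernel-bypass"))  # -> ['ad-1']
```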
You can't understand because you're not reading between the lines. The surface message is self-contradicting, but the underlying is clear-cut.
They want a world where advertisers are not allowed to direct messages at their outgroup or the things their outgroup like. You will be allowed to sell perfume to women by putting a model on your billboard, but the same model can't sell Ferraris to young men. Shaving razors shall follow only the Gillette model, no glorification of masculinity. Want to advertise your engineering position on FB? Better prominently advertise your diversity programme and paid maternity leave and nothing else. Definitely don't mention beer night.
this is a pretty paranoid view. the restrictions only apply to very specific categories where there have been laws against discrimination for many decades. perfume and razors are not among them.
Specific categories such as employment and housing, for example? Urgh, fine, pretend my examples were about bachelor pads. Same deal: don't target the young men who actually buy them, because we don't like those people and we don't want any part of the economy to cater for them.
The people pushing aren't doing it because they're against any types of tenants either. They want all types of tenants to be able to get housing and take advantage of things that make it easier.
For stuff like "employment, housing and credit", it's illegal to discriminate in advertising. That's why they made a whole new portal to try to avoid discrimination for those kinds of ads. So after a while if you gather data and realize that your algorithm is sexist and ageist, then continuing to use that algorithm to place ads is knowingly using sexist and ageist techniques in advertising. I think the EEOC has the authority to determine how non-discriminating FB will have to be to avoid trouble.
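For context, the EEOC's usual screen for disparate impact in employment selection is the "four-fifths rule" from the Uniform Guidelines (29 CFR 1607.4). Whether it would ever be applied to ad delivery is exactly the open question, but the arithmetic is trivial (impression numbers invented):

```python
# Back-of-envelope disparate impact check in the style of the EEOC's
# four-fifths rule (29 CFR 1607.4). Whether the rule applies to ad
# delivery at all is an open legal question; the impression numbers
# below are invented for illustration.

# Hypothetical: a job ad was eligible to reach 1M men and 1M women,
# but was actually delivered at very different rates.
delivered = {"men": 180_000, "women": 30_000}
eligible = {"men": 1_000_000, "women": 1_000_000}

rates = {g: delivered[g] / eligible[g] for g in delivered}
ratio = rates["women"] / rates["men"]  # protected rate / reference rate

# Ratios below 0.8 are the usual regulatory red flag.
print(f"delivery-rate ratio: {ratio:.2f}"
      + (" -- below the 0.8 screen" if ratio < 0.8 else ""))
```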
Those regulatory concerns should fall onto the purchaser's list of responsibilities. The EEOC has the authority to censure the parties misusing their advertising options.
For example, as a marketer who does work in all of the above industries, technology should not limit my use of my dollar.
If I have $10,000 for an apartment campaign, I should be able to spend $5k on ads showcasing my apartment gym focused on men, and $5k on the view of the gym full of men targeted to women. (Tongue-in-cheek example.)
If the actual top level strategy is not discriminatory but rather is segmented to allow better messaging, any bullshit tech blocks shouldn't get in the way. Let the regulators do their jobs the old fashioned way.
You are right though, there are always ways to define the necessary audience using the type of qualifiers you describe.
Facebook has been insisting that non-discrimination should be the responsibility of the purchasers, but we've shown over [0] and over [1] again [2] that even when the advertiser targets all groups proportionally (no misuse of advertising options), Facebook subselects who to show their ads to in a skewed way, leaving the advertiser and the users no recourse.
I just read half of the last paper (and skimmed the other half). It is surprising that the paper does not use the term "information theory" even once. The research basically bumps into the established fact that properly hiding or destroying information is really hard.
The advertising industry has known this since ancient times. Military signals intelligence has essentially been built with that tidbit at its core. Even weak proxies are formidable if you have bucketloads of them to choose from and combine.
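A back-of-envelope naive Bayes demo of the "bucketloads of weak proxies" point, with invented feature probabilities: each feature barely separates the classes, but a few hundred together classify quite reliably.

```python
import math
import random

random.seed(3)

# 500 "interest" features, each only weakly gendered: present with
# probability p for one class and p + 0.05 for the other. All numbers
# invented; the point is how weak signals stack up.
N = 500
base = [random.uniform(0.2, 0.6) for _ in range(N)]
llr_present = [math.log((p + 0.05) / p) for p in base]
llr_absent = [math.log((1 - p - 0.05) / (1 - p)) for p in base]

def profile(is_male):
    return [random.random() < (p + (0.05 if is_male else 0.0))
            for p in base]

def says_male(features):
    # Naive Bayes: sum per-feature log-likelihood ratios, assuming
    # the features are independent.
    score = sum(lp if present else la
                for present, lp, la
                in zip(features, llr_present, llr_absent))
    return score > 0

trials = 1000
correct = sum(says_male(profile(m)) == m
              for _ in range(trials) for m in (True, False))
print(f"accuracy from {N} individually weak proxies: "
      f"{correct / (2 * trials):.0%}")
```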
Now, let's make one thing clear: I am not a FB apologist. In fact, I find the modern advertising systems abhorrent, immoral and outright vile. But even then, this article felt like the authors chose to miss the point. Hiding information is incredibly difficult - and conflating "information theory is damn hard" with "FB[ß] are evil and/or immoral" feels intellectually dishonest.
For what it's worth, I would actually love to see research _and_ well sourced articles about the practical net effects of information theoretical attacks, intentional or not, on the human populations as observed through the various e-stalking platforms.
ß: I'm using "FB" here as a shortcut for FANG+MS+others.
> Those regulatory concerns should fall onto the purchaser's list of responsibilities.
Firearm/drug/financial regulations have already ruined that approach for you, sadly.
It has been deemed time and again that the easiest place for regulators to apply pressure is on those producing the thing to be regulated. It is much easier to stop (undesirable thing) when the means of making (undesirable thing) happen are either tightly controlled or otherwise forbidden.
The smaller numbers of major players created by the necessarily high capital investment is much easier to control and surveil than having to keep every Tom, Dick, and Harry honest.
It's weird. I've been getting so disheartened as I branch out and diversify into various regulated fields of activity just to realize how much we have to hamstring ourselves to keep everyone honest; or at least to have a snowball's chance at figuring out what happened when something went wrong in order to ensure it won't happen again.
Half of me wants to chew my shoe over the degree of difficulty and implicit frustration encountered trying to get anything remotely compliant off the ground. The other half just kind of bows its head at the fact that the rule ended up coming into existence to solve one problem or another.
Cognitive dissonance and I seem to be full-time roommates nowadays, and it's rather exhausting. All part of growing up I suppose.
In my opinion, incorrect bias can happen only if either the input or process is bad. So, if you consider the output undesirable then there is a problem with input or process.
Ideally, ads would not be targeted at all in any way, and micro-targeting should be a crime (for privacy reasons as well). Short of that, you target people because of their proven individual preferences and tendencies, as opposed to using their membership in any larger group of any kind as an input variable. Regardless of how much accuracy it enables, associative generalization is the core problem here, even without protected classes like race. For example, I don't think it should be legal for any ad to target me because I am a member of HN; now, if by my explicit consent the ad system discovers I like specific topics which are also common on HN, then it can use those individual-level inputs WITHOUT associating them with my HN membership.
Essentially, the enemy here is the formation of social bubbles as a result of explicitly clustering individuals into easily targetable micro-groups. In other words, it is easier to sell ads that target specific micro-groups, but not so much ads that target specific individuals.
What a silly argument; the magazine is not the advertiser. It's fine for content to target groups, but not ads! You can have a men's magazine and the ads in it can target men; you can't have a men's magazine that sends out prints with ads targeting specific subsets of men. Content targeting is fine, user targeting is not.
What part doesn't make sense? It sends out different prints containing targeted ads to specific subscribers, as opposed to the advertiser targeting the content. Are you arguing just to prove some point?
No, as in specific subscribers would get specific ad pages. Like 'Joe' from Montana would have ads tailored to him based on his demographics and other data points collected on him. This doesn't happen with magazines, but I was saying that's the equivalent of targeted ads.
I did not say anything about the content of articles; my comment was entirely about ads. If the article covers some topic and the ad targets that topic, the ad is not creating any new bubble. If the ad targets the user specifically because the user belongs to some group, then ads are influencing specific groups in specific ways and forming new bubbles advantageous to the highest bidder.
An ad for something men would like in a men's magazine or on a men's website targets men because they are already visiting men-specific content. An ad for the same thing on a random news site, shown because the user is believed to be a man, influences the user, and other similar users are influenced in the same way, even when they're not interacting with content specific to their membership in some group. This seems harmless, but in reality the groups targeted are much more specific. So you may have ads targeting "white men between 25 and 30, with a college degree, living in an affluent neighborhood"; in reality you now have that specific micro-group being influenced differently, forming new bubbles. Men who don't fit those criteria are not targeted, and are not influenced to buy nice undies.

Replace undies with other more nefarious things (e.g., food, medication, housing, job opportunities, etc.) and you can appreciate how ads are essentially micro-segregating people into bubbles based on statistical presumptions. Now, if a user visits some content, ads relevant to that content make a lot of sense. The problem is the user being targeted when they don't interact with relevant content, and users interacting with relevant content not receiving relevant ads (e.g., a spouse reading about a gift for her husband won't see the nice undie ads).
Ads affect much more than what we buy; they affect our associations, preferences, and views on subject matters. Non-consensual surveillance (stalking) should not be used to influence very specific groups of people. Now, I don't get why you have a problem with that.
The perfect ad is one that gives the consumer exactly what they want from the supplier. Ads solve the asymmetric information problem on imperfect markets, which require information to be efficient.