> There are not enough people in that job that are not white and male.
Have you looked at the actual demographics of medical doctors in the US? 54% are women, and 35% are nonwhite. But when we have media depictions of doctors, I agree they tend to be white and male.
So, what should DALL-E conform to? Should it conform to A) our actual present society, B) the biased original dataset (which leans both towards the past and towards existing media biases), or C) some idealized version of society?
When I tried this just now, I got 12 white dudes, one Southeast Asian woman, one Southeast Asian-looking man, and two men whose race I couldn't determine (quite possibly white). This is despite OpenAI's efforts to debias it, and it isn't representative of current physician demographics.
But if AI just represents and reinforces extant biases -- and worse, AI-produced art and text ends up in other AIs' training sets -- how do we ever get out of this mess? The people who produce, publish, and productize AI do have some degree of editorial responsibility.
> But it turns out there is no "morally neutral".
Of course not. Hume pointed out long ago that you can't derive normative statements from positive ones.
But all of this is a little off-topic anyway. This is about when it's reasonable to refuse to return a result. "Hey, your answer had the N-word in it, and we know that most of the time our model does that, it's offensive -- so we're just not going to return a result, sorry." I think this is a reasonable path to take when you know that your model has some behaviors that are socially questionable.
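That kind of refusal can be as simple as a post-hoc check on the model's output. A minimal sketch, assuming a plain blocklist approach (the `BLOCKLIST` contents and the surrounding model call are placeholders, not anyone's actual implementation):

```python
# Hypothetical output-refusal filter: return the model's text,
# or None to signal "we're not going to return a result."
BLOCKLIST = {"slur1", "slur2"}  # stand-ins for terms the provider refuses to emit

def filter_output(text: str):
    """Refuse outputs containing any blocklisted term."""
    # Normalize tokens so punctuation and case don't let terms slip through.
    tokens = {t.strip(".,!?\"'").lower() for t in text.split()}
    if tokens & BLOCKLIST:
        return None  # refuse the whole result
    return text
```

Real systems use classifiers rather than word lists (word lists famously have false-positive problems), but the shape of the decision -- inspect the output, sometimes refuse -- is the same.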
>54% are women, and 35% are nonwhite. But when we have media depictions of doctors, I agree they tend to be white and male
What's the issue? I used to watch a lot of medical dramas on TV and in my opinion the black rockstar MDs are way overrepresented in comparison to their real-life numbers:
5.0% in real life in the US in 2018[1] vs. 19.4% on TV[2]
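For what it's worth, the gap between those two figures works out to roughly a 4x over-representation:

```python
# Over-representation ratio from the figures above:
# 19.4% of TV doctors vs. 5.0% of real-life US physicians (2018).
real_share = 0.050
tv_share = 0.194
ratio = tv_share / real_share
print(round(ratio, 1))  # about 3.9x
```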
> What's the issue? I used to watch a lot of medical dramas on TV and in my opinion the black rockstar MDs are way overrepresented in comparison to their real-life numbers:
Well, this clearly isn't the case in the DALL-E training dataset, because "medical doctor" overwhelmingly yields white dudes-- even after OpenAI's effort at removing bias.