
> If i were a shareholder i can see how i may have questioned why a person being paid $1M+/year (my understanding this is minimum what a manager in AI at Google would be making) for publicly disparaging Google.

Salary aside (I doubt she earned $1M+/year; my guess is more in the ballpark of $300k–$500k, and either way it's not really denting Google's finances), you are not wrong. But it's also worth recognizing that we're entering the realm of the idea that companies can (and for many reasons should) be about more than maximizing shareholder value.

Also, if I'm being completely honest, from a PR perspective this could be worse than Timnit's paper ever would have been, given how public the dispute has become and the people involved. People internally may be more comfortable having the paper unpublished and Timnit out of their ranks, but as PR for Google goes, this isn't great.



Yes, this is absolutely far worse than just letting the paper be published. AI ethics papers rarely generate much conversation outside that world, even at the best of times, but Google firing a Black woman for speaking up is the kind of thing that definitely does get talked about (as we can see here).

But that aside, Google should want this kind of paper published. They absolutely should want to know and discuss every possible weakness in the ethics of their approach to AI. Google's scale of influence is so large that how they act in areas like AI trickles down to many other organisations. To me, that gives them a responsibility to make their approach as ethical as reasonably possible, and that will only happen if experts are allowed to speak freely.

One can make short-term arguments about how that hurts them, but the long-term damage of getting massive AI systems wrong will be far, far worse.



