
Your response is exactly what I had in mind when I referred to people who are "skeptical of politics and trust the big brains at OpenAI more".

You aren't wrong that government regulation is not a great solution, but I believe it is - like democracy, and for the same reasons - the worst solution, except for all the others.

I don't disagree that using a non-profit to enforce self-regulation was "worth a shot", but I thought it was very unlikely to succeed, and indeed it had been failing at that goal for a long time. But I'm not mad at them for trying.

(I do think too many people used this as an excuse to argue against any government oversight, saying "we don't need that, we have a self-regulating non-profit structure!", and I think mostly cynically.)

> But it at least gives a chance that a potentially dangerous technology will go in the right direction.

I know you wrote this comment a full five hours ago and things have been moving quickly, but I think this needs to be in the past tense. It now appears clear that upwards of 90% of the OpenAI staff did not believe in this mission, and thus it was never going to work.

If you care about this, I think you need to be thinking about what else to pursue to give us that chance. I personally think government regulation is the only plausible option to pursue here, but I won't begrudge folks who want to keep trying more novel ideas.

(And FWIW, I don't personally share the humanity-destroying concerns people have; but I think regulation is almost always appropriate for big new technologies to some degree, and that this is no exception.)


