Has the developer shared anything about the earlier conversation on policy violations? I just see https://mobile.twitter.com/jasonrohrer/status/14331194531186... which is the (not very interesting) step escalating from "you're violating policies" to "you're kicked off".
That may be true for art. But when art crosses into the territory of technology, it needs to compromise.
Artists need to play by the same rules as everyone else: their art must not harm people in the short or long term.
There's also no such thing as dangerous technology. There's no such thing as a dangerous idea. Every discovery, every article, every book, every tutorial is good. Knowledge helps us all.
If a company developed a new kind of gun which they rented access to, and an artist wanted to leave it unattended to see whether anyone misused it, it wouldn't be surprising if the gun company refused to continue providing access to the artist.
Now, we can argue about whether the particular thing this artist wanted to do was actually unsafe, and whether OpenAI has a reasonable policy on this, but do you agree that technologies can be dangerous?
No. Weapons can be dangerous. Technologies cannot be. Leaving a textbook around unattended is never dangerous. Did OpenAI shut down a weapon? No. They just shut down somebody's chatbot.
I don't think I understand the division you're drawing. Is it that information on how to do something cannot be dangerous, but a system that does something can be? (But what OpenAI shut down was the latter.) Is it that if something is dangerous we call it a weapon, while otherwise we call it a technology? (But then you're just playing no true Scotsman.)