I think the best outcome of this case would be for the societal impact of AI to be weighed fairly: its potential for good, and therefore the need not to overly stifle genuine progress, but also its potential to replace jobs held by real people with silicon owned by the hyper wealthy, and therefore the need to set the right precedent for attribution of works and some form of royalty. Most recent AI models are in some way trained on the collective works of mankind, so to keep the wealthy from running off with the whole planet, legislation needs to ensure a portion of AI profit is returned to the common man who doesn't have the aptitude or dumb luck to run a successful unicorn.
This is one corporation fighting another corporation; it isn't going to give journalists more money no matter what happens with this case. The impact on the citizen or worker hasn't even been weighed.
One landmark precedent opens the door to another, and so on. We're clearly not in a "fast takeoff" scenario, so mankind has time to strike a deal. We need to honestly weigh the dynamics of a world where climate change + societal collapse + the advent of AI + power hierarchy = let's just let billions eat shit while the ultra wealthy survive the apocalypse, thanks to leverage we handed them without blinking or thinking.