I guess the value in reporting it is that for most people, and for us on HN as well, computing is considered accurate. You can trust the output if you trust the input and the program that processes it. That is what we expect and value in computing - accuracy.
For LLMs that's no longer really the case, and it needs to be highlighted that "computers" no longer necessarily produce accurate output, so that not too much faith is put in what they produce.
> "computers" no longer necessarily produce accurate output
This was always the case. Just because a computer executes your model doesn't mean your model has any bearing on reality. This is not a new phenomenon.
The story isn't about LLMs doing LLM stuff. It's about lawyers using LLMs as a shortcut for proper legal work, laboring under the delusion that they are entirely accurate, honest, and 'intelligent', and about the ramifications for the legal system.