Different compiler versions, target architectures, or optimization levels can generate substantially different assembly from the same high-level program. Determinism is thus tightly scoped, not absolute.
Also, almost every piece of software has known unknowns among its dependencies, which are constantly updated; no one can read all of that code. So in practice, if you compile on a different system ("works on my machine"), or on the same system after some time has passed (updates to the compiler, OS libraries, packages), you will get a different checksum for your build even though the high-level code you wrote is unchanged. In theory, given perfect conditions, you are right, but in practice it is not the case.
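To illustrate (a toy simulation, not a real compiler): if the build step embeds anything time- or environment-dependent into the artifact, as real toolchains often do with timestamps, paths, or compiler versions, then identical source yields different checksums:

```python
import hashlib
import time

def build(source: bytes) -> str:
    # Simulated build step (hypothetical): real toolchains often embed a
    # timestamp, build path, or compiler version into the binary.
    artifact = source + f"\n# built at {time.time()}".encode()
    return hashlib.sha256(artifact).hexdigest()

src = b"print('hello')"
h1 = build(src)
time.sleep(0.01)          # ensure the embedded timestamp changes
h2 = build(src)
print(h1 == h2)           # False: same source, different checksums
```

This is why reproducible-builds efforts focus on stripping or pinning exactly this kind of embedded metadata.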
There are established benchmarks for code generation (such as HumanEval, MBPP, and CodeXGLUE). On these, given the same prompt, the vast majority of an LLM's completions are consistent and pass the unit tests; for many tasks, the same prompt produces a passing solution over 99% of the time.
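As a rough sketch of what "consistent" means here (a hypothetical harness, not any benchmark's actual code): sample the same prompt N times and measure the fraction of completions that pass the task's unit tests.

```python
def consistency_rate(generate, prompt, passes_tests, n=100):
    """Fraction of n samples for the same prompt that pass the tests.

    generate: model sampling function (hypothetical stand-in)
    passes_tests: runs the task's unit tests on one completion
    """
    results = [passes_tests(generate(prompt)) for _ in range(n)]
    return sum(results) / n

# Usage with stub functions standing in for a real model and test suite:
rate = consistency_rate(
    generate=lambda p: "def add(a, b): return a + b",
    prompt="write add(a, b)",
    passes_tests=lambda code: "return a + b" in code,
    n=10,
)
print(rate)  # 1.0 for this deterministic stub
```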
I would say yes, there is a gap in determinism, but it's not as large as one might think, and it's narrowing over time.
That's not what's happening. Using your analogy, you're reading the book, remembering it, and then making N more books (maybe for commercial purposes) by using what you remembered to create something very similar in style, prose, plot, and so on. As a result, the person you learned from can't get a job writing anymore because their work has been commoditized. They also feel lost, because the work they devoted their lives to is now awash in a sea of similar work.
Just a thought: avoiding political bias could be its own form of bias. I would think the goal is mostly to get "correct" answers based on facts, logic, and so on. If a "correct" answer can be computed (whatever that means for an LLM), it might be biased but still desirable.
This sounds like a category error to me. LLMs output string completions. Humans should judge whether the set of outputs is representative of what humans would write. All humans, not just the subset of liberal humans.
The bias is thinking that liberal ideas are the correct ideas.
If others are having this issue: I deleted all forms of local storage (cookies, IndexedDB, etc.) for the mail.google.com site, and that seems to have fixed it. It might be relevant that this happened after I upgraded Firefox.