Tiny changes in how you frame the same query can produce predictably different answers, because the LLM tries to infer your underlying expectations from the framing itself.