No you can't, an LLM doesn't remember what it thought when it wrote what it did before; it just looks at the text and tries to come up with a plausible answer. LLMs don't have a persistent mental state, so there is nothing to interrogate.
Interrogating an LLM is like asking a person to explain another person's reasoning or answer. Sure, you will get something plausible-sounding from that, but it probably won't be what the person who first wrote it was thinking.
This is not correct. You can get an LLM to improve reasoning through iteration and interrogation. By changing the content in its context window you can evolve a conversation quite nicely and get elaborated explanations, reversals of opinions, etc.
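Both sides here are describing the same mechanism. A minimal sketch of a stateless chat loop, with a hypothetical `generate` function standing in for a real model call: the only "memory" between turns is the transcript the caller re-sends, which is exactly the context-window evolution described above.

```python
# Stateless chat loop: the model keeps no state between calls.
# `generate` is a hypothetical stand-in for a real LLM call; a real model
# would condition only on the `context` string it is handed each time.

def generate(context: str) -> str:
    # Stub: just report how much context was seen, to make the point
    # that everything the "model" knows arrives via this one argument.
    return f"[reply conditioned on {len(context)} chars of context]"

def chat(user_turns):
    transcript = []
    for user_msg in user_turns:
        transcript.append(f"User: {user_msg}")
        # The full history is concatenated and re-sent on every turn;
        # nothing persists inside `generate` between calls.
        context = "\n".join(transcript)
        transcript.append(f"Assistant: {generate(context)}")
    return transcript

log = chat(["Explain X.", "Why did you say that?"])
```

So "interrogating" the model works only in the sense that the follow-up question plus the earlier text becomes new input; the model is not recalling what it was "thinking" when it produced the earlier reply.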