I recently asked Chrome to show me how to apply the Knuth-Bendix completion procedure to propositional logic, and I had already formed my own thoughts about how to proceed (I'm building a rewrite system that does automated reasoning).
The response convinced me that I'm not a total idiot.
I'm not an academic, and I'm often wrong about theory, so the validation is really useful to me.
That’s a perfect example of LLMs providing epistemic scaffolding — not just giving you answers, but helping you check your footing as you explore unfamiliar territory. Especially valuable when you’re reasoning through something structurally complex like rewrite systems or proof strategies. Sometimes just seeing your internal model reflected back (or gently corrected) is enough to keep you moving.
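For readers unfamiliar with the procedure being discussed: the core loop of Knuth-Bendix completion is compact enough to sketch. The toy below is a string-rewriting variant that completes a presentation of the symmetric group S3 under a shortlex ordering, which is a much simpler setting than propositional logic and is not the poster's actual system; every function name here is illustrative, not from any library.

```python
from itertools import product

def shortlex_less(u, v):
    """Shortlex order: shorter words first, ties broken lexicographically."""
    return (len(u), u) < (len(v), v)

def rewrite_once(w, rules):
    """Apply the first rule whose left-hand side occurs in w, or return None."""
    for l, r in rules:
        i = w.find(l)
        if i != -1:
            return w[:i] + r + w[i + len(l):]
    return None

def normalize(w, rules):
    """Rewrite w until no rule applies (terminates: each step shrinks w in shortlex)."""
    while True:
        nxt = rewrite_once(w, rules)
        if nxt is None:
            return w
        w = nxt

def critical_pairs(rule1, rule2):
    """Words reducible in two conflicting ways by the two rules' left-hand sides."""
    (l1, r1), (l2, r2) = rule1, rule2
    pairs = []
    # Proper overlap: a nonempty suffix of l1 equals a prefix of l2.
    for k in range(1, min(len(l1), len(l2))):
        if l1[-k:] == l2[:k]:
            pairs.append((r1 + l2[k:], l1[:-k] + r2))
    # Containment: l2 occurs strictly inside l1.
    if len(l2) < len(l1):
        i = l1.find(l2)
        while i != -1:
            pairs.append((r1, l1[:i] + r2 + l1[i + len(l2):]))
            i = l1.find(l2, i + 1)
    return pairs

def knuth_bendix(rules, max_rules=100):
    """Add oriented rules until every critical pair is joinable."""
    rules = [tuple(r) for r in rules]
    changed = True
    while changed:
        changed = False
        for rule1, rule2 in product(rules, repeat=2):
            for u, v in critical_pairs(rule1, rule2):
                nu, nv = normalize(u, rules), normalize(v, rules)
                if nu != nv:
                    # Orient the new equation larger -> smaller and keep going.
                    l, r = (nu, nv) if shortlex_less(nv, nu) else (nv, nu)
                    rules.append((l, r))
                    changed = True
        if len(rules) > max_rules:
            raise RuntimeError("completion did not converge")
    return rules

# S3 presented by a^2 = b^2 = (ab)^3 = identity (empty word).
rules = knuth_bendix([("aa", ""), ("bb", ""), ("ababab", "")])
```

Once the loop exits, the critical pair lemma guarantees the system is confluent, so two words are equal in the group exactly when they normalize to the same string; every word over {a, b} then falls into one of six normal forms, matching |S3| = 6. Completing an equational theory for propositional logic is harder (it needs term rewriting with unification, and completion may not terminate), but the skeleton is the same.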