It's a weird circle with these things. If you _can't_ do the task you are using the LLM for, you probably shouldn't.
But if you can do the task well enough to at least recognize likely-to-be-correct output, then you can get a lot done in less time than it would take without the assistance.
Is that worth the second-order effects we're seeing? I'm not convinced, but it's definitely changed the way we do work.