So this implies that instead of spending resources on training bigger and bigger LLMs, AI practitioners may need to shift focus to developing “ontological” and “epistemological” control loops that run on top of the LLM. I suspect they already have rudimentary versions of such control loops. In a sense, the “easier” part of AI may be a largely “solved” problem, leaving the development of “consciousness” as the remaining, and obviously much harder, problem.
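Purely as speculation, here's a toy sketch of what such an "epistemological" loop might look like; llm() and verify() are stubs I made up, not any real API:

    # Hypothetical control loop wrapped around an LLM.
    # llm() and verify() are invented stubs, not a real model or library call.

    def llm(prompt):
        # Stand-in for an actual model call.
        return "some draft answer"

    def verify(answer, knowledge_base):
        # Check the draft against what the system holds to be true.
        # A real version would need structured claim extraction, not string lookup.
        return answer in knowledge_base

    def controlled_answer(question, knowledge_base, max_rounds=3):
        prompt = question
        for _ in range(max_rounds):
            draft = llm(prompt)
            if verify(draft, knowledge_base):
                return draft
            # Feed the failure back and ask the model to revise.
            prompt = question + "\nYour previous answer failed verification: " + draft
        return "no verified answer found"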
When I studied NLP, language models were only one part of a chatbot system, used to handle language input and output. The "internal" reasoning would be handled by a knowledge representation system. I'd guess that's the component closest to a true general AI.
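As a toy illustration of that split (every name below is made up for the example, not a real framework):

    # Classic pipeline sketch: the language model only touches surface text,
    # while the "thinking" happens against an explicit knowledge base.

    KB = {("france", "capital", "paris"), ("germany", "capital", "berlin")}

    def understand(text):
        # NLU: map "What is the capital of France?" to a structured query.
        country = text.rstrip("?").split()[-1].lower()
        return ("capital", country)

    def reason(query):
        # Knowledge representation / inference: here just a triple lookup.
        rel, country = query
        for subj, r, obj in KB:
            if subj == country and r == rel:
                return obj
        return None

    def generate(answer):
        # NLG: turn the structured answer back into text.
        return answer.capitalize() + "." if answer else "I don't know."

    print(generate(reason(understand("What is the capital of France?"))))  # Paris.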
The first-order predicate logic we studied had a lot of limitations in fully expressing real-world knowledge: for example, FOL is monotonic, so adding a new fact can never retract an old conclusion, which makes even simple defaults like "birds fly, but penguins don't" awkward to express. Developing better models delves deep into the foundations of logic and mathematics. I would imagine this is a problem that has less to do with funding than with requiring literal geniuses to solve. And that goes back to the pitfalls of the AI winters.
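To make the monotonicity point concrete, here's a naive forward chainer (purely illustrative, not a real theorem prover):

    # FOL is monotonic: adding facts can only ADD conclusions, never retract them.
    # Rules: Bird(x) -> Flies(x), Penguin(x) -> Bird(x), Penguin(x) -> ~Flies(x).

    rules = [
        (("bird", "X"), ("flies", "X")),
        (("penguin", "X"), ("bird", "X")),
        (("penguin", "X"), ("not_flies", "X")),
    ]

    def forward_chain(facts):
        # Keep applying rules until nothing new can be derived (a fixpoint).
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for (premise, _), (conclusion, _) in rules:
                for rel, arg in list(derived):
                    if rel == premise and (conclusion, arg) not in derived:
                        derived.add((conclusion, arg))
                        changed = True
        return derived

    print(forward_chain({("bird", "tweety")}))
    # {('bird', 'tweety'), ('flies', 'tweety')} -- fine so far.
    print(forward_chain({("penguin", "tweety")}))
    # Contains BOTH ('flies', 'tweety') and ('not_flies', 'tweety'):
    # classical FOL has no clean way to say "birds fly, except penguins".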