I haven't seen a massive correlation between LLM popularity and reddit bots. A good old Markov chain can simulate the average reddit thread, and the botting issue has been around for quite a long time.
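If anyone's curious what that looks like, here's a toy sketch of a word-level Markov chain generator. To be clear, this isn't any particular bot's code, and the corpus at the bottom is just hypothetical scraped comment text:

    import random
    from collections import defaultdict

    def build_chain(corpus, order=2):
        """Map each run of `order` consecutive words to the words seen after it."""
        words = corpus.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            key = tuple(words[i:i + order])
            chain[key].append(words[i + order])
        return chain

    def generate(chain, length=30):
        """Random-walk the chain: the next word depends only on the last `order` words."""
        state = random.choice(list(chain.keys()))
        out = list(state)
        for _ in range(length):
            candidates = chain.get(state)
            if not candidates:
                break
            out.append(random.choice(candidates))
            state = tuple(out[-len(state):])
        return " ".join(out)

    # comments = some pile of scraped thread text (hypothetical)
    # print(generate(build_chain(comments)))

Train it on a big enough subreddit dump and the output reads about as coherently as a lot of what gets posted.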
Bots used to just take other popular comments and repost them, either in wholesale copies of entire threads, as in the example here, or by taking a top comment in a new thread and reposting it elsewhere in the same thread. Now they're using LLMs to rephrase comments to try to avoid detection (though the rephrased versions often sound a bit off, so they're still sometimes easy to spot).
Are LLMs not just fancy Markov chains?
They're next-token predictors: the model carries an internal state (the context so far, plus its hidden activations), maps that state to a probability distribution over the next token, and the sampled token feeds back into the state for the following step. In that sense they do behave like a Markov chain, just one whose state is the whole context window rather than the last couple of words.
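Concretely, the generation loop looks roughly like this. Just a sketch using Hugging Face's GPT-2 as a stand-in model; the point is that each step conditions on everything generated so far, not only the previous token:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    ids = tokenizer("I haven't seen a massive correlation", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(30):
            logits = model(ids).logits[:, -1, :]               # scores for the next token
            probs = torch.softmax(logits, dim=-1)              # probability distribution
            next_id = torch.multinomial(probs, num_samples=1)  # sample one token
            ids = torch.cat([ids, next_id], dim=1)             # the full context is the "state"

    print(tokenizer.decode(ids[0]))

The distribution at each step depends on the entire `ids` tensor, which is why the chain analogy only really holds if you let the "state" be the full context rather than the previous word.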