This is a good concept, but unfortunately it's using gpt-3.5-turbo. For this kind of task - actually understanding content and emitting a potentially novel-but-correct answer - you need gpt-4. But gpt-4 is quite slow and you'll quickly run into rate limiting.
I ran into these issues when building this for my own company's docs, at least.
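Roughly what I mean, as a sketch (assuming the pre-1.0 openai Python client; the exact helper and prompt wording here are just illustrative): point the same chat call at gpt-4 and wrap it in exponential backoff so the tighter rate limits don't kill the request.

    import time
    import openai  # pre-1.0 client assumed; the newer OpenAI() interface differs

    def ask_docs(question: str, context: str, max_retries: int = 5) -> str:
        """Answer a question against supplied docs text, retrying on rate limits."""
        delay = 2.0
        for _ in range(max_retries):
            try:
                resp = openai.ChatCompletion.create(
                    model="gpt-4",  # gpt-3.5-turbo is faster and cheaper, but weaker at novel-but-correct answers
                    messages=[
                        {"role": "system", "content": "Answer strictly from the provided documentation."},
                        {"role": "user", "content": f"Docs:\n{context}\n\nQuestion: {question}"},
                    ],
                    temperature=0,
                )
                return resp["choices"][0]["message"]["content"]
            except openai.error.RateLimitError:
                # gpt-4 rate limits are tight; back off exponentially and retry
                time.sleep(delay)
                delay *= 2
        raise RuntimeError("Rate-limited on every attempt; try again later or reduce request volume.")

Even with the backoff it's noticeably slower than gpt-3.5-turbo, so it's a quality-for-latency trade, not a free upgrade.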