Yep, I fell for it this week. Spent an hour fixing typos and minor bugs in their code before taking a step back and realising most of it was flawed.
What I believe they're doing is feeding papers to an LLM as soon as they come out in order to get a repo they can advertise. Once someone releases a working implementation, they just copy it over.
I was able to generate almost identical code to what they released by giving ChatGPT pseudocode copied verbatim from the original paper.