
Could you stick a torch.compile call in the inference and training code, maybe gated behind a flag? This should significantly help performance on AMD/Nvidia (and probably other vendors soon).

PyTorch themselves used nanoGPT training as a demo for this: https://pytorch.org/blog/accelerating-large-language-models/
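
For what it's worth, the gating could be as simple as this (a minimal sketch; the --compile flag and the stand-in model are hypothetical, not code from the repo):

    import argparse
    import torch
    import torch.nn as nn

    parser = argparse.ArgumentParser()
    parser.add_argument("--compile", action="store_true",
                        help="JIT-compile the model with torch.compile (PyTorch >= 2.0)")
    args = parser.parse_args()

    model = nn.TransformerEncoderLayer(d_model=64, nhead=4)  # stand-in for the real model
    if args.compile:
        # returns an optimized wrapper; the first forward pass triggers compilation
        model = torch.compile(model)

    x = torch.randn(8, 16, 64)   # (batch, seq, d_model) dummy input
    y = model(x)                 # same call site whether compiled or eager

Keeping it off by default also avoids the compilation latency on the first few steps during quick debugging runs.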


