Hacker News — CalmDream's comments

Most of the stuff you pointed out is addressed in a series of blog posts by Lattner: https://www.modular.com/democratizing-ai-compute


Many of those posts are opinionated, and some are provably wrong. The very first one, about Deepseek's "recent breakthrough," was never proven or replicated in practice. He's drawing premature conclusions, ones that look especially silly now that we know Deepseek evaded US sanctions to import Nvidia Blackwell chips.

I can't claim to know more about GPU compilers than Lattner, but in this specific instance I think Mojo fucked itself and is at the mercy of hardware vendors that don't care about it. CUDA, by comparison, has no expense spared in its development at every layer of the stack. There's no real comparison with Mojo; the project is doomed if it intends to compete with CUDA head-on.


What is provably wrong?


I think this might be a good introduction to GPU programming: https://builds.modular.com/puzzles/introduction.html. It explains GPU concepts in a hardware-agnostic way and verifies understanding with implementation puzzles. It is based on https://github.com/srush/GPU-Puzzles, which is CUDA specific.
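To give a flavor of what those puzzles teach, here is a minimal plain-Python sketch of the SIMT mental model behind the first "map" exercise (each logical thread writes one output element). The `launch`, `add_ten_kernel`, and `thread_idx` names are illustrative, not the actual API of either puzzle set:

```python
def launch(kernel, num_threads, *arrays):
    # A real GPU would run these logical "threads" in parallel;
    # here we just loop sequentially to simulate the model.
    for thread_idx in range(num_threads):
        kernel(thread_idx, *arrays)

def add_ten_kernel(thread_idx, inp, out):
    # Each thread handles exactly one element: the classic "map" puzzle.
    out[thread_idx] = inp[thread_idx] + 10

inp = [0, 1, 2, 3]
out = [0] * len(inp)
launch(add_ten_kernel, len(inp), inp, out)
print(out)  # [10, 11, 12, 13]
```

The point of the puzzles is that once you think in terms of "one thread per output index," the same kernel logic maps onto CUDA's thread hierarchy (or Mojo's GPU abstractions) with little change.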

