
Yeah, that’s why I said waiting a year after Llama v1 was good. By that point llama.cpp, LM Studio and Ollama were all pretty well established, and a lot of the low-hanging fruit around performance and memory mapping had already been picked.
