
I've hacked together a basic Emacs Ollama API integration that does simplistic code completion against a local LLM, adapted from someone else's Copilot example. It's slower than I want (about 7 seconds per inference on my M1 Mac, typically) and very stupid about what context it sends, but nevertheless: it's just, and only just, enough to be useful. I hadn't considered publishing it because it relies on a Python façade to convert copilot-style requests and responses back and forth to Ollama, but if there's interest I'll spruce it up and get it out.
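For a sense of what that façade has to do, here's a minimal sketch of the two translation steps (the function names, the `model` default, and the copilot-style field names are my assumptions, not the author's actual code; the Ollama side targets the real `/api/generate` schema with `model`, `prompt`, `suffix`, `stream`, and `options.num_predict`):

```python
# Hypothetical sketch of a copilot-to-Ollama request/response translation.
# In a real façade these would sit behind an HTTP handler that POSTs the
# translated body to http://localhost:11434/api/generate.

def copilot_to_ollama(body, model="codellama"):
    """Map a copilot-style completion request onto Ollama's /api/generate body."""
    return {
        "model": model,
        "prompt": body.get("prompt", ""),
        # fill-in-the-middle context, if the editor sent any
        "suffix": body.get("suffix", ""),
        # non-streaming keeps the façade simple; streaming needs NDJSON handling
        "stream": False,
        "options": {"num_predict": body.get("max_tokens", 64)},
    }

def ollama_to_copilot(resp):
    """Wrap Ollama's generated text in a copilot-style choices list."""
    return {"choices": [{"text": resp.get("response", "")}]}
```

The interesting design question is context: whatever the editor puts in `prompt` and `suffix` is all the model sees, which is why a naive façade is "very stupid about what context it sends".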


From downthread: just use ellama. It's further ahead than mine by the looks of things.

