I've hacked together a basic Emacs/Ollama API integration that does simplistic code completion against a local LLM, adapted from someone else's Copilot example. It's slower than I'd like (typically about 7 seconds per inference on my M1 Mac) and very stupid about what context it sends, but nevertheless: it's just, and only just, enough to be useful. Hadn't considered publishing it because it relies on a Python façade to convert Copilot-style requests and responses back and forth to Ollama, but if there's interest I'll spruce it up and get it out.
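
For the curious, the façade is essentially a tiny HTTP shim. Here's a minimal sketch of the idea, not the actual code: it assumes a simplified Copilot-style JSON request/response shape (the real Copilot protocol is LSP-based and messier) and Ollama's standard `/api/generate` endpoint; the model name, route, and port below are placeholders.

```python
# facade.py -- sketch of a Copilot-style -> Ollama translation shim.
# Assumes a simplified request shape: {"prompt": "..."}. The real
# Copilot protocol is more involved; this is illustrative only.
import requests
from flask import Flask, jsonify, request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint
MODEL = "codellama"  # placeholder; any local completion model works

app = Flask(__name__)

@app.post("/v1/completions")
def complete():
    body = request.get_json(force=True)
    prompt = body.get("prompt", "")
    # Forward the prompt to Ollama, non-streaming for simplicity.
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=60,
    )
    resp.raise_for_status()
    completion = resp.json()["response"]
    # Wrap the raw completion back into a Copilot-ish response shape.
    return jsonify({"choices": [{"text": completion}]})

if __name__ == "__main__":
    app.run(port=5000)
```

The Emacs side then only has to POST the buffer text around point to this shim and splice the returned text in, which is where most of the "stupid about context" lives.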