simonw
9 months ago
on: Running Qwen3 on your macbook, using MLX, to vibe ...
The one feature missing from LLM core for this right now is serving models over a local, OpenAI-compatible HTTP server. There's a plugin you can try for that, though:
https://github.com/irthomasthomas/llm-model-gateway