
The only issue I've found with llama.cpp is getting it working with my AMD GPU. Ollama almost works out of the box, both in Docker and directly on my Linux box.
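For reference, roughly the Docker invocation I mean; this is a sketch, so treat the image tag (ollama/ollama:rocm) and device paths as assumptions to check against your own setup:

    # Pass the AMD compute devices through to the container
    # (ollama/ollama:rocm is Ollama's ROCm-enabled image tag)
    docker run -d \
      --device /dev/kfd \
      --device /dev/dri \
      -v ollama:/root/.ollama \
      -p 11434:11434 \
      --name ollama \
      ollama/ollama:rocm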


> The only issue I've found with llama.cpp is getting it working with my AMD GPU.

I had no problems with ROCm 6.x but couldn't get it to run with ROCm 7.x. I switched to the Vulkan backend and the performance seems fine for my use cases.
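If anyone wants to try the same switch, this is roughly what the builds look like; a sketch assuming a recent llama.cpp checkout (the flag names have changed across releases, e.g. the HIP one used to be LLAMA_HIPBLAS, and gfx1100 is just a stand-in for your actual GPU target):

    # Vulkan backend (needs the Vulkan SDK/headers installed)
    cmake -B build -DGGML_VULKAN=ON
    cmake --build build --config Release -j

    # ROCm/HIP backend, for comparison
    cmake -B build-rocm -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1100
    cmake --build build-rocm --config Release -j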



