AMD dropped ROCm support for Radeon VII (gfx906) at v6.2.4. I wanted to run local LLMs on it anyway.
Found a community-maintained image on r/LocalLLaMA that packages ROCm 7.1 with llama.cpp for gfx906. One docker pull later, I had llama.cpp + Ollama + Qdrant + Open WebUI running on "unsupported" hardware.
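If you want to try something similar, the launch pattern is the standard one AMD documents for ROCm containers: pass the GPU devices through and add the video/render groups. A minimal sketch (the image name here is a placeholder, not the actual community image):

docker run -it \
  --device=/dev/kfd --device=/dev/dri \
  --group-add video --group-add render \
  --security-opt seccomp=unconfined \
  example/rocm-gfx906-llama-cpp   # placeholder image name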
Docker Hub is a library of solved problems.
Full story: https://bit.ly/4pTk3zf
#Docker #DockerCaptain #LocalLLM #AMD #ROCm #OpenSource #SelfHosted #MachineLearning