If you’re setting up from scratch I recommend Open WebUI: you can install it with GPU support on Docker/Podman with a single command, then quickly add any of the ollama models through its UI.
Thanks, I did get both set up with Docker; my frustration was that neither ollama nor open-webui included instructions on how to set them up together.
In my opinion setup instructions should guide you to a usable setup. It’s a missed opportunity not to include a docker-compose.yml connecting the two. Is anyone really using ollama without a UI?
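For what it’s worth, here is a minimal sketch of the kind of docker-compose.yml the docs could ship. The image names, ports, and the OLLAMA_BASE_URL variable are the published defaults, but the service and volume names are just illustrative choices:

```yaml
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama
    # Uncomment for NVIDIA GPU support (requires nvidia-container-toolkit):
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: all
    #           capabilities: [gpu]

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    environment:
      # Point the UI at the ollama service over the compose network
      - OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama

volumes:
  ollama:
  open-webui:
```

Then `docker compose up -d` starts both, and the UI is reachable at http://localhost:3000.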
The link I posted has a command that sets them up together though; then you just go to http://localhost:3000/