• BakedCatboy@lemmy.ml · 2 months ago

      I just discovered how easy ollama and Open WebUI are to set up, so I’ve been running llama3 locally too; it was like 20 lines of docker compose. Although I’ve been using GPT-3.5 on and off for a long time, I’m much more comfortable with models run locally, so I’ve been playing with it a lot more. It’s also cool being able to easily switch models at any point during a conversation. I have about 15 models downloaded, mostly 7B and a few 13B, and they all run fast enough on CPU: they generate slightly slower than reading speed and only take ~15-30 seconds to start spitting out a response.
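      For anyone curious what "20 lines of docker compose" might look like: a minimal sketch of a `docker-compose.yml` wiring Open WebUI to ollama, assuming the projects' published images and default ports (adjust volumes and ports to taste):

      ```yaml
      # Sketch only: image names, ports, and the OLLAMA_BASE_URL variable
      # reflect the projects' documented defaults, not this commenter's setup.
      services:
        ollama:
          image: ollama/ollama
          volumes:
            - ollama:/root/.ollama   # persist downloaded models
          ports:
            - "11434:11434"          # ollama's default API port
        open-webui:
          image: ghcr.io/open-webui/open-webui:main
          ports:
            - "3000:8080"            # browse the UI at http://localhost:3000
          environment:
            - OLLAMA_BASE_URL=http://ollama:11434
          depends_on:
            - ollama
      volumes:
        ollama:
      ```

      After `docker compose up -d`, models can be pulled from the WebUI or with `docker compose exec ollama ollama pull llama3`.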

      Next I want to set up a vscode plugin so I can use my own locally run codegen models from within vscode.
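      One way to do that is an extension that speaks to ollama's local API; for example, the Continue extension can point at a local model with a config along these lines (a sketch, assuming Continue's JSON config format and an ollama server on its default port; the model name is whatever you've pulled):

      ```json
      {
        "models": [
          {
            "title": "Llama 3 (local)",
            "provider": "ollama",
            "model": "llama3"
          }
        ]
      }
      ```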

    • Larry@lemmy.world · 2 months ago

      I tried the LLaMA models when they were initially released, and it seemed like running them took absurd amounts of GPU. Did that change?

      • Womble@lemmy.world · 2 months ago

        Look into quantised models (like the gguf format); these significantly reduce the amount of memory needed and speed up computation at the expense of some quality. If you have 16GB of RAM or more you can run decent models locally without any GPU, though your speed will be more like one word a second than ChatGPT speeds.
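        The memory savings are easy to ballpark: weight memory is roughly parameter count times bits per weight. A quick back-of-envelope sketch (weights only; the KV cache and runtime overhead add more on top):

        ```python
        # Rough estimate of RAM/VRAM needed just for a model's weights
        # at different quantisation levels. 1e9 params at 16 bits = 2 GB.
        def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
            bytes_total = params_billion * 1e9 * bits_per_weight / 8
            return bytes_total / 1e9  # decimal GB

        for bits, label in [(16, "fp16"), (8, "Q8"), (4, "Q4, a common gguf level")]:
            print(f"7B @ {label}: ~{weight_memory_gb(7, bits):.1f} GB")
        # fp16 needs ~14 GB for a 7B model; 4-bit quantisation cuts that to ~3.5 GB,
        # which is why a 16GB-RAM machine can hold a 7B or even 13B model on CPU.
        ```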