I was experimenting with oobabooga, trying to run this model, but due to its size it wasn’t going to fit in RAM, so I tried to quantize it using llama.cpp. That worked, but because of the GGUF format it would only run on the CPU. Searching for ways to quantize the model while keeping it in safetensors returned nothing, so is there any way to do that?

I’m sorry if this is a stupid question; I still know almost nothing about this field.

  • Mechanize · 16 hours ago

    I’ve never used oobabooga, but if you use llama.cpp directly you can specify the number of layers you want to run on the GPU with the -ngl flag, followed by the number.

    So, as an example, a command (on Linux), run from the directory containing the binary, to start its server would look something like: ./llama-server -m "/path/to/model.gguf" -ngl 10

    Another important flag that could interest you is -c for the context size.

    This will put 10 layers of the model on the GPU; the rest will stay in RAM for the CPU.
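
    Putting the two flags together, a sketch command (the layer count and context size here are just placeholders to tune for your hardware) would be:

    ./llama-server -m "/path/to/model.gguf" -ngl 10 -c 4096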

    I would be surprised if you can’t just connect to the llama.cpp server, or get text-generation-webui to do the same with some setting.
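
    Once the server is up you can sanity-check it with a quick request to its OpenAI-compatible endpoint; assuming the default port of 8080, something like:

    curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"messages": [{"role": "user", "content": "Hello"}]}'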

    At worst you can consider using ollama, which is a llama.cpp wrapper.
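
    With ollama the GPU offload is generally handled for you, and pulling plus running a model is a one-liner (the model name below is only an example):

    ollama run mistral-nemo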

    But you would probably want to invest the time to understand how to use llama.cpp directly and put a UI in front of it. SillyTavern is a good one for many use cases; OpenWebUI can be another, but - in my experience - it tends to have more half-baked features and the development jumps around a lot.

    As a more general answer: no, as far as I know the safetensors format doesn’t directly support quantization.

  • Smokeydope@lemmy.world · 8 hours ago (edited)

    You’ll want to look up how to offload GPU layers in ollama. A lower-quant GGUF should work great with offloading.

    Most people use kobold.cpp now; ollama and llama.cpp kind of fell behind. kobold.cpp is a bleeding-edge fork of llama.cpp with all the latest and greatest features. Its GPU offloading is so damn easy: if you have an Nvidia card use CuBLAS, if you have an AMD card use Vulkan.

    Is there a particular reason you’re trying to run a mixture-of-experts model for an RP/storytelling-focused LLM? Usually MoE is better suited to logical reasoning and critical analysis of a complex problem. If you’re a newbie just starting out, you may be better off with an RP finetune of a Mistral AI LLM like alriamax, based on NeMo 12B.

    There’s always a tradeoff with finetunes: typically a model that’s finetuned for RP/storytelling sacrifices capability in other important areas like reasoning, encyclopedic knowledge, and mathematical/coding ability.

    Here’s an example starting command for offloading. I have an Nvidia 1070 Ti 8 GB and can get 25-35 layers offloaded onto it, depending on context size:

    ./koboldcpp --model Mistral-Nemo-Instruct-2407-Q4_K_M.gguf --threads 6 --usecublas --gpulayers 28 --contextsize 8092
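
    If you’re on an AMD card instead, the Vulkan backend should be the drop-in swap; something along these lines ought to work (same model and settings as above, adjust the layer count for your card):

    ./koboldcpp --model Mistral-Nemo-Instruct-2407-Q4_K_M.gguf --threads 6 --usevulkan --gpulayers 28 --contextsize 8092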
    
    
    • brokenlcd (OP) · 5 hours ago

      I think I may try this way if kobold uses Vulkan instead of ROCm; it’s most likely going to be way less of a headache.

      As for the model, it’s just what came out of a random search on Reddit for a decent small model. No reason in particular; thanks for the suggestion.

  • hendrik@palaver.p3x.de · 16 hours ago (edited)

    I believe ExLlama and vLLM offer quantization. But llama.cpp should be able to run on a graphics card as well; maybe the default settings are wrong for your computer, or you have an AMD card and need a different build of llama.cpp?
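
    If you go the vLLM route, serving an already-quantized AWQ checkpoint (which stays in safetensors) is roughly a one-liner; the repo name below is made up, and recent versions usually detect the quantization on their own:

    vllm serve SomeUser/Some-Model-AWQ --quantization awq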

    And by the way, you don’t need to quantize that model yourself. People have already uploaded it to Huggingface in several quantized formats: AWQ, GGUF, exl2 …
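
    Grabbing one of those pre-quantized files is just a download; for a GGUF, for example, something like this works (repo and file names here are placeholders, take the real ones from the model page):

    huggingface-cli download SomeUser/Some-Model-GGUF some-model-q4_k_m.gguf --local-dir .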