I was experimenting with oobabooga, trying to run this model, but due to its size it wasn't going to fit in RAM, so I tried to quantize it using llama.cpp. That worked, but because of the GGUF format it was only running on the CPU. Searching for ways to quantize the model while keeping it in safetensors returned nothing, so is there any way to do that?
I'm sorry if this is a stupid question; I still know almost nothing about this field.
I think I may try it this way if kobold uses Vulkan instead of ROCm; it's most likely going to be way less of a headache.
As for the model, it's just what came out of a random search on Reddit for a decent small model. No reason in particular; thanks for the suggestion.