The Picard Maneuver@piefed.world to Lemmy Shitpost@lemmy.world · English · 13 hours ago
Seems legit (image: media.piefed.world)
DarkCloud@lemmy.world · 13 hours ago
You can get offline versions of LLMs.

sp3ctr4l@lemmy.dbzer0.com · 9 hours ago
I've been toying with Qwen3 on my Steam Deck. The 8-billion-parameter model runs stably, and it's open source too! Alpaca is a neat little Flatpak that containerizes everything and makes running local models so easy that I can literally do it without a mouse or keyboard.

criss_cross@lemmy.world · 11 hours ago
And gpt-oss is an offline version of ChatGPT.

Ghostalmedia@lemmy.world · 9 hours ago
I mean, most people have a local LLM in their pocket right now.

utopianfiat@lemmy.world · 12 hours ago
Indeed: https://huggingface.co/openai-community

linkinkampf19 🖤🩶🤍💜🇺🇦@lemmy.world · 12 hours ago
First thing that came to mind: GPT4All
https://ollama.org/
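For anyone wanting to try this, a minimal sketch of getting a local model running via Alpaca or the Ollama CLI (the Flatpak app ID and model tag below are assumptions; check Flathub and the Ollama model library for the current names):

```shell
# Install the Alpaca Flatpak front-end (assumed app ID; verify on Flathub)
flatpak install flathub com.jeffser.Alpaca

# Or use the Ollama CLI directly: download a Qwen3 model,
# then chat with it entirely on-device
# (model tag is an assumption; see the Ollama model library)
ollama pull qwen3:8b
ollama run qwen3:8b "Hello, are you running locally?"
```

Once the weights are pulled, inference happens entirely on your own hardware, so no network connection is needed afterward.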