suoko to AI@lemmy.ml · 1 year ago
👾 LM Studio - Discover and run local LLMs - Linux beta version now available 🐧
lmstudio.ai
cross-posted to: hackernews@lemmy.smeargle.fans
ylai@lemmy.ml · 1 year ago
The question is not support. It is clear that LM Studio has nothing custom in terms of actual inference code. The curious part is that they are still stuck on LLaMA and Falcon, etc., given Mistral's performance.
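For context on the comment above: LM Studio wraps llama.cpp-style inference in a GUI and exposes an OpenAI-compatible local server (by default at http://localhost:1234/v1). Below is a minimal sketch of querying that server with the openai Python client; the model name is a hypothetical placeholder, since the server simply uses whichever model is loaded in the app.

```python
# Minimal sketch: query LM Studio's local OpenAI-compatible server.
# Assumes LM Studio is running with a model loaded and its server started
# (default address http://localhost:1234/v1; no real API key is required).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's local endpoint (default port)
    api_key="lm-studio",                  # placeholder; the local server ignores it
)

response = client.chat.completions.create(
    model="local-model",  # hypothetical name; the loaded model is used regardless
    messages=[{"role": "user", "content": "Summarize what a GGUF file is."}],
    temperature=0.7,
)
print(response.choices[0].message.content)
```

Because the endpoint mimics the OpenAI API, any existing OpenAI-client code can be pointed at the local server by changing only the base URL, which is much of LM Studio's appeal despite the inference engine itself not being custom.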