• AdrianTheFrog@lemmy.world · 7 months ago

      I can run a small LLM on my 3060, but most of those models were originally trained on a cluster of A100s (maybe as few as 10, so more like one largish server than one datacenter).

      BitNet came out recently and looks like it will lower these requirements significantly: it essentially trains a model with ternary weights instead of floats, which turns out not to reduce quality that much.
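      The ternary idea can be sketched in a few lines. This is a hypothetical standalone illustration of BitNet b1.58-style "absmean" quantization (each weight becomes −1, 0, or +1 plus one float scale per tensor), not the actual BitNet code — the real method quantizes during training with straight-through gradient estimation:

      ```python
      # Hedged sketch: absmean ternary quantization of a weight tensor.
      # Each weight is replaced by a value in {-1, 0, +1}; one float scale
      # per tensor is kept so magnitudes can be roughly reconstructed.

      def ternarize(weights):
          # scale = mean absolute value of the weights
          scale = sum(abs(w) for w in weights) / len(weights)
          # round(w / scale), clipped into {-1, 0, +1}
          quantized = [max(-1, min(1, round(w / scale))) for w in weights]
          return quantized, scale

      def dequantize(quantized, scale):
          # approximate reconstruction of the original weights
          return [q * scale for q in quantized]

      w = [0.8, -0.05, -1.2, 0.3]
      q, s = ternarize(w)
      print(q)  # ternary codes, e.g. [1, 0, -1, 1]
      ```

      Ternary weights mean matrix multiplies reduce to additions and subtractions, which is where the hardware savings come from.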

    • OozingPositron@feddit.cl · 7 months ago

      Basically Mistral; check /lmg/ on /g/. If your GPU is less than two years old, you can probably run a 32B quantised model.
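      The arithmetic behind "a 32B quantised model fits on a consumer GPU" is simple. A rough back-of-the-envelope estimate (the `vram_gb` helper and the 2 GB overhead figure are my own illustrative assumptions, not from any library) is bytes ≈ parameters × bits-per-weight ÷ 8, plus some headroom for activations and KV cache:

      ```python
      # Rough VRAM estimate for an LLM's weights (hypothetical helper).
      def vram_gb(params_billion, bits_per_weight, overhead_gb=2.0):
          # weights: params * bits / 8 bytes, expressed in GB
          weight_gb = params_billion * 1e9 * bits_per_weight / 8 / 1e9
          # overhead_gb is an assumed allowance for activations / KV cache
          return weight_gb + overhead_gb

      print(vram_gb(32, 16))  # fp16: ~66 GB, needs multiple GPUs
      print(vram_gb(32, 4))   # 4-bit quant: ~18 GB, close to a 24 GB card
      ```

      So quantizing from 16-bit to 4-bit is what moves a 32B model from "multi-GPU server" to "high-end consumer card" territory.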

    • Simon@lemmy.dbzer0.com · 7 months ago (edited)

      Haha, try the entire datacenter.

      If LLMs were practical to run on three servers, everyone and their mum would have an AI assistant product.