• FloridaBoi [he/him]@hexbear.net · 3 days ago (edited)

    Someone told me it’s like 97% more energy efficient, or that it consumes 97% less energy. Is that true?

    Edit: this comment has a summary saying the model has a 93% compression ratio, so maybe that’s where the efficiency number comes from

    • ragebutt@lemmy.dbzer0.com · 3 days ago

      The exact figures aren’t documented, but it’s a substantial decline in energy usage (though probably not 97%), enough that stocks tied to power consumption took a notable hit

    • sewer_rat_420 [he/him, any]@hexbear.net · 3 days ago

      It consumes less energy at inference time, and it also consumed less energy during training. This is directly reflected in the cost to the user: the API is 10-30x cheaper per token than OpenAI’s. Rough sketch of the math below.
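
      A back-of-the-envelope comparison, using made-up placeholder prices just to show how the per-token ratio works out (these are illustrative assumptions, not actual quoted rates):

      ```python
      # Back-of-the-envelope cost comparison per million tokens.
      # Prices below are ILLUSTRATIVE placeholders, not real quoted rates.
      openai_price_per_mtok = 10.00   # hypothetical $ per 1M tokens
      deepseek_price_per_mtok = 0.50  # hypothetical $ per 1M tokens

      ratio = openai_price_per_mtok / deepseek_price_per_mtok
      print(f"~{ratio:.0f}x cheaper per token in this example")

      # Cost of generating 50M tokens at each hypothetical rate:
      tokens_millions = 50
      print(f"OpenAI-style rate:   ${openai_price_per_mtok * tokens_millions:,.2f}")
      print(f"DeepSeek-style rate: ${deepseek_price_per_mtok * tokens_millions:,.2f}")
      ```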

      • That’s because LLMs aren’t supposed to be search engines. They’re pretty good at summarizing documents in certain cases, but they don’t have a big enough context window to effectively plow through massive troves of data. A rough fit check is sketched below.
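
        As a sketch of that limit, here’s a rough fit check using the common ~4 characters-per-token heuristic; the window size is an assumed example, and real tokenizers vary:

        ```python
        # Rough check of whether a document fits in a model's context window.
        # Assumes the common ~4 chars/token heuristic for English text;
        # real tokenizers vary, so treat this as an estimate only.
        CHARS_PER_TOKEN = 4
        CONTEXT_WINDOW = 128_000  # assumed example window; varies by model

        def fits_in_context(text: str) -> bool:
            est_tokens = len(text) / CHARS_PER_TOKEN
            return est_tokens <= CONTEXT_WINDOW

        # A 10 MB text dump is ~2.5M estimated tokens, ~20x over this window.
        print(fits_in_context("x" * 10_000_000))  # False
        ```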

      • sewer_rat_420 [he/him, any]@hexbear.net · 3 days ago

        Without a profit motive and the need to commercialize it immediately, I hope DeepSeek continues to make efficiency gains and keeps open-sourcing all their models. Right now it is 10x-30x cheaper per token; imagine if another generation cut that by another order of magnitude. Project Stargate will just be a fancy bonfire to throw Arctic oil reserves at, while the rest of the world has access to state-of-the-art LLMs at a fraction of the price.

    • Stolen_Stolen_Valor [any]@hexbear.net · 3 days ago

      The “AI” is effectively just autocomplete powered by the internet. It could probably be powered by your 2001 flip phone. The whole thing is smoke and mirrors, hype, and snake oil bought by people who don’t understand what’s happening, or people only concerned with line go up.

      • It could probably be powered by your 2001 flip phone

        LLMs are fundamentally billion-dimensional logistic regressions that require massive context windows and training sets. It is difficult to create a more computationally expensive system than an LLM for that reason. I have a fairly nice new laptop, and it can barely run deepseek-r1:14b (a 14-billion-parameter model; not technically the same model as deepseek-r1:671b, since it is a fine-tune of qwen-2.5:14b trained on DeepSeek’s chain-of-thought reasoning). It can run the 7b model fine, however. There isn’t a single piece of consumer-grade hardware capable of running the full 671b model; a rough memory estimate below shows why.
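
        Just holding the weights in memory scales with parameter count times bytes per parameter. The precisions below are rough assumptions (and this ignores activations and KV cache entirely), but the ballpark makes the point:

        ```python
        # Rough memory needed just to hold model weights, ignoring
        # activations and KV cache. Bytes per parameter depends on
        # quantization; these are ballpark assumptions, not exact specs.
        def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
            return params_billions * 1e9 * bytes_per_param / 1e9

        for name, params in [("7b", 7), ("14b", 14), ("671b", 671)]:
            fp16 = weight_memory_gb(params, 2.0)  # 16-bit weights
            q4 = weight_memory_gb(params, 0.5)    # ~4-bit quantization
            print(f"{name}: ~{fp16:.0f} GB at fp16, ~{q4:.0f} GB at ~4-bit")

        # 671b needs ~1342 GB at fp16 and ~336 GB even at 4-bit, far
        # beyond a single consumer GPU (typically 8-24 GB of VRAM).
        ```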