The exact figures aren’t documented, but it’s a massive decline in energy usage (though probably not 97%), enough that stocks tied to power consumption took a notable hit
It consumes less energy now, but it also consumed less energy in its creation. That’s directly reflected in the cost to the user: the API is 10-30x cheaper per token than OpenAI’s.
Without a profit motive and the need to commercialize it immediately, I hope DeepSeek keeps making efficiency gains and keeps open-sourcing all of their models. Right now it’s 10-30x cheaper per token; imagine if another generation reduced that by another order of magnitude. Project Stargate will just be a fancy bonfire for throwing Arctic oil reserves at, while the rest of the world has access to state-of-the-art LLMs at a fraction of the price.
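A back-of-the-envelope sketch of what that pricing trajectory would mean for a heavy user (the dollar figures are made-up placeholders for illustration, not any provider’s actual rate card):

```python
# Hypothetical prices, purely for illustration -- NOT real rate cards.
incumbent = 10.00               # $ per million tokens (assumed)
cheaper_now = incumbent / 20    # midpoint of the claimed 10-30x range
next_gen = cheaper_now / 10     # another order of magnitude on top

monthly_tokens_m = 500  # hypothetical workload: 500M tokens/month
for label, price in [
    ("incumbent", incumbent),
    ("20x cheaper", cheaper_now),
    ("another 10x", next_gen),
]:
    print(f"{label:>12}: ${price:.3f}/M tokens -> ${price * monthly_tokens_m:,.2f}/month")
```

Under those assumed numbers the same workload drops from $5,000/month to $250, and then to $25: cheap enough that the pricing stops being the interesting constraint.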
The “AI” is effectively just autocomplete powered by the internet. It could probably be powered by your 2001 flip phone. The whole thing is smoke and mirrors, hype, and snake oil bought by people who don’t understand what’s happening or people only concerned with line-go-up.
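As a toy illustration of the “autocomplete” framing, here is a bigram next-word predictor; real LLMs learn billions of parameters rather than a lookup table, so this is only the shape of the idea, not how they are built:

```python
import random
from collections import defaultdict

# Toy "autocomplete": predict the next word from counts of word pairs
# seen in training text. LLMs do the same kind of next-token
# prediction, just with billions of learned parameters instead of a
# tiny lookup table.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

following = defaultdict(list)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word].append(next_word)

word = "the"
output = [word]
for _ in range(6):
    word = random.choice(following[word])  # sample a seen continuation
    output.append(word)
print(" ".join(output))
```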
It could probably be powered by your 2001 flip phone
LLMs are fundamentally billion-dimensional logistic regressions that require massive context windows and training sets. It is difficult to create a more computationally expensive system than an LLM for that reason. I have a fairly nice new laptop, and it can barely run deepseek-r1:14b (a 14-billion-parameter model; not technically the same model as deepseek-r1:671b, since it’s a fine-tune of qwen-2.5:14b that uses DeepSeek’s chain-of-thought reasoning). It can run the 7b model fine, however. There isn’t a single piece of consumer-grade hardware capable of running the full 671b model.
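For a sense of scale, a rough sketch of the memory needed just to hold each model’s weights (back-of-the-envelope only; it ignores the KV cache, activations, and runtime overhead, which all add more on top):

```python
# Weight storage = parameter count x bytes per parameter.
params = {
    "deepseek-r1:7b": 7e9,
    "deepseek-r1:14b": 14e9,
    "deepseek-r1:671b": 671e9,
}
bytes_per_param = {"fp16": 2, "int8": 1, "int4": 0.5}

for name, n in params.items():
    sizes = ", ".join(
        f"{prec}: {n * b / 1e9:,.0f} GB" for prec, b in bytes_per_param.items()
    )
    print(f"{name:>17}  {sizes}")
```

Even at 4-bit quantization the 671b weights come to roughly 336 GB, an order of magnitude beyond a 24 GB consumer GPU, while the 7b model at 4-bit is only ~4 GB, which is why it runs fine on a laptop.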
Someone told me it’s like 97% more energy efficient, or that it consumes 97% less energy. Is that true?
Edit: this comment has a summary saying that the model has a 93% compression ratio, so maybe that’s where the efficiency number comes from
so you are saying AI is destroying the planet for nothing?!
Literally the same business model as Bitcoin.
Funny how it also caused a massive demand spike for GPUs
why’s that funny? it’s because of what GPUs do… namely massively parallel computations
We also got news from Trump of a huge tariff on all chips manufactured outside the US, so some of that may be panic buying as well.
ah! i hadn’t heard of that
Not for nothing - for the investment portfolio of energy investors. You know, the highest priority of all.
Isn’t 97% more efficient still really bad compared to, like, a search engine?
That’s because LLMs aren’t supposed to be search engines. They’re pretty good at summarizing documents in certain cases, but they don’t have a big enough context window to effectively plow through massive troves of data.
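A quick sanity check on that point, using rule-of-thumb numbers (~4 characters per token and a 128k-token window are assumptions, not measurements):

```python
# Rough token math: ~4 chars per token for English text is a common
# rule of thumb; real tokenizers vary.
CHARS_PER_TOKEN = 4
CONTEXT_WINDOW = 128_000          # tokens; assumed window size

corpus_bytes = 10 * 1024**3       # a modest 10 GB pile of documents
corpus_tokens = corpus_bytes / CHARS_PER_TOKEN

print(f"corpus ≈ {corpus_tokens / 1e6:,.0f}M tokens")
print(f"fits in one context window? {corpus_tokens <= CONTEXT_WINDOW}")
print(f"passes needed ≈ {corpus_tokens / CONTEXT_WINDOW:,.0f}")
```

Even a modest 10 GB trove is on the order of billions of tokens, roughly 20,000 full context windows, so “just feed it everything” doesn’t work the way it does for a search index.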