It could be powered by your 2001 flip phone, probably
LLMs are fundamentally billion-dimensional logistic regressions that require massive context windows and training sets, which makes it hard to build a more computationally expensive class of system. I have a fairly nice new laptop, and it can barely run deepseek-r1:14b (a 14-billion-parameter model; not technically the same model as deepseek-r1:671b, since it's a fine-tune of qwen-2.5:14b that uses DeepSeek's chain-of-thought reasoning). It runs the 7b model fine, though. There isn't a single piece of consumer-grade hardware capable of running the full 671b model.
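A quick back-of-the-envelope calculation shows why: the memory needed just to hold the weights scales linearly with parameter count. A minimal sketch (weights only; it ignores the KV cache, activations, and runtime overhead, so real requirements are higher):

```python
# Rough memory footprint of model weights: parameters * bytes per weight.
# fp16 uses 2 bytes/param; an ~4-bit quantization uses about 0.5 bytes/param.

def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GB (decimal, weights only)."""
    return params_billions * bytes_per_param

for params in (7, 14, 671):
    for label, bytes_pp in (("fp16", 2.0), ("~4-bit quant", 0.5)):
        print(f"{params}b @ {label}: ~{weight_memory_gb(params, bytes_pp):.1f} GB")
```

Even aggressively quantized to ~4 bits, the 671b model's weights alone are on the order of 335 GB, far beyond any consumer GPU or laptop, while the 7b and 14b models fit in the tens of gigabytes that high-end consumer hardware can offer.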