- cross-posted to:
- eticadigitale
Social media platforms need a lot of computing power and storage, provided by energy-hungry data centres that constantly upgrade their hardware and spit out vast amounts of e-waste. This is particularly true of commercial platforms with their ML-driven ad systems. The fall of Twitter and Reddit would be beneficial in that regard.
But what about Fediverse systems? The link discusses Mastodon, but that’s only one example. Would it be possible to host Lemmy instances in a sustainable way? With solar power? And what would it imply, materially and socially?
I have resources like Low-Tech Magazine in mind, which hosts its website on solar power; the downtime is part of the adventure. Or we could deploy something like Solar Protocol to use the earth's rotation creatively and cooperatively.
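To make that last idea a bit more concrete, here is a rough sketch of the routing logic (my own illustration, not Solar Protocol's actual code): a few small solar-powered servers in different time zones report their battery state, and traffic gets pointed at whichever one currently has the most energy to spare. The endpoints and field names below are made up.

```python
# Illustrative sketch only -- not the actual Solar Protocol implementation.
# Assumption: each small solar-powered server exposes its battery state of
# charge (0-100 %) at a known status URL; we point traffic at the best one.
import json
import urllib.request

# Hypothetical status endpoints for three servers spread around the globe.
SERVERS = {
    "tokyo": "http://tokyo.example.org/status.json",
    "berlin": "http://berlin.example.org/status.json",
    "new-york": "http://nyc.example.org/status.json",
}

def state_of_charge(url: str) -> float:
    """Fetch a server's reported battery state of charge; 0.0 if unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return float(json.load(resp).get("soc_percent", 0.0))
    except (OSError, ValueError):
        return 0.0  # offline or misbehaving servers simply drop out of the rotation

def pick_server() -> str:
    """Return the name of the server with the most energy available right now."""
    charges = {name: state_of_charge(url) for name, url in SERVERS.items()}
    return max(charges, key=charges.get)

if __name__ == "__main__":
    print("route traffic to:", pick_server())
```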
I was recently at a conference for AWS (Amazon Web Services, AKA the cloud provider for a HUGE chunk of the internet), and part of the keynote claimed that it was greener to run in the cloud because… uh… well, they didn’t exactly say. Don’t get me wrong, I could see how it would be easier to make all AWS data centers compliant with using green energy than it would be to convince every random financial institution that their on-premises servers need to be green, but quite frankly it’s Amazon and I don’t trust that they’re telling the truth about themselves and not just greenwashing.
Quite frankly, for things like Lemmy instances, I think we could achieve a fully solar-powered setup fairly easily… just not easily at scale, or reliably.
I’ve thought about how cool it would be to have a server room linked up with a solar array and batteries, and basically only have the servers up when there’s enough energy to power them. In theory, it sounds fun to have a static splash page that shows when the servers are down and explains why, as a way to make people think about how energy-expensive servers are. In practice, an intentionally flaky server sounds like a nightmare for a ton of reasons. But it sounds like this is already a Thing with Low-tech Magazine, which is neat!
But that’s not to say we couldn’t build and self-host a reliable and sustainable server room. Just that I don’t know the numbers on what a server room actually pulls, energy-wise, and how much generation we’d need.
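Here is the kind of back-of-envelope sizing I mean, where every number is an assumption I made up for illustration (a small instance averaging ~50 W, 3 peak-sun-hours per day in winter, 2 days of battery autonomy), not a measurement of any real server room:

```python
# Back-of-envelope solar sizing for a small always-on server.
# Every number below is an assumption chosen for illustration.

SERVER_WATTS = 50                # assumed average draw of a small Lemmy-sized box
PEAK_SUN_HOURS = 3.0             # assumed worst-case (winter) full-sun hours per day
AUTONOMY_DAYS = 2                # days the batteries should carry us with no sun
SYSTEM_EFFICIENCY = 0.75         # charge controller + inverter + battery losses
USABLE_DEPTH_OF_DISCHARGE = 0.8  # e.g. LiFePO4; lead-acid would be closer to 0.5

daily_wh = SERVER_WATTS * 24                                  # energy used per day
panel_watts = daily_wh / (PEAK_SUN_HOURS * SYSTEM_EFFICIENCY)  # PV needed to refill it
battery_wh = daily_wh * AUTONOMY_DAYS / USABLE_DEPTH_OF_DISCHARGE

print(f"daily consumption : {daily_wh:.0f} Wh")
print(f"panel size needed : {panel_watts:.0f} W of PV")
print(f"battery bank      : {battery_wh:.0f} Wh of capacity")
```

With those assumptions you land around 500–550 W of panels and a ~3 kWh battery bank for a single 50 W box; a whole rack is obviously another story.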
What is often omitted is that large centralized data centres need a lot of cooling. Efficiency improvements have brought this down somewhat lately, but it used to be up to 60% of the total electricity used.
Smaller decentralized servers don’t need nearly as much of it, as they can easily dissipate heat to their cooler surroundings even if they use older, less efficient equipment.
Thus up-cycling older server hardware in decentralized locations can save a lot of energy if you consider the entire life cycle of the equipment.
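One way to put rough numbers on the cooling point is PUE (power usage effectiveness: total facility power divided by IT power). A quick sketch with values I am assuming purely for illustration:

```python
# Comparing operational energy with assumed PUE values.
# PUE = total facility power / IT equipment power, so wall power = IT power * PUE.

HOURS_PER_YEAR = 8760

def yearly_kwh(it_watts: float, pue: float) -> float:
    """Wall-socket energy per year for a given IT load and facility PUE."""
    return it_watts * pue * HOURS_PER_YEAR / 1000

# Assumed numbers for illustration only:
old_dc = yearly_kwh(it_watts=100, pue=1.6)  # older data centre with heavy cooling
new_dc = yearly_kwh(it_watts=100, pue=1.1)  # modern hyperscale facility
closet = yearly_kwh(it_watts=150, pue=1.0)  # older, less efficient up-cycled box,
                                            # passively cooled in a home or office

print(f"older data centre : {old_dc:.0f} kWh/year")
print(f"modern data centre: {new_dc:.0f} kWh/year")
print(f"up-cycled box     : {closet:.0f} kWh/year")
```

On these made-up numbers the up-cycled box beats the older facility on operational energy alone; the comparison with a modern facility only really tips once you also count the embodied energy of manufacturing new hardware, which is the life-cycle point above.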
I agree with this. Efficiency gains vs. the cost of cooling the infrastructure and replacing hardware after at most 5 years. Still, I’m not 100% sure about the statistics. Do you know of any comparative studies or the like?
Just one fitting side note. We had an interview with a local data centre manager, and during the discussion we somehow started talking about alternative setups, like a Raspberry Pi server. The interviewee reminded us of the efficiency of their virtual servers. He even gave us a tour through their digital dashboards and showcased a virtual server drawing 1 watt (vs. roughly 4 watts for a Pi, with much less performance).
This is not to say that low-tech is not the way to go. Less mining and hazardous work conditions are always good and need no measurement for emphasis.
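For scale, taking the two figures from that interview at face value:

```python
# Annual energy for the two figures quoted in the interview anecdote.
HOURS_PER_YEAR = 8760

virtual_server_w = 1  # figure shown on the data centre dashboard
raspberry_pi_w = 4    # rough figure quoted for a Pi

for name, watts in [("virtual server", virtual_server_w), ("raspberry pi", raspberry_pi_w)]:
    kwh = watts * HOURS_PER_YEAR / 1000
    print(f"{name}: ~{kwh:.1f} kWh per year")
```

So roughly 9 vs. 35 kWh per year per box, and the 1 watt dashboard figure presumably doesn’t include the facility’s shared overhead (cooling, networking) discussed above.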
Also omitted: the amount of speculative buying for planned capacity that never actually materializes. I worked for one of the big tech companies for several years, and specialized hardware in particular (ML accelerators) was spun up with the notion of “we don’t know who will need these, but we don’t want to not have them if they’re needed”. Cue massive amounts of expensive hardware sitting plugged in and idle for months as dev teams scramble to adapt their stuff to new hardware that has just enough difference in behavior and requirements to make migration hard.
Also also, there’s a bunch of “when in doubt, throw it out”: automated systems that detect hardware failure and decommission it after a couple of strikes. False positive signals were common, so a lot got thrown out despite being perfectly fine.
There are actually quite a few projects to use waste heat from data centers to heat homes. With aquifer storage that can even be done seasonally.
Did I read that you host this on renewable energy?
Not fully. Well, I am pretty sure most of the power comes from the nearby geothermal plant and the three windmills up the hill, but overall the grid power here is still around 50% fossil.
I have started building a solar PV system for it, but ran into some issues with the batteries that I am still trying to solve without having to buy expensive new ones.
Ok, so I’m not fully crazy, haha. I did read something; I just didn’t fully remember it.
@okasen @stefanlaser you are right to be skeptical about AWS: https://www.fastcompany.com/90879223/amazon-claims-to-champion-clean-energy-so-why-did-it-just-help-kill-an-emissions-bill-in-oregon
FWIW, I think you are pointing to a larger problem: it’s not a coincidence that the harder the sales pitch of the cloud, the more obscure such numbers become.
To take a car analogy: there’s a reason most Americans have an intuition for what miles per gallon *feels* like, but wouldn’t know where to start with the equivalent for EVs.