It can get tedious on a single machine. Once you have enough for a binhost to start making sense… Now we’re talking 🤣
I kept hearing about a binhost. Is that where you have it in a VM or something?
It’s some computing device (technically a smart toaster could do it) that shares the binaries over the network with other machines. Normally stuff is compiled for the lowest common denominator when it comes to CPU architecture and supported features.
I have it as a VM, some people do it on bare metal. I’m trying to have multiple CPU architectures supported by cheating a bit with BTRFS snapshots at the moment; time will tell if it works out.
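If it helps, a rough sketch of the Gentoo side of it (the hostname and the x86-64-v2 baseline are made-up examples, adjust to your own fleet):

```
# --- on the binhost, /etc/portage/make.conf ---
# keep a binary package of everything that gets emerged
FEATURES="buildpkg"
# lowest common denominator: every client CPU must support this -march
COMMON_FLAGS="-O2 -pipe -march=x86-64-v2"

# then serve /var/cache/binpkgs over HTTP with whatever web server you like

# --- on each client, /etc/portage/binrepos.conf ---
[binhost]
priority = 10
sync-uri = http://binhost.local/binpkgs

# --- and in each client's /etc/portage/make.conf ---
FEATURES="getbinpkg"
```

emerge then pulls a matching binary when one exists and only compiles locally when the USE flags or version differ.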
Got it.
Never got into btrfs. I see the value in it, like if something crashes or goes down you can go back to that snapshot and everything comes back, but I just never really had issues. I distro hop also, so I don’t know when; I hope it’s spontaneous. Maybe one of these days I will get back to Arch and play with it
The ability to roll back is awesome, although I have never had a reason to use it.
For a distro hopper like yourself it would actually make life so much easier! Because of how subvolumes work, you can have every distro in a separate subvolume. They can share the home subvolume if you like, or not. You can have upgrades with a failsafe of sorts for the likes of Ubuntu, which, in my limited personal experience, have never been without issues.

Having a server subvolume to run portage in and then snapshotting it to a desktop one, applying desktop config, saves some time on recompiling the big friends like gcc and llvm.

I did not understand the point of BTRFS at first either, especially since it was slower than ext4. But since I started using it I’ve found there are scenarios that were not possible before, or were incredibly complicated: read-only root, incremental backups over the network (yes, rsync exists, but this feels cleaner)
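To make the subvolume idea concrete, here’s roughly what my layout and the server-to-desktop snapshot look like (the @-prefixed names and mount point are just my convention, not anything standard):

```
# one subvolume per distro, plus a shared home
btrfs subvolume create /mnt/pool/@gentoo
btrfs subvolume create /mnt/pool/@ubuntu
btrfs subvolume create /mnt/pool/@home

# point each distro's fstab at its own subvolume, e.g.
#   /dev/sdX  /      btrfs  subvol=@gentoo  0 0
#   /dev/sdX  /home  btrfs  subvol=@home    0 0

# snapshot the server root into a desktop one: gcc, llvm and friends
# come along for free instead of being recompiled
btrfs subvolume snapshot /mnt/pool/@server /mnt/pool/@desktop

# before a risky upgrade, take a read-only snapshot to fall back to
btrfs subvolume snapshot -r /mnt/pool/@ubuntu /mnt/pool/@ubuntu-pre-upgrade
```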
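And the incremental-backup-over-the-network bit, for the curious (hostnames and paths are made up):

```
# initial full backup: send a read-only snapshot to another box
btrfs subvolume snapshot -r /home /snapshots/home.1
btrfs send /snapshots/home.1 | ssh backupbox btrfs receive /backups

# later: send only the delta between two snapshots (-p names the parent)
btrfs subvolume snapshot -r /home /snapshots/home.2
btrfs send -p /snapshots/home.1 /snapshots/home.2 | ssh backupbox btrfs receive /backups
```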