i’ve installed opensuse tumbleweed a bunch of times in the last few years, but i always used ext4 instead of btrfs because of bad experiences with it nearly a decade ago. every time, with no exceptions, the partition would crap itself into an irrecoverable state

this time around i figured that, since so many years had passed since i last tried btrfs, the filesystem would be in a more reliable state, so i decided to try it again on a new opensuse installation. already, right after installation, os-prober failed to set up opensuse’s entry in grub, but maybe that’s on me, since my main system is debian (turns out the problem was due to btrfs snapshots)

anyway, after a little more than a week, the partition turned read-only in the middle of a large compilation and then, after i rebooted, the partition died and was irrecoverable. could be due to some bad block or read failure on the hdd (it is supposedly brand new, but i guess it could be busted), but shit like this never happens to me on extfs, even if the hdd is literally dying. also, i have an ext4 and a ufs partition on the same hdd without any issues.

even if we suppose this is the hardware’s fault and not btrfs’s, shouldn’t a file system be a little bit more resilient than that? at this rate, i feel like a cosmic ray could set off a btrfs corruption. i hear people claim all the time how mature btrfs is and that it no longer makes sense to create new ext4 partitions, but either i’m extremely unlucky with btrfs or the system is in a fucking perpetual beta state, and it will never change because it’s just good enough for companies who, in the case of a partition failure, can just quickly swap the old hdd for a new one and copy the nightly backup over to it

in any case, i am never going to touch btrfs ever again and i’m always going to advise people to choose ext4 instead of btrfs

  • dwt@feddit.org · 11 days ago

    You know, protecting against power loss was the major feature of filesystems in a time gone by…

    • Atemu@lemmy.ml · edited · 10 days ago

      It only works if the hardware doesn’t lie about write barriers. If it says it has written some sectors, btrfs assumes that reading any of those sectors will return the written data rather than the data that was there before. What’s important here isn’t that the data stays intact forever, but the ordering: once a metadata generation has been written to disk, btrfs waits on the write barrier and only updates the superblock (the final metadata “root”) afterwards.

      If the system loses power while the metadata generation is being written, all is well, because the superblock still points at the old generation; the write barrier hasn’t passed yet. On the next boot, btrfs simply continues with the previous generation referenced in the superblock, which is fully committed.
      If the hardware lied about the write barrier before the superblock update though (e.g. for performance reasons) and only wrote, say, half of the sectors containing the metadata generation but did write the superblock, that would be an inconsistent state which btrfs cannot trivially recover from.

      If that promise is broken, there’s nothing btrfs (or ZFS, for that matter) can do. Software cannot reliably protect against this failure mode.
      You could mitigate it by waiting some amount of time, which would reduce (but not eliminate) the risk that the data before the barrier hasn’t actually been written yet, but that would also make every commit take that much longer, which would kill performance.

      It can reliably protect against power loss (bugs notwithstanding), but only if the hardware doesn’t lie about some basic guarantees.
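
      To make the ordering concrete, here is a minimal C sketch of the same commit pattern, assuming a toy copy-on-write store kept in a single scratch file. The names (toy_commit, toy.img, SUPERBLOCK_OFF) are made up for illustration; this is not btrfs code, just the write-then-barrier-then-superblock sequence described above.

      ```c
      /* Toy copy-on-write commit: the new generation must be durable on disk
       * before the superblock that references it is overwritten. The two
       * fdatasync() calls play the role of the write barriers in the text. */
      #include <fcntl.h>
      #include <stdint.h>
      #include <string.h>
      #include <sys/types.h>
      #include <unistd.h>

      #define SUPERBLOCK_OFF 0      /* fixed location of the toy superblock */
      #define BLOCK_SIZE     4096

      static int toy_commit(int fd, const void *generation, off_t new_off,
                            uint64_t gen_id)
      {
          /* 1. Write the new metadata generation to unused space (CoW). */
          if (pwrite(fd, generation, BLOCK_SIZE, new_off) != BLOCK_SIZE)
              return -1;

          /* 2. Barrier: the generation must be on stable storage now.
           *    If the drive acks this but only cached the data, all bets
           *    are off, exactly as described above. */
          if (fdatasync(fd) != 0)
              return -1;

          /* 3. Only now point the superblock at the new generation. */
          unsigned char sb[BLOCK_SIZE] = {0};
          memcpy(sb, &gen_id, sizeof gen_id);
          memcpy(sb + sizeof gen_id, &new_off, sizeof new_off);
          if (pwrite(fd, sb, BLOCK_SIZE, SUPERBLOCK_OFF) != BLOCK_SIZE)
              return -1;

          /* 4. Barrier again so the superblock itself is durable. */
          return fdatasync(fd);
      }

      int main(void)
      {
          int fd = open("toy.img", O_RDWR | O_CREAT, 0644); /* scratch file */
          if (fd < 0)
              return 1;
          unsigned char gen[BLOCK_SIZE] = {0};  /* stand-in metadata */
          int rc = toy_commit(fd, gen, BLOCK_SIZE, 1);
          close(fd);
          return rc ? 1 : 0;
      }
      ```

      A crash between steps 2 and 4 leaves the superblock pointing at the old, fully written generation, so nothing is lost; only if the device acknowledges step 2 without actually persisting the data can the superblock end up referencing garbage.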

      • FuckBigTech347@lemmygrad.ml · 9 days ago

        I had a drive where data would get silently corrupted after some time no matter what filesystem was on it. The machine’s RAM tested fine. Turned out the write cache on the drive was bad! I was able to “fix” it by disabling the cache via hdparm until I could replace that drive.
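
        For reference, hdparm’s -W flag toggles the drive’s write cache (hdparm -W 0 /dev/sdX disables it). If you just want to check which cache mode the kernel believes a disk is using, a small sketch like the one below reads it from sysfs on Linux; the device name sda is an assumption, substitute your own.

        ```c
        /* Print the cache mode the kernel reports for a block device:
         * "write back" (volatile cache in use) or "write through". */
        #include <stdio.h>

        int main(void)
        {
            /* assumed device; adjust for the disk you are checking */
            const char *path = "/sys/block/sda/queue/write_cache";
            FILE *f = fopen(path, "r");
            if (!f) {
                perror(path);
                return 1;
            }
            char mode[32];
            if (fgets(mode, sizeof mode, f))
                printf("%s: %s", path, mode);
            fclose(f);
            return 0;
        }
        ```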