• Moonrise2473 · 1 year ago

    What’s the point of primary and secondary backups if they can be accessed with the same credentials on the same network?

    • CrateDane@feddit.dk · 1 year ago

      They weren’t normally on the same network, but were accidentally put on the same network during migration.

    • snaptastic@beehaw.org · 1 year ago

      What’s the correct way to implement it so that it can still be automated? Credentials that can write new backups but not delete existing ones?

      • Haui@discuss.tchncs.de · 1 year ago

        I don’t know if it’s the “correct” way, but I do it the other way around. I have a server and a backup server. The server’s user can’t even see the backup server: the main server packs a backup, the backup server pulls the data with read-only access, and then the main server deletes the backup it packed. Done.
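
        A minimal sketch of that pull model, assuming rsync over SSH with made-up hostnames and paths (the backup server logs in with an account that only has read access to the staging directory):

        # run on the backup server (hypothetical names)
        rsync -a backup-ro@mainserver:/srv/backup-staging/ /srv/backups/mainserver/

        The main server never holds credentials for the backup server, so compromising it doesn’t give access to the backups.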

      • VerifiablyMrWonka@kbin.social · 1 year ago

        For an organisation hosting as much of other companies’ data as this one, I’d expect automated tape at a minimum. Of course, if the attacker had the time to start messing with the tapes, those are lost as well, but that’s unlikely.

        • Moonrise2473 · 1 year ago

          It depends on the pricing. For example, OVH didn’t keep any extra backups when their datacenter caught fire. But if a customer had paid for backups, those were kept off-site and were recovered.

          They might even be pretending to be a big hosting company while actually renting a dozen dedicated servers from a bigger player, which is much cheaper than maintaining a data center with 99.999% uptime.

      • rentar42@kbin.social · 1 year ago

        Fundamentally, there’s no need for the user/account that saves the backup somewhere to be able to read it, let alone change or delete it.

        So ideally you have “write-only” credentials that can only append/add new files.

        How exactly that’s implemented depends on the tech. S3 and S3-compatible systems can often be configured so that data simply can’t be deleted from a bucket at all.
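
        As a hedged example with AWS S3 (bucket name and retention period are placeholders, and Object Lock has to be enabled when the bucket is created), you could give the backup job credentials that only allow s3:PutObject, plus a default retention so nothing can be deleted early:

        # hypothetical bucket; locks every new object for 30 days
        aws s3api put-object-lock-configuration --bucket my-backup-bucket \
          --object-lock-configuration '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":30}}}'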

      • Moonrise2473 · 1 year ago

        I use immutable objects on Backblaze B2.

        From the command line, using their tool, it’s something like b2 sync SOURCE b2://BUCKET

        and in the bucket settings, disable object deletion.

        BorgBase also allows this: backups can be created, but deletions/overwrites are not permanent (unless you enable them).
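
        For a concrete (hypothetical) sketch, a nightly cron entry on the server pushing to a bucket with deletion disabled might look like:

        # names and paths are examples only
        0 3 * * * b2 sync /srv/backup-staging b2://my-backup-bucket/server1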