• 39 Posts
  • 2.94K Comments
Joined 1 year ago
Cake day: June 16th, 2023

  • TCB13@lemmy.world to Linux@lemmy.ml · The Dislike to Ubuntu

    The thing with Ubuntu / Canonical isn’t that it doesn’t work, it’s that they have bad policies, and by using their stuff you’re making yourself vulnerable to something akin to what happened with VMware ESXi or with CentOS licensing: they may change their mind at some point and you’ll be left with a pile of machines and little to no time to move to another solution.

    For starters, Ubuntu is the only serious, corporate-backed distribution to ever publish a major release on its website and leave the ISO installer broken for a few days.

    Ubuntu’s kernel is also a dumpster fire of hacks waiting for someone upstream to implement things properly so they can backport them and ditch their own implementations. We’ve seen this multiple times; shiftfs vs. VFS idmap shifting is a great example of the issue.

    Canonical has been contributing to open-source for a long time, but have you heard about what happened with LXD/LXC? LXC was built with significant investments, primarily from IBM and Canonical. LXD was later developed as an independent project under the Linux Containers umbrella, also funded by Canonical. Everything seemed to be progressing well until last year, when Canonical announced that LXD would no longer remain an independent project: they removed it from the Linux Containers project and brought it under in-house development.

    They effectively took control of the codebase, changed repositories, and relicensed previous contributions under a more restrictive license. To complicate matters, they required all contributors to sign a contract with new limitations and impositions. This shift has caused concerns, but most importantly LXD became essentially a closed-off, in-house Canonical project.

    Some people may be annoyed at Snaps as well, but I won’t get into that.







  • > but without nix it’s a pita to maintain through restores/rebuilds.

    No it isn’t. You can even define those routing policies in your systemd network unit alongside the network interface config, and systemd-networkd will manage it all for you.
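
    For example, a minimal systemd-networkd sketch of that idea (the interface name, addresses, firewall mark and table number are all made up for illustration):

    ```
    # /etc/systemd/network/50-wg0.network -- illustrative values only
    [Match]
    Name=wg0

    [Network]
    Address=10.8.0.2/24

    # Keep the tunnel's default route in a dedicated table...
    [Route]
    Gateway=10.8.0.1
    Table=1000

    # ...and look up packets carrying this firewall mark in that table.
    [RoutingPolicyRule]
    FirewallMark=51820
    Table=1000
    ```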

    If you aren’t comfortable with systemd, you can also use plain “ip route” and “ip rule” commands to accomplish the same thing; add everything to a startup script and you’re done.
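
    Something along these lines (interface, gateway, mark and table number are placeholders):

    ```
    #!/bin/sh
    # Illustrative startup script -- adjust names and numbers to your setup.

    # Dedicated routing table for VPN-bound traffic.
    ip route add default via 10.8.0.1 dev wg0 table 1000

    # Anything the firewall marks with 51820 gets looked up in that table;
    # everything else keeps using the main table (LAN/default route).
    ip rule add fwmark 51820 table 1000
    ```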

    > major benefit to using a contained VPN or gluetun is that you can be selective on what apps use the VPN.

    Systemd can do that for you as well: you can restrict a given service so it only has access to the wg interface while other services keep using eth0 or whatever else.
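
    As a rough sketch, assuming a recent systemd (RestrictNetworkInterfaces= needs v249 or newer) and a hypothetical myapp.service, a drop-in created with “systemctl edit myapp.service” could look like:

    ```
    # /etc/systemd/system/myapp.service.d/vpn-only.conf -- illustrative drop-in
    [Service]
    # Only the WireGuard interface is usable by this service; eth0 and friends are blocked.
    RestrictNetworkInterfaces=wg0
    ```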

    The more classic ip/route tooling can also be used for that: create one routing table for the programs you want to force onto the VPN and another for the ones that should use your LAN directly. Make each program bind to (or get matched to) the respective interface and the routing rules will take over and send the traffic to the right place.
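
    One way to do that matching without touching the application at all (user name, UID and table number are assumptions) is a per-UID rule, reusing the VPN table from the sketch above:

    ```
    #!/bin/sh
    # Illustrative: force everything run by a dedicated "vpnuser" account through the VPN table.
    ip rule add uidrange 1500-1500 table 1000   # 1500 = vpnuser's UID (assumption)

    # Then start the VPN-only program as that user, e.g.:
    # sudo -u vpnuser some-vpn-only-program
    ```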

    If you’re using Docker or similar, to make things simpler you can also create one network bridge for the containers you want to restrict to the VPN and another for everything else, then attach each container to one bridge or the other.
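
    A rough sketch of that layout (network names, subnet and images are placeholders; you still steer the VPN bridge’s subnet out through the VPN with a host-side rule like the ones above):

    ```
    #!/bin/sh
    # Two ordinary Docker bridges: one routed through the VPN, one using the LAN directly.
    docker network create --subnet 172.30.0.0/24 vpn_net
    docker network create lan_net

    # Host-side: send the VPN bridge's subnet through the VPN routing table (assumption: table 1000).
    ip rule add from 172.30.0.0/24 table 1000

    # Attach containers to whichever bridge fits.
    docker run -d --network vpn_net --name vpn-app some/vpn-only-image
    docker run -d --network lan_net --name lan-app some/regular-image
    ```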

    There are multiple ways to get this done; throwing more containers at it, like gluetun, and dragging in xyz dependencies and opinionated configurations from somewhere isn’t the only one, nor the most performant for sure. Linux is designed to handle these cases.