So… if you go into North Korea for about two weeks you may end up becoming a full communist. This is very in line with the recent headlines about China allowing people from certain countries to stay for 15 days without visas… hm…
AirPods work so… how much more proprietary can you get? :P
Yeah, because apparently it is too hard to double-click setup.exe, but using Docker is okay.
So, looks like tons of HTTP services and SSH.
Great, but what services are you hosting? What ports do you need?
Yeah, those may work. Since you've got one, how does it look? Are there blocked ports like SMTP? Are the IPs good / not already blacklisted everywhere? Thanks.
Yeah, it may be less customizable, but at least it is fast and error-free (unlike NextCloud).
Yeah, because making it isn't only about waiting for time to pass and money to come in, it is also about compounding.
The thing with Ubuntu / Canonical isn't that it doesn't work, it is that they have bad policies, and by using their stuff you're making yourself vulnerable to something akin to what happened with VMware ESXi or with CentOS licensing - they may change their mind at some point and you'll be left with a pile of machines and little to no time to move to another solution.
For starters, Ubuntu is the only serious, corporate-backed distribution to ever publish a major release on its website and leave the ISO installer broken for a few days.
Ubuntu’s kernel is also a dumpster fire of hacks waiting for someone upstream to implement things properly so they can backport them and ditch their own implementations. We’ve seen this multiple times, shiftfs vs VFS idmap shifting is a great example of the issue.
Canonical has been contributing to open source for a long time, but have you heard about what happened with LXD/LXC? LXC was made with significant investments, primarily from IBM and Canonical. LXD was later developed as an independent project under the Linux Containers umbrella, also funded by Canonical. Everything seemed to be progressing well until last year, when Canonical announced that LXD would no longer remain an independent project. They removed it from the Linux Containers project and brought it under in-house development.
They effectively took control of the codebase, changed repositories, and relicensed previous contributions under a more restrictive license. To complicate matters, they required all contributors to sign a contract with new limitations and impositions. This shift has caused concerns, but most importantly, LXD became essentially a closed-off in-house project of Canonical.
Some people may be annoyed at Snaps as well but I won’t get into that.
Looks so damn good, they even seem to know what padding is this time.
Will they learn how to apply padding to stuff this time?
This means I don’t need to mess around with QBT’s “proxy” settings?
No, you don't. In short, trackers will look at the source address of the incoming connection on their side, and that means your VPS IP because you're doing NAT on the VPS.
Just make sure qBittorrent is restricted to the WG interface and nothing else.
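To make the NAT part concrete, here's a minimal sketch of the VPS-side rules. Everything in it is an assumption about the setup: WireGuard on wg0, the home peer at 10.0.0.2, qBittorrent listening on 51413, and eth0 as the VPS public interface - adjust to taste.

```shell
# Enable forwarding on the VPS
sysctl -w net.ipv4.ip_forward=1

# Forward incoming torrent traffic on the public IP to the peer
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 51413 -j DNAT --to-destination 10.0.0.2:51413
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 51413 -j DNAT --to-destination 10.0.0.2:51413

# Masquerade so replies go back out through the tunnel
iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE

# Allow forwarding between the two interfaces
iptables -A FORWARD -i eth0 -o wg0 -j ACCEPT
iptables -A FORWARD -i wg0 -o eth0 -j ACCEPT
```

From the tracker's point of view, connections then originate from the VPS public IP, which is why no extra "proxy" configuration is needed in qBittorrent.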
I agree with that 100%, but for the majority of the world, how green it is usually depends on how far left you are.
So… now nuclear is considered “green power”. Okay boomers.
but without nix it’s a pita to maintain through restores/rebuilds.
No it isn't. You can even define those routing policies in your systemd network unit alongside the network interface config and it will manage it all for you.
If you aren’t comfortable with systemd, you can also use simple “ip” and “route” commands to accomplish that, add everything to a startup script and done.
major benefit to using a contained VPN or gluetun is that you can be selective on what apps use the VPN.
Systemd can do that for you as well: you can specify that a certain service only has access to the wg network interface while others can use eth0 or whatever.
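As a quick, hedged illustration of the systemd side (RestrictNetworkInterfaces= requires systemd 249 or newer; the interface, service, and URL are just examples):

```shell
# Run a one-off command that can only see the wg0 interface
systemd-run -p RestrictNetworkInterfaces=wg0 --wait curl https://example.com

# For a persistent service, set the same property in a drop-in:
#   systemctl edit qbittorrent.service
# then add:
#   [Service]
#   RestrictNetworkInterfaces=wg0
```

If wg0 goes down, the service simply loses network access instead of leaking through eth0, which is the kill-switch behavior people usually want from gluetun.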
The more classic ip/route tools can also be used for that: create a routing table for the programs you want to force onto the VPN and another for the ones that should use your LAN directly. Set those programs to bind to the respective interface and the routing tables will take effect and send the traffic to the right place.
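A rough sketch of that classic ip/route approach, assuming a WireGuard interface wg0 with address 10.0.0.2; the table name and number are illustrative:

```shell
# Register a dedicated routing table (one-time)
echo "100 vpn" >> /etc/iproute2/rt_tables

# Default route for that table goes through the tunnel
ip route add default dev wg0 table vpn

# Traffic sourced from the tunnel address uses that table, so a
# program bound to 10.0.0.2 is forced through the VPN
ip rule add from 10.0.0.2 lookup vpn
```

Programs left to bind to your LAN address keep using the main routing table and go out directly.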
If you're using Docker or similar, to make things simpler you can also create a network bridge for the containers you want to restrict to the VPN and another for everything else. Then you set each container to use one bridge or the other.
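A sketch of the bridge idea; the subnet, network names, and images are illustrative, and it assumes the host already has a "vpn" routing table pointing at the tunnel:

```shell
# Bridge with a fixed subnet so the host can policy-route it
docker network create --subnet 172.30.0.0/24 vpn_net

# Bridge for containers that should use the LAN directly
docker network create lan_net

# On the host, send that subnet's traffic through the VPN table
ip rule add from 172.30.0.0/24 lookup vpn

# Attach each container to the appropriate bridge
docker run -d --network vpn_net --name qbittorrent lscr.io/linuxserver/qbittorrent:latest
docker run -d --network lan_net --name jellyfin lscr.io/linuxserver/jellyfin:latest
```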
There are multiple ways to get this done; throwing more containers at it, like gluetun, and dragging in xyz dependencies and opinionated configurations from somewhere isn't the only one, nor the most performant for sure. Linux is designed to handle these cases.
In terms of homelab stuff, I know a lot of people appreciate the containerized approach.
What I said applies to containerized setups as well. Same logic, just managed in a slightly different way.
Just fire up Wireshark and inspect what Firefox calls: there's a lot of calling home, and even if you change all the settings and config parameters to something sane, it will still contact a third-party analytics company. Mozilla also acquired an ad analytics company recently, for some reason.
Yeah, repositories and FTP don't include that, but it is kind of shady that the first way to get it (the website) for the majority of regular users (Windows/macOS) ships with a unique ID - after all, this is the company that goes all in on privacy…
Remember that Apple had to dumb down the macOS versions of Pages, Keynote and Numbers at some point (from iWork '09 to standalone apps) only to later release something even more dumbed down for iOS… and now we have the ultra-dumbed-down version for the web.