Hi, I recently acquired a pretty solid VPS for a good price, and right now I use it to run Caddy for two personal sites. When I moved to Lemmy I found out about this awesome community, and it got me really interested in selfhosting. I won’t be asking for tips on what to selfhost (but feel free to add what you use); there are a lot of posts about that to look through. But I was wondering: how are you accessing your selfhosted stuff? I would love to have some sort of dashboard with monitoring and statuses of all my services, so should I just set up WireGuard and then access everything locally? I wanted to have it behind a domain; how would I achieve that? E.g. my public site would be at example.com and my dashboard at dash.example.com, but only accessible locally through a VPN.

I started to learn Docker when setting up my Caddy server, so I’m still really new to this stuff. Are there any major no-no things a newbie might do with Docker/selfhosting that I should avoid?

I’m really looking forward to setting everything up once I have it planned out; the troubleshooting and fixing all the small errors is the most fun part for me. So, thank you for your help and ideas, and I can share my setup when it’s done.

  • cybersandwich@lemmy.world · 1 year ago

    The major thing noobs tend to mess up with Docker is not setting up volumes properly, so when you get rid of the instance, you lose all of your data.

    I also highly recommend docker-compose for ease of use.

    I’d recommend looking up security best practices for Docker as well. Things like setting a user ID and group ID for the containers add an additional layer of security.
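    A minimal docker-compose sketch of both points (the service name, image, and paths here are just examples):

    ```yaml
    services:
      myapp:
        image: nginx:alpine
        # run as an unprivileged uid:gid instead of root
        user: "1000:1000"
        ports:
          - "8080:80"
        volumes:
          # bind mount: the data lives in ./data on the host, so it
          # survives removing the container and is trivial to back up
          - ./data:/usr/share/nginx/html:ro
    ```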

    Oh and make sure you get your containers from trustworthy sources.

    • Moonrise2473 · 1 year ago

      Exactly. When, for example, the Nextcloud documentation says:

      To start the container type: docker run -d -p 8080:80 nextcloud

      it’s not exactly clear that all the data will be 100% lost when the container is removed.

      And further down, when the docs say “just use volumes to persist data”: fine, but how do you back up those volumes? No mention at all…

      They should tell you to mount a directory rather than a volume. Backing up a directory is easy and everyone can do it; backing up a Docker volume, good luck. Your data has an invisible time bomb.
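      For example (the host path is made up), the same Nextcloud container with its data directory bind-mounted to the host, which can then be backed up like any other directory:

      ```sh
      # persist /var/www/html to a normal host directory
      docker run -d -p 8080:80 -v /srv/nextcloud/html:/var/www/html nextcloud

      # backing up is then just an ordinary archive of that directory
      tar czf nextcloud-backup.tar.gz -C /srv/nextcloud html
      ```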

      • Mike@fikaverse.club · 1 year ago

        @Moonrise2473 @cybersandwich I both agree and disagree… but always use named volumes. It’s easier to manage/monitor your volumes; then use a <backup-container> (maybe rclone) that shares the same volume and sends the data to some safe place.

        Or, if you still prefer, tell Docker to use a custom path in your named volume section:

        volumes:
          myvolume:
            driver: local
            driver_opts:
              type: none
              o: bind
              device: /host/path/to/volume
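        A sketch of the backup-container idea in compose form (the app image and the rclone remote “myremote:backup” are placeholders):

        ```yaml
        services:
          app:
            image: nextcloud
            volumes:
              - myvolume:/var/www/html
          backup:
            image: rclone/rclone
            volumes:
              - myvolume:/data:ro
              # rclone's default config location inside the official image
              - ./rclone.conf:/config/rclone/rclone.conf:ro
            # one-shot sync of the shared volume; schedule it with cron
            command: ["sync", "/data", "myremote:backup"]

        volumes:
          myvolume:
        ```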

    • SniffBark@lemmy.world (OP) · 1 year ago

      I immediately started with using docker-compose because I was playing with a “playground” server from my provider and I wanted to be able to move my setup to the “production” server after setting things up. It’s much easier than the long docker run commands some docs suggest.

      One question about the UID and GID: I’ve run into some trouble because the official Caddy image runs as root, so I had to run php-fpm as root too, because otherwise it was causing problems. So what do you suggest for all my containers (I don’t mean Caddy and PHP right now)? Should I run everything as the same UID and GID, or every container with its own user?

    • Szwendacz@kbin.maciej.cloud · 1 year ago

      I would not recommend docker-compose for a beginner. First, one should learn the basics, then optionally switch to docker-compose to automate the stuff they already know. Also, bind-mount volumes are a better solution for long-term storage than default volumes, since Docker will never delete those, and their path on the host system is configurable.

  • CriticalMiss@lemmy.world · 1 year ago · edited

    I have port 22 open on IPv6 only, and I can only authenticate with my private keys, which are all added to .ssh/authorized_keys. Fail2ban is configured to keep the bots out, but the ban log is empty: either there are no bots operating on IPv6 yet, or my IP is so far out of reach it will take a bot a millennium to get to my address.

    Some set up WireGuard or another VPN protocol but I like having everything within reach as long as the device I’m carrying has my key on it.

    One thing you should avoid is opening your Docker containers to the web. If your VPS isn’t behind a NAT (they usually aren’t), be careful when binding ports: Docker usually bypasses whatever firewall configuration you may have, because it writes its changes directly to nftables.

    https://docs.docker.com/network/#published-ports
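    If a service only needs to be reachable from the host itself (or through a VPN or reverse proxy), one way to sidestep this is to bind the published port to loopback, for example:

    ```yaml
    services:
      db:
        image: postgres:16
        ports:
          # reachable from the host only, not from the internet
          - "127.0.0.1:5432:5432"
    ```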

    Other than that, remember that this is just a hobby (for now) and take a break when something doesn’t work or you don’t understand it. I personally made a lot of mistakes because I was eager to finish something and was rushing it.

    • hi_its_me@lemmy.world · 1 year ago

      That last paragraph is great advice. I get so frustrated at times. Sometimes it feels like I need to fix things ASAP when in reality it doesn’t matter. In many cases, coming back with fresh eyes helps considerably.

    • redcalcium@lemmy.institute · 1 year ago · edited

      Accidentally exposing a database port when you deploy a database container has bitten so many asses. ElasticSearch and MongoDB were famous for this: so many databases were exposed to the internet without authentication, because the owners didn’t know Docker can bypass an iptables-based firewall when publishing ports, and ElasticSearch and MongoDB didn’t ship with authentication enabled back then.

  • axzxc1236@lemmy.world · 1 year ago · edited

    some sort of dashboard with monitoring and statuses of all my services

    See if Uptime Kuma suits your needs.
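    A typical way to try it (roughly what the Uptime Kuma docs suggest; check them for the current command):

    ```sh
    docker run -d --restart=always \
      -p 3001:3001 \
      -v uptime-kuma:/app/data \
      --name uptime-kuma louislam/uptime-kuma:1
    ```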

    Are there any major no-no things a newbie might do with Docker/selfhosting that I should avoid?

    Allowing password-based SSH authentication; you should look into key-based authentication instead.
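    The relevant sshd settings look something like this (an excerpt; restart sshd after editing):

    ```
    # /etc/ssh/sshd_config
    PasswordAuthentication no
    PubkeyAuthentication yes
    PermitRootLogin prohibit-password
    ```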

    I wanted to have it behind a domain, how would I achieve it?

    Use a reverse proxy (like Caddy) which serves different content based on the domain name.
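    A minimal Caddyfile sketch for the setup described in the post (domain names, the site root, and the upstream port are examples; Caddy obtains the HTTPS certificates automatically):

    ```
    example.com {
        root * /srv/public-site
        file_server
    }

    dash.example.com {
        # dashboard app listening locally, e.g. on port 3001
        reverse_proxy localhost:3001
    }
    ```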

  • daFRAKKINpope@lemmy.world · 1 year ago

    I host in the way that you describe: “service.domain.com”. I use Cloudflare, docker, and Caddy.

    I don’t remember any pitfalls off the top of my head. Make sure to use HTTPS (port 443); everything on plain HTTP is basically open for everyone to see. Caddy should set that up for you automatically, though. I recently moved to Caddy from Traefik; it’s an awesome tool.

    Oh, here’s a pitfall. One time I opened a port, #22, for SSH access to my server. I installed fail2ban on it. One weekend I looked at my logs and found I’d banned hundreds of IP addresses. Some bot had found my open port and begun attacking the login with some kind of password list. I moved the port from the SSH default to something else and haven’t had a problem since.
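    Moving the port is a one-line change in the SSH daemon config (2222 here is an arbitrary example; restart sshd afterwards):

    ```
    # /etc/ssh/sshd_config
    Port 2222
    ```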

    Also, and this isn’t a requirement, just useful: I set up a VLAN for my selfhosted server. It’s firewalled from my local network. That way, if someone accesses my server, they don’t have access to my whole network.

    So, tl;dr: have fun and mitigate risk where you can.

    • SniffBark@lemmy.world (OP) · 1 year ago

      Yeah, I love Caddy so much. I’ve only ever used Nginx before, and it was a pain to configure. With Caddy, it’s just a few lines, and the automatic HTTPS is very nice.

      Thanks for the SSH port tip. I’ve already disabled password auth on all my servers and only use key auth, but I’ll move the port to something else for extra security.

  • himazawa@infosec.pub · 1 year ago

    Don’t expose anything from your local network to the internet (unless you want multiple new sysadmins in your house). Try Tailscale instead.

    • Szwendacz@kbin.maciej.cloud · 1 year ago

      Technically, any connection made from inside your local network can expose it to the outside world. Browsing the web, some nasty JS, and there you go.
      I personally have some stuff hosted on my home hardware (I can’t share details, obviously), but even the IP address of those services is not my home IP address. The extensive use of rootless containers and other cool stuff also makes me want to keep things that way.

      • himazawa@infosec.pub · 1 year ago · edited

        The difference is that you need way more interaction. Expose a webserver to the internet and see how many requests you get from bots alone.

        You can control what you navigate and how you interact with the outside world, but you can’t control how the outside world will interact with your services.