After extending the virtual cloud server twice, we’re at the maximum for the current configuration. And with this crazy growth (almost 12k users!), even the current setup is quickly approaching capacity again.

Therefore I decided to order a dedicated server, the same one as used for mastodon.world.

So, the bad news… we will need some downtime. Hopefully not too much. I will prepare the new server, copy (rsync) everything over, stop Lemmy, do a last rsync, and change the DNS. If all goes well, the downtime should be around 10 minutes, 30 at most. (With mastodon.world it took 20 minutes, mainly because of a typo :-) )
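
In case you’re wondering what that looks like in practice, here’s a rough sketch of the two-pass rsync approach in Python; the paths, hostname, and service name below are placeholders, not our actual setup.

```python
#!/usr/bin/env python3
"""Rough sketch of a two-pass rsync migration (paths, hostname and service name are placeholders)."""
import subprocess

SRC = "/var/lib/lemmy/"                   # hypothetical data directory on the old server
DST = "root@new-server:/var/lib/lemmy/"   # hypothetical destination on the new server

def sync():
    # -a preserves ownership/permissions/timestamps, --delete mirrors removals
    subprocess.run(["rsync", "-a", "--delete", SRC, DST], check=True)

sync()  # 1. bulk copy while the site is still up (slow, but no downtime yet)

subprocess.run(["systemctl", "stop", "lemmy"], check=True)  # 2. stop Lemmy so the data stops changing

sync()  # 3. the final pass only transfers what changed since step 1, so it is quick

# 4. point DNS at the new server and start Lemmy there (done outside this script)
```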

For those who would like to donate to help cover server costs, you can do so at our OpenCollective or Patreon.

Thanks!

Update: The server has been migrated, with around 4 minutes of downtime. For those who asked, it now runs on a dedicated server with an AMD EPYC 7502P 32-core “Rome” CPU and 128 GB RAM. That should be enough for now.

I will be tuning the database a bit, which may cause a few extra seconds of downtime, but just refresh and it’s back. After that I’ll investigate the cause of the slow posting further. Thanks to @veroxii@lemmy.world for assisting with that.

  • Mango@sopuli.xyz · 1 year ago

    I don’t understand why a dedicated server is a good idea, when the only true way to scale is to use something like Kubernetes, or Docker containers on ECS, with autoscaling.

    You’re just gonna run into more problems; you can’t vertically scale forever.

    • havocpants@lemmy.world · 1 year ago

      I think the performance bottleneck isn’t the web application; it’s the PostgreSQL database of comments and posts, which won’t scale horizontally (easily).

      • Mango@sopuli.xyz · 1 year ago

        True, but these are problems that have been solved. You put a cache in front of the database, use replicated database instances with sharding, and add load balancing, connection pooling, etc.
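
For readers wondering what that pattern looks like, here is a toy Python sketch of the read/write path being described: a cache in front of the database, writes going to a single primary, and reads load-balanced across replicas. Everything in it (the in-memory “databases”, the cache) is a stand-in purely for illustration, not how Lemmy or lemmy.world actually works.

```python
"""Toy sketch: cache-aside reads, writes to one primary, reads spread over replicas.
All stores here are in-memory stand-ins, purely for illustration."""
import random

primary = {}                    # stand-in for the primary PostgreSQL node
replicas = [primary, primary]   # stand-in read replicas (real ones replicate asynchronously)
cache = {}                      # stand-in for Redis/memcached

def write_post(post_id, body):
    primary[post_id] = body     # writes always hit the primary
    cache.pop(post_id, None)    # invalidate the cached copy so readers don't see stale data

def read_post(post_id):
    if post_id in cache:                  # 1. check the cache first
        return cache[post_id]
    replica = random.choice(replicas)     # 2. on a miss, load-balance across replicas
    body = replica.get(post_id)
    cache[post_id] = body                 # 3. fill the cache for the next reader
    return body

write_post(1, "hello lemmy.world")
print(read_post(1))   # cache miss -> replica read; a second call would hit the cache
```

In a real deployment the dicts would be PostgreSQL nodes sitting behind a connection pooler such as PgBouncer, and sharding would split the data across several such primaries.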