Let’s say I have to host 25 websites… How do I know how powerful my VPS should be? Which specs should it have, and how fast should the connection be to handle X visits per day?
How do you work out your system requirements BEFORE deploying a project? Do you just make estimates and then scale up? Or is there some kind of tool to benchmark? How do you handle this kind of stuff?
It depends on many factors including:
- visits of individual sites
- requirements of each site (memory, I/O, persistent storage, ephemeral storage, caching, databases, etc.)
So you’re right that you make an initial guess and go from there.
Many tools/sites/projects publish minimum system requirements, and you can use those stats to get an idea of your floor. Some frameworks even have sizing guidelines available. The one I use most often, for example, has a configurable memory footprint, so that’s a data point I personally use.
If they’re all the same type of site (e.g. Ghost blogs) using the same setup, it’s often less demanding, since you can pool resources like DBs and caching layers and come in below the sum of the published minimums (which for many sites include a dedicated DB as part of the requirements).
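To make that concrete, here’s the kind of back-of-envelope math I mean, as a rough Python sketch. Every number in it is an assumed placeholder, not a recommendation - measure your actual stack:

```python
# Back-of-envelope RAM estimate for N similar sites.
# All numbers are illustrative placeholders, not recommendations.
SITES = 25
APP_MB_PER_SITE = 150    # assumed per-site app process (e.g. one Ghost instance)
DB_MB_PER_SITE = 400     # assumed dedicated DB per site
SHARED_DB_MB = 1024      # assumed single pooled DB serving all sites
SHARED_CACHE_MB = 256    # assumed single pooled cache (Redis/memcached)
OS_OVERHEAD_MB = 512

dedicated = SITES * (APP_MB_PER_SITE + DB_MB_PER_SITE) + OS_OVERHEAD_MB
pooled = SITES * APP_MB_PER_SITE + SHARED_DB_MB + SHARED_CACHE_MB + OS_OVERHEAD_MB

print(f"dedicated per-site DBs: ~{dedicated / 1024:.1f} GB")
print(f"pooled DB + cache:      ~{pooled / 1024:.1f} GB")
```

With these made-up numbers the pooled layout needs well under half the RAM, which is why same-type sites are often cheaper to host together.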
Some sites might be higher traffic but use fewer resources, others might be the inverse.
Then there’s also availability. Are these sites for you? Is this for business? What kind of uptime guarantee do you need? How do you want to monitor that uptime and react to needs as they arise?
How best to handle this in a modern context also depends on how much ops work, and what style of it, you want to engage in.
Auto-scaling on an orchestration platform (something like K8s)? Cloud-provider auto-scaling of VMs? Something else? Do you want deployments managed as code via version control, or will this be more “ClickOps”? No judgement here, just a thing that will determine which options are best for you. I do strongly recommend some kind of codified, automated ops workflow - especially with 25 sites, but even with just a handful. The initial investment will pay for itself very quickly when you need to make changes and are relieved to have a blueprint of where you are.
If you want to set it and forget it there are many options but all require some significant initial configuration.
If you’re ok with maintenance, then start with a small instance and some monitoring and go from there.
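For “some monitoring” you don’t need a full Prometheus stack on day one - even a scripted probe run from cron on a separate machine will tell you when it’s time to grow. A minimal sketch, assuming placeholder URLs and a placeholder slow-response threshold:

```python
#!/usr/bin/env python3
# Minimal uptime/latency probe; run from cron on a box other than the
# one being monitored. URLs and threshold are placeholder assumptions.
import time
import urllib.request

SITES = ["https://example.com/", "https://example.org/"]
SLOW_MS = 2000  # alert threshold in milliseconds (assumed)

for url in SITES:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            elapsed_ms = (time.monotonic() - start) * 1000
            status = resp.status
    except Exception as exc:
        print(f"DOWN {url}: {exc}")
        continue
    flag = "SLOW" if elapsed_ms > SLOW_MS else "OK"
    print(f"{flag} {url}: HTTP {status} in {elapsed_ms:.0f} ms")
```

Swap the prints for an email, webhook, or pager call once you know what “slow” means for your sites.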
During setup and staging/testing the worst that can happen is your server runs out of resources and you increase its available resources through whatever method your provider offers. This is where as-code workflows really shine - you can rebuild the whole thing with a few edits and push to version control. The inverse is also true - you can start a bit big and scale down.
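To show how small that blueprint can be, here’s a minimal sketch using Pulumi’s Python SDK - just one IaC option among many (OpenTofu gets you the same thing in HCL). The AWS target, AMI id, and instance size are placeholder assumptions:

```python
# Minimal infrastructure-as-code sketch (Pulumi Python SDK, AWS target).
# The AMI id and instance size are placeholders; apply with `pulumi up`.
import pulumi
import pulumi_aws as aws

web = aws.ec2.Instance(
    "web-1",
    ami="ami-0123456789abcdef0",  # placeholder AMI id
    instance_type="t3.small",     # resizing is a one-line edit here
    tags={"Name": "web-1"},
)

pulumi.export("public_ip", web.public_ip)
```

Scaling up (or back down) becomes a one-line edit, a commit, and a re-apply - that’s the blueprint paying for itself.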
Again, finding what works for you is worth some investment (and by works I don’t just mean what runs, but what keeps you sane when things go wrong or need changing).
Even load testing, which you mentioned, is hard to get right and can be challenging to instrument and implement in a way that matches real-world traffic. It’s worth doing for sites that are struggling under load, but it’s not something I’d necessarily suggest starting with. I could be wrong here but I’ve worked for some software firms with huge user bases and you’d be surprised how little load testing is done out there.
Either way it sounds like a fun challenge with lots of opportunities for learning new tricks if you’re up for it.
One thing I recommend avoiding is solutions that induce vendor lock-in - for example, use OpenTofu in lieu of something like CloudFormation. If you do adopt a SaaS platform, try not to rely on the pieces of the puzzle that make it hard (sticky) to switch. Pay for tools that bring you value and save time, for sure, but balance that against your ability to change course reasonably quickly if you need to.
You can run a stress test, and compare your desired response times with the resource usage on the server side.
https://en.wikipedia.org/wiki/ApacheBench
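With ab that looks like `ab -n 1000 -c 10 https://example.com/` (1000 requests, 10 at a time). If you’d rather script it, here’s a minimal Python sketch along the same lines - the URL and request counts are placeholders - that reports the latency percentiles you’d compare against the thresholds below:

```python
# Minimal ab-style concurrent load probe - a sketch, not a replacement
# for a real load-testing tool. URL and counts are placeholders.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/"  # placeholder target
REQUESTS = 200
CONCURRENCY = 10

def timed_get(_):
    start = time.monotonic()
    with urllib.request.urlopen(URL, timeout=30) as resp:
        resp.read()
    return (time.monotonic() - start) * 1000  # milliseconds

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(timed_get, range(REQUESTS)))

print(f"median: {statistics.median(latencies):.0f} ms")
print(f"p95:    {latencies[int(len(latencies) * 0.95)]:.0f} ms")
print(f"max:    {latencies[-1]:.0f} ms")
```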
Take into account all the requests needed to load a website, and the fact that:
- if it takes more than 2 seconds, about 50% of your visits will leave
- 3 seconds or more, and most people will start thinking it’s down
- Google tries to keep theirs under 500ms
Loading some content in 100ms, then loading more in the background, is a reasonable compromise. You may want a very quick response time for the first few requests, then serve the rest from a possibly slower server, or one running at a lower priority.
If you’re on the public internet, also consider how you’re going to integrate with a CDN like Cloudflare. With luck it will absorb a lot of the load and provide DDoS protection.
Cloudflare scares me; they have way too much power. Maybe look for alternatives instead - best case ones not based in the USA, whose regard for privacy and security is going down the drain faster than you can blink.
The point is that a CDN is worth considering if your site can’t afford to be down and may be subject to high load or DDoS attacks. There are many CDNs.
That question is going to be impossible to answer without a lot more details. The number of websites is largely irrelevant (the RAM the web server needs just to know about each additional site is negligible). What you want to know is the total number of HTTP and HTTPS requests per minute (the latter being a bit more expensive) at peak times to estimate the required CPU horsepower, the amount of data transferred (network bandwidth and CPU to some degree), whether it will be mostly static pages or dynamic/scripted content (CPU and RAM), and of course disk space to store everything (a stock photo library will likely use more space than a pizza place).
If there’s a database backend you’ll want to add even more RAM and faster storage (both in terms of throughput and IOPS).
Also consider acceptable waiting times: an under-powered server will work just the same, only slower.

If you know a bit about the websites you want to host but need some pointers, maybe start by checking out some packages from other hosting providers (how much CPU and RAM does their ‘local chess club WordPress site’ package offer?) and go from there.
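The arithmetic for turning those inputs into a spec is simple enough to sketch. Every figure below is a placeholder guess to be replaced with your own measurements:

```python
# Back-of-envelope capacity math; all inputs are placeholder guesses.
peak_rps = 50          # peak HTTP(S) requests/second across all sites
cpu_ms_per_req = 30    # avg CPU time per request (dynamic pages cost more)
avg_resp_kb = 200      # avg response size including assets

cores_needed = peak_rps * cpu_ms_per_req / 1000     # CPU-seconds per second
bandwidth_mbps = peak_rps * avg_resp_kb * 8 / 1000  # KB/s -> Mbit/s

print(f"CPU:       ~{cores_needed:.1f} cores at peak (leave headroom)")
print(f"Bandwidth: ~{bandwidth_mbps:.0f} Mbit/s at peak")
```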
Don’t worry about scaling UP. Worry about scaling OUT. Make your service small, you don’t have to go micro but keep it as light as possible. Then when you need to, you can always add another clone of it. And another, and another.
I agree with everything said so far. Just wanna add that starting a little big and then shrinking resources can be less stressful if you value user experience more than some extra expense at launch.