Servers: Scaling Up Vs. Scaling Out
If your business is growing in a healthy way, an important decision is likely to loom on your horizon, one that may make huge differences to both your capabilities and the complexity of your hardware infrastructure.
When it comes to servers, a debate is currently raging with fierce intensity over scalability: namely, whether you should ‘scale up’ or ‘scale out’.
Before we get on to the more intricate details of this debate, let’s set out our stall and define what we mean by scalability.
What is scalability?
Scalability is one of those annoying words that is often used differently by different people, so I want to start by nailing the concept down so it doesn’t wriggle around too much when we start dissecting it.
Firstly, scalability in my mind is not the same as reliability. Reliability in this context is a server’s ability to handle small increases in workload, degrading smoothly rather than breaking abruptly. Scalability is concerned with the actions you take when operational growth pushes you to the limits of your server’s capacity.
Scalability is the ability of your server architecture to grow in accordance with the increasing amount of work that is required of it. While it is a difficult concept to define, we can use an abstract example here to help us.
A scalable system in this context is one whose performance increases as hardware is added. A business application that can service 4 users on a single-processor system is scalable if, after upgrading to a 4-processor system, it can service 15 users. If we had added the extra processors and could still only service 4 users, then our system is not scalable.
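The example above boils down to a simple ratio: throughput gained versus hardware added. A quick sketch, using the hypothetical user counts from the text:

```python
def scaling_efficiency(base_users, base_procs, new_users, new_procs):
    """Ratio of throughput gained to hardware added; 1.0 means perfect linear scaling."""
    speedup = new_users / base_users
    hardware_factor = new_procs / base_procs
    return speedup / hardware_factor

# Scalable system from the text: 4 users on 1 CPU -> 15 users on 4 CPUs
print(scaling_efficiency(4, 1, 15, 4))  # 0.9375 -- close to linear

# Non-scalable system: still 4 users after quadrupling the CPUs
print(scaling_efficiency(4, 1, 4, 4))   # 0.25 -- the extra hardware is wasted
```

An efficiency near 1.0 means the extra hardware is paying for itself; a value well below that is the warning sign that adding resources won’t help.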
Scaling up or scaling out?
So if you do need to increase the capacity of your server system by adding more resources, you face two options.
Scaling up means that you simply buy a bigger, meatier server. So if you have reached the limits of your 64 GB, 8-CPU server and you decide to scale up, your next move will be to get yourself a hulking 512 GB, 32-CPU behemoth of a machine. This is the most common way that businesses scale their servers.
Scaling out, on the other hand, means that instead of buying yourself a more powerful server, you spread your operations across a number of similarly sized servers. The idea here is to decentralize your operations by adding more nodes to your network.
But which is better?
Unfortunately, there is no simple answer to this question. Which strategy is better will depend on your business and its resources, and both have their inherent downsides.
The major criticism that is leveled at scaling up is the potential costs involved. If the reason behind wanting to scale up is a lack of memory, then the process will be easy but is likely to be expensive.
As an example, take the potential memory upgrades for the HP DL785. Here 8GB of memory is $299 while 16GB is $3287, so doubling your memory makes your costs skyrocket immediately. The same is going to be true if you want to scale up your CPU or storage.
Scaling out at first seems to be a better fit here. How many 8GB servers could you buy for the amount of money you would end up spending on scaling up? You are also likely to have more CPUs, peak watts and disk space as well.
So what’s the problem?
The problem is that there are a number of hidden costs associated with scaling out that you may not consider at first. While you may get more bang for your initial buck, you will end up paying substantially more in operating system and SQL licensing costs, because you have to pay for each of your new machines. You will also be paying to power many separate pieces of hardware rather than just one larger one.
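Those hidden costs are easy to capture in a rough total-cost-of-ownership comparison. The sketch below uses entirely made-up placeholder figures (hardware prices, licence fees, wattages and the electricity rate are all illustrative, not quotes), but it shows how per-machine licensing and power can flip the outcome even when the raw hardware spend is identical:

```python
def total_cost(servers, hardware_each, license_each, watts_each,
               price_per_kwh=0.12, years=3):
    """Rough TCO: hardware + per-machine licences + electricity over `years`.
    All inputs are illustrative placeholders, not real prices."""
    hours = years * 365 * 24
    power_cost = servers * watts_each / 1000 * hours * price_per_kwh
    return servers * (hardware_each + license_each) + power_cost

# Hypothetical scale-up: one large server
up = total_cost(servers=1, hardware_each=20000, license_each=3000, watts_each=800)

# Hypothetical scale-out: eight small servers -- same total hardware spend,
# but a licence and a power supply for every node
out = total_cost(servers=8, hardware_each=2500, license_each=3000, watts_each=300)

print(round(up), round(out))  # 25523 51569
```

With these (invented) numbers, the scale-out option costs roughly twice as much over three years, and almost all of the gap comes from licences and electricity rather than the servers themselves.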
These long term costs may not matter if your business expansion leads to a significant rise in profits, but that is submitting your livelihood to the gods of business fate.
So I hope I have cleared up a few things with regard to this contentious issue while going out of my way to be as balanced as possible, but there really is no clear-cut better option.
Does anyone have any tips, anecdotes or advice to help people decide which path is best for them?
James Duval writes for Hardware about server system solutions, and many other things beginning with an ‘s’.