Dumping gear in the public cloud: It's about ease of use, stupid

Look at the numbers - colocation might work out cheaper

Sysadmin blog Public cloud computing has finally started to make sense to me. A recent conversation with a fellow sysadmin had me rocking back and forth in a corner muttering "that's illogical".

When I emerged from my nervous breakdown I realised that capitalising on the irrationality of the decision-making process within most companies is what makes public cloud computing financially viable.

For certain niche applications, cloud computing makes perfect sense. "You can spin up your workloads and then spin them down when you don't need them" is the traditional line of tripe trotted out by the faithful.

The problem is that you can't actually do this in the real world: the overwhelming majority of companies have quite a few workloads that aren't particularly dynamic. We have these lovely legacy static workloads that sit there and keep the meter ticking.

Most companies absolutely do have non-production instances that could be spun down. Enterprise sysadmins I've spoken to reckon that many dev and test environments could be turned off approximately 50 per cent of the time. If you consider that there are typically three non-production environments for every production environment, this legitimately could be a set of workloads that would do well in the cloud.

While that is certainly worth considering, it only really works if it's implemented properly. Even if you can spin some workloads up and down enough to make hosting them in the public cloud cheaper than local, do you know how to automate that? If you don't – or can't – automate some or all of those workloads, are you going to remember to spin them up as needed? What if you get sick?
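For what it's worth, the automation itself isn't exotic. Here's a minimal sketch – assuming AWS, the boto3 SDK and a hypothetical "environment" tag convention on your instances, none of which is mandated by anything above – of a job you could schedule to stop non-production kit out of hours:

```python
# Minimal sketch: stop non-production EC2 instances outside office hours.
# Assumes AWS via boto3 and a hypothetical "environment" tag convention;
# adapt the filters and the schedule to whatever your estate actually uses.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # region is an example


def non_production_instance_ids():
    """Return IDs of running instances tagged environment=dev or environment=test."""
    paginator = ec2.get_paginator("describe_instances")
    pages = paginator.paginate(
        Filters=[
            {"Name": "tag:environment", "Values": ["dev", "test"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    ids = []
    for page in pages:
        for reservation in page["Reservations"]:
            ids.extend(i["InstanceId"] for i in reservation["Instances"])
    return ids


def stop_non_production():
    """Stop every matching instance and return the list of IDs acted on."""
    ids = non_production_instance_ids()
    if ids:
        ec2.stop_instances(InstanceIds=ids)
    return ids


if __name__ == "__main__":
    stopped = stop_non_production()
    print(f"Stopped {len(stopped)} non-production instances")
```

Point a cron job or scheduler at something like that for close of business, pair it with a matching start_instances run in the morning, and the 50 per cent figure above stops depending on somebody remembering.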

For the majority of workloads proposed to be placed in the public cloud, I always seem to be able to design a cheaper local alternative fairly easily. This often applies even to the one workload for which cloud computing is arguably best suited: outsourcing your disaster recovery (DR) setup.

Colocation is still a thing

When I talk about DR with most businesses – big or small – they have a binary view of the world. They see the options as either building their own DR site, or using a public cloud provider. Somewhere in the past five years we seem to have collectively forgotten that a vast range of alternative options exist.

The first and most obvious option is simple colocation. There are any number of data centres in the world that will rent you anything from a few Us of rack space to several racks' worth for peanuts. Or, at least, "peanuts" when compared to the cost of public cloud computing or rolling your own secondary data centre.

In addition to traditional unmanaged colocation, most colocation providers will also offer you dedicated servers: the provider pays the initial capital cost of the hardware and leases it to you along with the rack space. There's also fully managed hosting available for both "you own the hardware" and "you lease the hardware" options.

In almost all cases these colocated solutions are cheaper than a public cloud provider for DR, and DR is the only bulk public cloud workload for which I've been able to come close to making a financial case for businesses smaller than a 1,000-seat enterprise. (Okay, dev and test under some circumstances can be worth it as well.)
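To make the comparison concrete, here's a back-of-envelope sketch. Every figure in it is an illustrative assumption, not a quote from any provider; the point is the shape of the arithmetic, so substitute your own numbers:

```python
# Illustrative back-of-envelope only: every number below is a placeholder
# assumption, not a quote from any provider. Plug in your own figures.
HOURS_PER_MONTH = 730

# Hypothetical always-on cloud DR footprint
cloud_instances = 10
cloud_hourly_rate = 0.20       # $ per instance-hour, assumed
cloud_storage_monthly = 500.0  # $ per month for replicated storage, assumed
cloud_monthly = (cloud_instances * cloud_hourly_rate * HOURS_PER_MONTH
                 + cloud_storage_monthly)

# Hypothetical colocated alternative
rack_rent_monthly = 800.0   # $ per month, assumed to include power and bandwidth
hardware_capex = 30000.0    # $ up front for servers and switches, assumed
amortisation_months = 36    # write the hardware off over three years
colo_monthly = rack_rent_monthly + hardware_capex / amortisation_months

print(f"Cloud DR: ${cloud_monthly:,.0f}/month")
print(f"Colo DR:  ${colo_monthly:,.0f}/month")
```

Run the same sums with your own quotes and workload sizes; in my experience the colo column usually comes out lower once the hardware is amortised over its useful life.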

So how is it that so many businesses choose the public cloud? As the debate unfolded I began to realise that the viability of the public cloud has nothing to do with the economic arguments and everything to do with politics.
