Cloud platforms promise unparalleled speed and ease of application development and deployment. AWS, GCP and Azure offer a dazzling array of services designed to make life easy for their customers’ engineers – whether developers, DevOps or data scientists.
It’s easy to fall into the trap of thinking that, just because you’re using a scalable platform and its latest tools, resource efficiency is a given. This can be a long way from the truth.
To explain why, consider one of the recent innovations in cloud computing to go mainstream: “containerisation”. A container is a package of application code and all its operating dependencies that can run quickly and reliably across different computing environments.
Containers offer a range of benefits: portability between different platforms and clouds, faster software-delivery cycles, easier implementation of modern, microservices-based software architectures and more efficient use of system resources.
This last point is true in theory. Efficiency gains arise because microservices run in containers are more resource-efficient than monolithic applications run on physical or virtual servers: they start and stop more quickly, and they can be packed more densely and flexibly onto their host hardware.
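The density argument can be made concrete with a small sketch. The workload sizes, node capacity and one-VM-per-app baseline below are entirely hypothetical, but the comparison shows why packing many small containers onto shared nodes consumes fewer vCPUs than giving each application its own virtual server:

```python
# Hypothetical workloads, each needing a fraction of a vCPU.
workloads = [0.5, 0.25, 1.0, 0.75, 0.5, 0.25, 0.5, 0.25]

# Baseline: one 2-vCPU virtual server per application.
node_vcpus = 2
vcpus_dedicated = len(workloads) * node_vcpus

# Containerised: first-fit-decreasing packing onto shared 2-vCPU nodes.
nodes = []  # remaining free capacity on each node
for w in sorted(workloads, reverse=True):
    for i, free in enumerate(nodes):
        if free >= w:
            nodes[i] -= w  # place the container on an existing node
            break
    else:
        nodes.append(node_vcpus - w)  # no room anywhere: start a new node

vcpus_packed = len(nodes) * node_vcpus
print(vcpus_dedicated, vcpus_packed)  # → 16 4
```

Real schedulers are far more sophisticated than first-fit, but the arithmetic holds: the same work fits on a quarter of the provisioned capacity.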
And in practice? That depends entirely on how effectively containers are deployed.
Containers require tools to manage and orchestrate where and when they run within and across servers — or in the case of cloud, virtual server instances.
However, this creates another layer of abstraction between DevOps and the underlying hardware, which ultimately determines operational cost. It is contributing to an increasingly common DevOps problem: infrastructure utilisation is no longer being effectively managed.
Looking deeper, there are three causes of this that Soimplement sees in organisations in varying measures:
The financial implications can be substantial.
It is not uncommon for us to find companies running cloud-container infrastructure that is over-provisioned by a factor of two – sometimes equating to hundreds of thousands, or even millions, of dollars per year. That is wasted money which could, no doubt, be put to far better value-enhancing use, not least your EBITDA.
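To put a back-of-envelope number on that claim, here is a minimal sketch. The fleet size and hourly rate are hypothetical, chosen only to show how quickly a factor-of-two over-provision compounds into an annual figure:

```python
# Hypothetical fleet: 200 instances at $0.20/hour, running year-round.
instances = 200
hourly_cost = 0.20
hours_per_year = 24 * 365

annual_spend = instances * hourly_cost * hours_per_year

# Over-provisioned by a factor of two: half the paid-for capacity sits idle.
overprovision_factor = 2
annual_waste = annual_spend * (1 - 1 / overprovision_factor)

print(f"${annual_spend:,.0f} spent, ${annual_waste:,.0f} wasted per year")
# → $350,400 spent, $175,200 wasted per year
```

Scale the fleet up, or use larger instance types, and the waste reaches the millions cited above.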
So, how can this be avoided? Here are five actions software firms can take to address this:
If you are non-technical and this is starting to feel complicated, fear not.
Cloud is all about matching compute power to demand. If you’re running more than a handful of virtual servers, there are few reasons why you should not achieve at least 50%, and ideally 75%, utilisation of what you pay for.
Ask your engineering executives these questions:
Deploying applications to the cloud has never been easier. Doing it well — efficiently, securely, reliably and at scale — requires solid engineering disciplines.
It is all too easy for the sophistication of today’s cloud services to obscure the basic economics and allow costs to balloon out of control.
Cash is king. Be sure you’re not wasting it.