The elastic capabilities of public cloud computing are both good and bad. We can provision all the resources (and hire all the cloud architects) we want to attack performance issues. But tossing money at the problem willy-nilly doesn’t fix the root causes, which are typically traceable to bad design or bad coding.
However, some basic tricks of the cloud application design and development trade should allow you to at least reduce the number of performance issues (and, by extension, trim monthly cloud bills). Here are three of the top ways to improve performance and cost efficiency without breaking the bank:
Leverage serverless computing. Although many report larger cloud bills when running serverless, that has not been my experience. Indeed, if you’re unwilling, or not budgeted, to dig into the application code and database schema to fine-tune performance, this approach leaves resource provisioning to the cloud providers (using automation). They know how to optimize their resources better than you do.
The idea is that the serverless system won’t over- or underprovision; it allocates resources based on the application’s real-time requirements. Optimization of cloud resources becomes the responsibility of the serverless platform itself, and many will find that unoptimized applications and databases perform better once refactored for serverless.
If you’re thinking that this is just technological duct tape, you’re right. It assumes that you can’t or won’t redesign, recode, and redeploy the applications, but that you can port them to a serverless platform. The cost savings should pay for the move to serverless within the first month, but gather metrics before and after to make sure.
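To make the "port, don’t rewrite" idea concrete, here is a minimal Python sketch of a request handler moved onto a function-as-a-service platform. The handler signature and event shape assume AWS Lambda behind API Gateway, and the business logic is a hypothetical placeholder; the point is that there is no instance sizing or autoscaling configuration anywhere in the code.

```python
# A minimal sketch of a request handler ported to a serverless platform.
# Assumes AWS Lambda invoked through API Gateway; the business logic is a
# hypothetical placeholder. Note what is absent: no instance sizing, no
# autoscaling policy -- capacity is provisioned per invocation by the platform.
import json


def lambda_handler(event, context):
    """Handle one API request; scaling and provisioning are the platform's job."""
    order_id = (event.get("pathParameters") or {}).get("orderId")

    # The existing business logic would be ported here largely as-is.
    result = {"orderId": order_id, "status": "processed"}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result),
    }
```

The before-and-after metrics mentioned above are what tell you whether the port actually paid off: compare invocation costs and response times against the old instance-based bill and monitoring data.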
Co-locate your data with your application. It’s a fundamental architectural principle to store data as close as possible to the application that uses it. I’m still seeing databases stored in different regions than the applications that use them, databases stored on different clouds, and even on-premises databases paired with applications running in the public clouds. The result is more latency, poorer performance, more outages, and higher compute and network traffic bills.
Many of these design flaws were driven by concerns about security and compliance. Even if your data needs to stay in-country, or even on-premises, make sure the application instances that use that data run close to it. You’ll be happier with the cloud bill, and your users will be much happier with the performance and resiliency.
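One quick way to check whether your data really is close to your application is to measure the network round trip from the application host to the database endpoint. The Python sketch below is a rough illustration; the hostname, port, and 10 ms threshold are hypothetical stand-ins, and sustained high numbers simply suggest the data lives a long way from the code that uses it.

```python
# A rough sketch: measure average TCP connect time from the application host
# to its database endpoint. Hostname, port, and the 10 ms threshold are
# hypothetical; same-region connects are typically in the low single digits.
import socket
import time


def connect_latency_ms(host: str, port: int, samples: int = 5) -> float:
    """Average TCP connect time to the database endpoint, in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        total += time.perf_counter() - start
    return (total / samples) * 1000


if __name__ == "__main__":
    latency = connect_latency_ms("orders-db.example.internal", 5432)
    print(f"average connect latency: {latency:.1f} ms")
    if latency > 10:
        print("database is likely not co-located with this application")
```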
Audit the applications. This is the most laborious and thus the most expensive of all three tricks, although the level of effort required has fallen in the past several years.
Code scanners, application profilers, and application testing platforms can reveal a great deal about how applications can be refactored to increase performance, sometimes by as much as threefold. Of course, this means recoding the applications in places, plus testing and redeployment, which is what makes this approach the most expensive. But it’s also the most effective. See how that works?
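As a small illustration of what an application audit turns up, the following Python sketch profiles a single suspect code path with the standard library’s cProfile module; the order-processing function and sample data are hypothetical. Real audits layer code scanners and load-testing platforms on top of this kind of spot profiling.

```python
# A minimal sketch of spot-profiling one suspect code path with cProfile.
# The process_orders function and sample workload are hypothetical stand-ins
# for a real application hot path.
import cProfile
import pstats


def process_orders(orders):
    """Stand-in for an application hot path worth auditing."""
    return sorted(
        sum(line["qty"] * line["price"] for line in order["lines"])
        for order in orders
    )


if __name__ == "__main__":
    # Hypothetical sample workload, just big enough to produce useful timings.
    sample = [
        {"lines": [{"qty": qty, "price": 9.99} for qty in range(50)]}
        for _ in range(2000)
    ]

    profiler = cProfile.Profile()
    profiler.enable()
    process_orders(sample)
    profiler.disable()

    # The top of this report shows where the time actually goes -- those
    # functions are the refactoring targets.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```

The output ranks functions by cumulative time, which is exactly the short list you take into the recode-test-redeploy cycle described above.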
Performance and optimization issues are like back pain: we’re all going to encounter them at some point in our lives. When you do, any of these three tricks is a good place to find relief.