
Gaining Control of the Hidden Cost of Code

Sai Subramanian
Senior Director, Hitachi Application Reliability Centers, Hitachi Digital Services

February 13, 2023


By this time, nearly every business manages some aspect of its operations in the cloud. In fact, 451 Research found that more than 76% of enterprises use two or more clouds. That’s not surprising given the number of applications running in the cloud, from on-demand software and infrastructure management to file sharing, data storage, analytics and governance. And let’s not forget about cybersecurity.

It’s wonderful having all these capabilities available at the touch of a finger. But in most cases, a single cloud service provider cannot offer all the solutions a company needs. Furthermore, the differences between providers are significant, with each offering its own proprietary solutions. As a result, an application running on AWS may not directly translate to Google’s cloud services, and it may run more effectively on one than the other.

While the multicloud allows us to pick and choose providers, that flexibility comes with challenges, not the least of which is increased cost.

When we all began our shift to the cloud roughly two decades ago, one of the top motivations was cost savings. It was a new day. We no longer had to manage and maintain an on-premises infrastructure and all its associated costs.

With the move to multicloud, we flipped cost savings on its head and threw economies of scale out the window. We now find ourselves in a more complex environment, with each solution requiring dedicated teams with specialized skill sets to manage different applications on different platforms. And each of these solutions comes with its own costs, driving up spending.

According to Gartner, cloud spending will grow to almost $1.8 trillion in 2025. Many providers offer budgeting tools to make the cost of their services easier to understand. While this allows some visibility into where companies spend their cloud dollars, other costs remain unclear or even unknown, especially in a multicloud environment.

The Hidden Cost of Code

The complexity of the multicloud can have unintended consequences that directly impact the bottom line.

Recently, our team was reviewing the cloud program of a Fortune 100 customer. We found a large portion of their infrastructure going unused; the customer was surprised it even existed. Regardless, they were paying for it and, by our estimates, roughly 30% more than they had to. For a company of that size, this equated to millions of dollars in potential savings.

Unfortunately, these kinds of problems are not uncommon. In this instance, the person who coded the server provisioning failed to alert the development team, and the costly capacity went unused.

Whether the cause is a lack of awareness or a strong desire to execute quickly, too often not enough thought goes into the coding process and how the code is implemented.

In fact, research from Synopsys found that the cost of poor software quality in the U.S. exceeded $2.41 trillion in 2022, up from $1.31 trillion two years earlier. Seen against figures like that, the impact a single inefficient piece of code can have cannot be overstated.

Gaining Control

The cost of code refers to the expenses associated with running and maintaining code on a cloud computing platform. Costs can vary depending on factors such as the infrastructure and resources needed to run the code, the complexity of the code and the length of time the code is in use. When these costs aren’t top of mind for everyone operating within the cloud, tremendous value can be lost.
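
As a rough illustration of how those factors add up, the sketch below estimates the monthly bill for a small, always-on service. The instance counts and hourly rates are hypothetical, chosen only to show the arithmetic; they are not quotes from any provider.

```python
# Illustrative only: how infrastructure size, scale and runtime drive the
# cost of code. All rates and counts below are hypothetical.
HOURS_PER_MONTH = 730  # average hours in a month


def monthly_cost(hourly_rate: float, instance_count: int) -> float:
    """Estimated monthly spend for a set of always-on instances."""
    return hourly_rate * instance_count * HOURS_PER_MONTH


# Over-provisioned: four large instances running around the clock.
over_provisioned = monthly_cost(hourly_rate=0.40, instance_count=4)

# Right-sized: two smaller instances handling the same workload.
right_sized = monthly_cost(hourly_rate=0.20, instance_count=2)

print(f"Over-provisioned: ${over_provisioned:,.2f}/month")
print(f"Right-sized:      ${right_sized:,.2f}/month")
print(f"Potential saving: ${over_provisioned - right_sized:,.2f}/month")
```

Even at these modest illustrative rates, right-sizing cuts the bill by roughly three-quarters; multiplied across hundreds of services, that hidden cost becomes material.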

Taking control of hidden costs requires a holistic approach with plenty of assessment, visibility and planning. Not only do we need to be aware of potential cost concerns, but we must also execute quickly when unexpected issues arise.

Our first priority in reining in costs is to fully understand and embrace FinOps. As defined by the FinOps Foundation, “FinOps is an evolving cloud financial management discipline and cultural practice that enables organizations to get maximum business value by helping engineering, finance, technology and business teams collaborate on data-driven spending decisions.”

Basically, FinOps teams are responsible for managing and optimizing the costs associated with running code in the cloud. Using various tools and techniques, FinOps professionals are able to track, analyze and optimize the costs of running the code. They work closely with development teams to ensure the code is designed and implemented in a cost-effective manner. This can include using cost-efficient services, optimizing resources, and automating scaling to minimize costs.
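
To make that tracking step concrete, here is a minimal sketch of the kind of cost report a FinOps engineer might pull. It assumes an AWS environment with credentials already configured and uses the Cost Explorer API via boto3; the date range is illustrative, and other providers expose comparable billing APIs.

```python
# Minimal sketch: pull one month of spend grouped by service to spot
# unexpected or unused line items. Assumes AWS credentials are configured
# and Cost Explorer is enabled for the account.
import boto3

ce = boto3.client("ce")  # AWS Cost Explorer

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-01-01", "End": "2023-02-01"},  # illustrative dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print each service's spend, largest first, so surprises surface quickly.
groups = response["ResultsByTime"][0]["Groups"]
for group in sorted(
    groups,
    key=lambda g: float(g["Metrics"]["UnblendedCost"]["Amount"]),
    reverse=True,
):
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if amount > 0:
        print(f"{service:<40} ${amount:,.2f}")
```

A report like this is only a starting point; the same data can feed budgets, anomaly alerts and chargeback by team or application.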

Identifying inefficiencies in code is something we at Hitachi Vantara have been working on closely with our customers and partners over the past several years. In part, those efforts led to the creation of Hitachi Application Reliability Centers (HARC). HARC combines cloud expertise and industry-leading methodologies to help companies manage their cloud journeys, avoiding issues with reliability and availability as well as unforeseen costs.

Cost management remains critical to getting the most out of our cloud experience. But without proper visibility and understanding, efficiencies will continue to be missed, and opportunities will pass us by.

Establishing a FinOps program with a cost and spending sensibility must be a top priority to succeed in this competitive marketplace.

Sai Subramanian is Chief Architect, Hitachi Application Reliability Centers, Hitachi Vantara.

Sai Subramanian

Sai Subramanian leads a dynamic engineering organization focused on reliability, resilience and scale for enterprise applications and products. He has deep expertise in new technologies and in using an analytical approach to quantify risk.