By 2025, more than 90% of all enterprises will have an automation architect on their books, according to Gartner.1 That’s up from fewer than 20% today. Why? Because more businesses will come to see the value of automation and AI in the data center. Having an intelligent infrastructure that can monitor itself and self-heal when needed is a powerful capability, but for many businesses the journey to intelligent infrastructure will be a gradual one.
Hitachi Vantara recently held a hybrid-cloud roundtable event, attended by CIOs from businesses across a range of sectors. When asked how many already had a dedicated automation architect, the responses were mixed. One attendee stressed the need to assess the processes first, not necessarily the technology: the right tool can only be chosen once you know what problem it is meant to address.
The needs of the wider business have to be at the forefront throughout any automation project. A prevailing opinion was that a business architect should play a key role. Some automation projects have failed because IT teams tried to drive them from the inside out; in doing so, they never got buy-in from the business, which didn’t see the value. It shouldn’t, therefore, be a case of anointing an automation architect at all costs — there should be a business case and a perceived value across the organization.
Even some of the attendees who don’t currently have an automation architect could see the direction things are moving, and it is towards unlocking greater levels of resiliency. Over recent years, telephony platforms have become more intelligent. So have network platforms. Increasingly, the providers of these systems are incorporating the ability to self-monitor and self-heal. Consequently, companies increasingly expect this functionality to be included in the services they buy.
AI in the Data Center
AI differs from hardware assets because it gets better over time. Unlike storage or network solutions that reach end of life in half a decade, AI becomes more powerful with increased use. There are two broad categories of AI: domain-centric AI, which follows a particular product; and domain-agnostic AI, which combines information from different vendors into one self-service portal. This type of approach to intelligence can hold large benefits for enterprises looking to make their IT infrastructure more efficient and less susceptible to risk.
For instance, you may want to spin up five virtual machines, and the VMware software might tell you to run those virtual machines on 20 specific nodes. At this point, the storage system could kick in and say that it knows the workload of those 20 nodes. Not only would it know that the deployment could cause performance issues, but it would also know when those issues would be most likely to happen. These are the insights that will help companies make the best decisions for their production environments based on reliable historical data.
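The kind of workload-aware placement decision described above can be sketched in miniature. The Python snippet below is purely illustrative — the node names, utilization figures, and headroom threshold are assumptions for the example, not part of any VMware or Hitachi Vantara API — and shows how historical utilization data could rank candidate nodes and flag those likely to cause performance issues:

```python
from statistics import mean

# Hypothetical historical CPU-utilization samples per node (percent),
# as might be collected by a monitoring/telemetry layer over time.
HISTORY = {
    "node-a": [35, 40, 38, 42],
    "node-b": [80, 85, 90, 88],
    "node-c": [10, 12, 15, 11],
}

def rank_nodes_for_placement(history, headroom_threshold=70):
    """Rank candidate nodes by average historical utilization (least
    loaded first) and flag any whose average exceeds the threshold."""
    ranked = sorted(history, key=lambda node: mean(history[node]))
    risky = [n for n in ranked if mean(history[n]) > headroom_threshold]
    return ranked, risky

ranked, risky = rank_nodes_for_placement(HISTORY)
print(ranked)  # preferred placement order, least-loaded node first
print(risky)   # nodes where deployment could cause performance issues
```

A production system would of course draw on far richer telemetry (I/O latency, time-of-day patterns, storage saturation), but the principle — placement guided by reliable historical data rather than current snapshots alone — is the same.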
Some of the attendees told their own stories of the journey they are going on to maximize AI in the data center. There were stories of how teams have created centralized data stores to look for trends and correlations across the estate. Some are on their way to moving to the cloud, but this shift doesn’t come without its complications, particularly if there’s a need to move applications to the cloud and make them more robust, intelligent and easier for providers to support. Moving to the cloud needs a cloud mentality. As such, a change in culture among delivery teams helps to unlock the most benefits.
But what do these benefits look like? Although cost was often trumpeted as a major advantage, most companies now appreciate the fact that resilience and customer experience are the biggest reasons to opt for cloud platforms. That’s not to say that there aren’t cost benefits to be had. One IT lead told the roundtable event that, thanks to the cloud, the lead time to build applications during the pandemic was eight or nine days. They got to market quicker with their apps and there was a reduced cost when compared with the lengthy development cycles they were used to previously.
The conversation also shifted to containerized applications. Some people exercised caution on the topic because of the approach they are taking to cloud migration. There was general agreement about the danger of lift and shift because of the risk of merely moving existing problems to the cloud instead of fixing them first. One attendee reported that they were prioritizing containers for cloud-native applications.
To come full circle, the discussion returned to the fact that the only language understood by boards is that of business. A metrics-based narrative must be presented to the rest of the business if it is to get on board with new approaches and new solutions. This is a challenge that needs to be addressed, but it doesn’t just serve to gain the backing of the board: It also helps to chart the journey with tangible benefits that can be measured over time.
The feedback that Hitachi Vantara receives about containers from CIOs across EMEA is that they enable teams to create 20 apps in the time it would previously have taken to create a single monolithic application. By creating building blocks of microservices that can be reused as and when needed, businesses can be more agile. This is where app-first infrastructure plays a role, ensuring a simplified and unified application infrastructure that breaks down silos in the organization and across application developers, as well as operations and security teams — with the right automation built in. With an increased reliance on applications in the future world of business, this approach will be a valuable tool.
Tom Christensen is Global Technology Advisor and Executive Analyst at Hitachi Vantara.
1Gartner, “Enterprise Storage as a Service Is Transforming IT Operating Models,” Jeff Vogel and Robert Preston, 2 March 2021.