When they migrate workloads from on-premises environments to the cloud, organizations expect to become nimbler while reducing overall operating and capital expenses. That’s not always the outcome. In fact, a recent survey of business and IT professionals at large enterprises across 11 industries and 17 countries found that most companies aren’t getting anything close to the full value they expected after migrating to the cloud.
But help is on the way.
Earlier this year, Hitachi unveiled a broad plan to help customers optimize their cloud workloads for resiliency and reliability – and to dramatically bring down their total cost of ownership. Dedicated cloud specialists at physical and virtual Hitachi Application Reliability Centers (HARC) around the world will help clients adopt engineering-led operations to build and manage their hybrid and multicloud workloads using modern engineering best practices and frameworks. DevSecOps principles and FinOps strategies help strengthen engineering resilience, improve application reliability and security, and optimize cloud costs.
The first physical HARC began operations in Hyderabad, India. And just this week, the first U.S. physical HARC opened its doors in Texas.
We spoke recently with Marimuthu Muthusamy, who runs global delivery services for physical and virtual HARCs from his Dallas headquarters, to learn more about the operation.
Q1: Give us the basic idea underpinning Hitachi Application Reliability Centers.
Given the complexity of hybrid and multicloud management, Hitachi Application Reliability Centers (HARC) is all about helping clients build, run, operate and optimize their cloud workloads through virtual and physical centers across the globe. With the physical HARCs – one in India, another now in Dallas and a third planned for Europe – we’re deploying a “follow the sun” model where we offer 24×7 always-on support.
Think about your home security system that monitors your home, creates the right alerts and signals for help as needed, all while you’re peacefully enjoying a family trip to the movies or a vacation. We all want the freedom to live our lives, knowing our most valuable assets are taken care of. And increasingly enterprise customers want that freedom for their cloud workloads—and with predictable, reasonable costs.
Q2: What are customers telling you about cloud costs?
One client we’re working with thought that moving to the cloud would bring down the company’s overall OpEx and CapEx. The reality was completely the opposite, and many companies across the industry are seeing similar results. In fact, it’s been estimated that companies will look to cut cloud waste by 50 percent even as they increase their investment in the cloud.
Q3: Besides cost, what do cloud customers say is important to them?
First is the availability and reliability of applications and workloads. A few years back, we were dealing with three- or four-tier application architectures, where managing availability meant load balancers and external monitors watching application components and restarting them when they went down. We are now dealing with microservices architectures in which thousands of VM instances host thousands of containers, managed by highly distributed clusters across regions. That architectural complexity compounds reliability and availability risks. This is where we apply resilience and chaos engineering principles to build fault tolerance into applications so they withstand failures.
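One common fault-tolerance pattern from the resilience-engineering toolbox is the circuit breaker: when a dependency keeps failing, stop calling it and fail fast until a cool-down passes, rather than letting retries cascade across a distributed system. The sketch below is purely illustrative (all names and thresholds are hypothetical, not a Hitachi implementation):

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker sketch: after repeated failures,
    trip open and fail fast until a cool-down period elapses."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures  # consecutive failures before tripping
        self.reset_after = reset_after    # seconds to stay open
        self.failures = 0
        self.opened_at = None             # time the breaker tripped, or None

    def call(self, func, *args, **kwargs):
        # While open, reject immediately instead of hammering the dependency.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            # Cool-down elapsed: allow a trial call ("half-open" state).
            self.opened_at = None
            self.failures = 0
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # any success resets the failure count
        return result
```

Chaos engineering exercises this kind of logic deliberately, injecting failures in controlled experiments to verify that the application degrades gracefully instead of cascading.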
Q4: Talk briefly about HARC’s approach.
We understand companies are at different places in their cloud adoption maturity, so we have methodologies, frameworks and tools to help every organization at any stage improve application reliability, cost, security and performance.
One example: while most reliability and cost concerns with cloud workloads surface downstream, during operations, those issues are largely determined by how the workload is designed upstream. HARC services establish an integrated approach between Dev and Ops teams so that workloads are designed to run optimally. We facilitate a reliability-focused approach by creating a common backlog between the Dev and Ops teams.
Underpinning all of this is a bias toward hyperautomation for efficiency and productivity gains, helping you move from reactive to preventive to predictive stages across your cloud journey, to the point where designing and operating around reliability, availability, security and cost is part of your culture. We have built solution accelerators such as HCAP that observe and automate the operations process, from managing SLOs to incident response.
Q5: When it comes to deciding where to put workloads, what are the key considerations?
The primary considerations for where a workload should live are its associated costs, mission and business criticality, change requirements and architecture, along with any associated technical debt.
Architectures need to be reviewed not only for compatibility but also for cloud-native optimization. Next come security and compliance requirements such as SOC 2 and HIPAA, especially when the workload handles PII (personally identifiable information) and other sensitive data. The next critical consideration is integration with, and dependencies on, other systems that may still be running in an on-prem data center; these need to be weighed carefully to avoid downtime during migration.
Q6: How does HARC compare with the competition?
The idea behind HARC is to provide a modern way to operate cloud workloads. While others may apply engineering practices such as site reliability engineering (SRE), DevOps or FinOps in pockets and as standalone services, HARC integrates all of these principles into a comprehensive workload management function covering cloud infrastructure, applications and data.
This comprehensive offering, coupled with our hyperautomation capabilities and observability robotics platform, puts us in a unique position, well ahead of the competition.
Q7: HARC combines best practices, automation technology and people. How do you deal with the challenge of finding talent?
One, we are aggressively going to market to hire talent with skills covering cloud, SRE, DevOps, FinOps and security. Two, we are working with third-party partners to foster a train-and-hire model: we work closely with partners to develop a comprehensive training curriculum, which they use to train and upskill cohorts of college graduates in the specific skills HARC requires. Three, we are creating opportunities for internal Hitachi employees to upskill in specific technology areas and become part of HARC.
Q8: Why did you choose Dallas for HARC?
Dallas is one of the fastest growing hubs for tech talent and provides us an ideal, centralized location to support clients across the Americas.
- IDC FutureScape: Worldwide Cloud 2021 Predictions, Doc # US46420120, October 2020
- Press Release: Hitachi Vantara Opens Application Reliability Center in Dallas
- Insights: Cloud Reliability & the Rise of Engineering-Led Ops