Data-Driven and Cloud-Ready Infrastructure

Michael Zimmerman
Managing Editor, Insights, Hitachi Vantara

February 23, 2022

Q&A with Russell Skingsley

The road to the data-driven enterprise is not for the faint of heart. The continuous waves of data pounding into ever more complex hybrid environments only compound the ongoing challenges of management, governance, security, skills, and rising costs, to name a few. But Hitachi Vantara has developed a path forward that combines cloud-ready infrastructure, cloud consulting and managed services to optimize applications for resiliency and performance, and automated DataOps innovations. This holistic approach establishes a protected digital core that stretches from the data center, through the cloud, to the edge. The result is continuous access and availability to the right data at the right time for analysis and insights, accelerating data-driven decision making. Insights Managing Editor Michael Zimmerman caught up with Russell Skingsley, Global Vice President, Technical Sales at Hitachi Vantara, to better understand the cloud-ready infrastructure pillar of the strategy.

Q1. What role does infrastructure play in a data-driven enterprise that people may not understand or consider?

Russell Skingsley

Given the current narrative around cloud, virtualization, containerization and software-defined storage, the casual observer might be forgiven for forgetting that all of these things must run on something physical. The “cloud” is a good analogy for large resource pools available through ubiquitous access, but those large resource pools are built on physical infrastructure.

Similarly, you can’t have a data-driven enterprise without a collection of data, and you can’t have a collection of data without somewhere to store it. In the same way, you can’t analyze data without processing power, and you can’t have processing power without computers.

The storage devices that house data, the computers that process and analyze that data, and the networks that facilitate access to both all fall into the category of infrastructure.

So infrastructure is clearly fundamental to an enterprise’s ability to be data-driven. Hitachi has been in the business of that type of infrastructure for over 50 years now – a history almost unmatched among our peers, with perhaps one or two exceptions.

I think the heart of the question, though, is how infrastructure has evolved to meet data-driven enterprise requirements, and that comes down to the operational agility with which infrastructure can be brought to bear on making data-driven decisions.

What people may not fully appreciate is that while computers have become faster and storage has become larger, the key to making data-driven decisions is providing the easiest possible access to these resources to the people closest to the business – not necessarily to the people who understand computers best.

At Hitachi, we see our role as continuing to provide best-in-breed underpinnings while simultaneously building a bridge that makes their power accessible to the people who understand the business best.

Q2. How can infrastructure be leveraged to support cloud environments?

All clouds are ultimately built on infrastructure, so all clouds leverage infrastructure. But I think the real question here is how different types of infrastructure are suited to different elements of a cloud architecture.

One needs to remember that the predominant cloud model today is “hybrid cloud.” Under the original NIST definitions, the term covered the use of any two islands of cloud – public, private or community – with some kind of data and/or workload mobility between them.

It’s probably fair to say that common usage in recent times has adopted the term “multicloud” for any combination of cloud types, while “hybrid cloud” has come to imply that one of those elements is homed on-premises. In that sense, I would suggest that for most of our customers, “hybrid cloud” generally means an on-premises private cloud in combination with some public cloud services.

In that context, the way infrastructure is leveraged for the on-premises elements is quite different from the public cloud elements. In the public cloud, the infrastructure needs to be cost-efficient, scalable, repeatable and automatable – and, frankly, each element needs only to be “good enough,” because scale and uniformity will cover any individual weaknesses.

For the on-premises elements, though, the infrastructure requirements are quite different. Each element needs to be performant, reliable, predictable and able to handle bespoke enterprise requirements. For example, an on-premises storage array may provide a 100% data availability guarantee in its own right, whereas a public cloud storage system may be built from commodity devices that offer no such guarantee individually but, as part of a massively scaled system, are deemed good enough.

So it’s important to realize that in a hybrid cloud, different parts of the infrastructure have different characteristics. What we need to achieve, as much as possible, is abstraction of these attributes in the eyes of the users, so that they have the same agile, self-serve experience regardless of the underlying differences that infrastructure locality might bring.

Q3. What is Hitachi Vantara doing about it?

We’ve been designing our solutions, from our storage to our converged and hyperconverged systems, to be increasingly “cloud-ready” and to integrate with the most important players in the most common cloud ecosystems.

That means developing systems from a cloud-centric point of view that not only support cloud-based operations but are optimized to run within them, regardless of where the underlying infrastructure is found.

As such, our new storage and converged systems are integrated with the most dominant cloud-related offerings and open technologies. For example, for clouds built with microservices and containerized applications in mind, we support Kubernetes, Red Hat OpenShift, and VMware Tanzu for container orchestration and robust application development.

We’ve also partnered with Equinix to provide extended hybrid capabilities. The new Hitachi Cloud Connect for Equinix brings a “near-cloud” backup solution to on-premises environments for quick and easy cloud-based resiliency. It also gives customers an anchor point for storage outside the public cloud providers but connected to them – a recognition that public cloud sometimes makes sense for compute, but not always for long-lived and growing storage.

On top of all of this, our new Hitachi Application Reliability Services handle everything from designing a cloud strategy to modernizing, migrating and operating cloud workloads for resiliency, cost and performance.

Q4. How do these infrastructure innovations work with the other new solutions and services from Hitachi Vantara?

Hitachi Vantara has architected a holistic vision for modernizing the digital core that enables companies to process data and workloads anywhere across their enterprise for availability and access to data and insights.

For example, all our new systems are aligned and optimized to work with our new Hitachi Application Reliability Services as well as our new Infrastructure Automation Platform, also announced today.

The IAP is designed to provide “touchless,” hyper-automated data center infrastructure that includes monitoring, reporting and incident remediation for greater reliability and efficiency.

As we said at the top of these questions, all of this “cloudiness” needs to run somewhere, and all of this data needs to be stored somewhere. So it all starts with a modern digital core, supported by our new Hitachi Vantara systems like the new Virtual Storage Platform (VSP) E1090, an all-new enterprise-class midrange flash system.

Also announced today is the new Virtual Storage Software (VSS) for Block, a software-defined distributed data platform for open block-based workloads running on commodity x86 servers. This scale-out solution adds performance and capacity as requirements grow, running in VMware.

In addition, we added the new Unified Compute Platform RS, a flexible hybrid cloud platform powered by VMware Cloud Foundation with Tanzu Kubernetes services designed to improve business agility across on-premises and public clouds.

And, so it’s clear, we have a continuing commitment to hybrid cloud architecture: starting with mission-critical on-premises elements, embracing the public cloud elements, and providing simplified, uniform access to all of it.

Q5. What’s the main message behind today’s infrastructure news?

Our mission is to help enterprises modernize their digital core by bringing advanced computing power and data storage to the data wherever it resides – in the datacenter, the public cloud, at the edge, or all of the above.

Digital transformations can be daunting, but the results are lasting. As enterprises grow ever more complex, improving the availability, accessibility, scalability, management, and security of data is paramount to success.

We’ve been working with our customers on evolving enterprise IT for over 50 years, so we are perfectly placed to help them add cloud to that architecture in an optimal, location-independent way, providing a foundation for their digital transformation.

Be sure to check out Insights for perspectives on the data-driven world.

Michael Zimmerman

Mike is managing editor of thought leadership, including Hitachi Vantara Insights and the corporate Newsroom. Before joining the company, he spent 25+ years in journalism and communications, working with edit teams and business leaders to craft stories of import and interest.