Hear how automating infrastructure helps IT deliver apps faster, ensure better service, eliminate silos, and increase efficiencies to reduce costs.
Leading Storage Analyst
George Crump of Storage Switzerland is a leading storage analyst, focused on the emerging subjects of big data, solid state storage, virtualization and cloud computing. He is widely recognized for his blogs, white papers and videos on such current approaches as all-flash arrays, deduplication, SSDs, software-defined storage, backup appliances and storage networking. His popular whiteboard sessions, in which he leads key vendors through their solutions to data center problems, have also received tremendous attention.
Director of Product Management Cloud and Converged Solutions at Hitachi Vantara.
Hi, I'm George Crump, Lead Analyst with Storage Switzerland. Thank you for joining us today. I'm here today with Ravi Srinivasan. He is the Director of Product Management, Cloud and Converged Solutions, at Hitachi Vantara. Ravi, thanks for joining us today.
Nice meeting you.
We want to talk a little bit about IT infrastructure management and what's driving the shift toward automation. I mean, let's face it, there is a lot of complexity in data centers, with silos of infrastructure that require multiple management tools. Businesses are looking to really simplify that infrastructure, as well as to automate IT management and improve efficiencies to reduce cost. How are you guys helping customers with these challenges?
Customers today deploy a wide range of application services on top of Hitachi infrastructure: bare metal, virtualized, or containers. They leverage private cloud or hybrid cloud. It's a pretty complex infrastructure that needs to be enabled for these kinds of application services. For that, we need to provide more assurance, with validated designs of the solutions. On top of that, we provide a management tool, Hitachi Unified Compute Platform Advisor (UCP Advisor), which provides infrastructure automation, enabling consistent infrastructure and services. What we do is provide policy-based automation in order to enable on-demand services. When these capabilities are exposed through the API, customers can leverage automation suites such as Red Hat CloudForms to build out the infrastructure services from the service catalogs.
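The on-demand, catalog-driven flow Ravi describes could be sketched roughly like this. This is a minimal illustration, not UCP Advisor's actual API: the endpoint path, catalog item names, and payload fields are all assumptions for the sake of the example.

```python
import json

def build_provision_request(catalog_item, policy, size_gb):
    """Assemble the JSON body a service catalog or orchestration suite
    would POST to an infrastructure automation API.

    All field names here are illustrative assumptions, not the real
    UCP Advisor REST schema."""
    return json.dumps({
        "catalogItem": catalog_item,   # e.g. a pre-validated catalog entry
        "policy": policy,              # policy governing placement and QoS
        "capacityGb": size_gb,
    }, sort_keys=True)

body = build_provision_request("gold-tier-datastore", "prod-lob-finance", 500)
# An automation suite (e.g. CloudForms) would then POST this body to a
# hypothetical endpoint such as:
#   https://ucp-advisor.example/api/v1/provision
```

The point is that the catalog item and policy, not the caller, decide how the resource is actually laid out on the infrastructure.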
Okay. And a lot of that functionality is included with the environment, correct?
Yes. Most of the time we see all these tools already exist in an environment, and we come in and plug into them with the rich set of REST APIs that we enable. That allows them to integrate and get going.
Okay. That makes sense. The other thing is, it's really not just the infrastructure build teams, I think, that are struggling. Just look at the operations center. They're inundated with service tickets and alarms. How is Hitachi helping these teams?
We are very cognizant of who our users are. When we look at the stakeholders, they include the operations teams, and we know that most of their time is spent on the upkeep of these business services. As such, we want to help them establish or improve their closed-loop incident management processes. The UCP Advisor management tool provides that set of APIs and the policy-based management they can use to reduce, and resolve, problems. We want to make sure we enable these capabilities for those IT operators.
Talk a little bit about auto remediation, I know that's one big area. It just takes a really long time for these operations folks to expand and reclaim storage. And, like I said before, they're really inundated with service tickets. So the API is the key to making all of that happen, being able to plug into these tools.
Absolutely. When you look at some of the capabilities the Advisor provides, you can define your infrastructure policies, for example, for lines of business. You may define policies for your application services. And when you identify and call out which policies need to be acted on, based on some threshold, then you'll be able to react without manual intervention. That means you can resolve those problems faster. You can trust what the automation does, and that helps customers build confidence in it.
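A per-line-of-business policy with a threshold and an action, as Ravi describes it, could be modeled in a few lines. This is a minimal sketch with an in-memory policy table; real UCP Advisor policies are richer, and the metric names, thresholds, and actions below are illustrative assumptions.

```python
# Hypothetical policy table: one policy per line of business or
# application service, each with a watched metric, a threshold,
# and a pre-approved remediation action.
POLICIES = {
    "finance-app": {"metric": "storage_used_pct", "threshold": 80,
                    "action": "expand_storage"},
    "web-tier":    {"metric": "cpu_used_pct", "threshold": 70,
                    "action": "add_compute"},
}

def evaluate(service, metrics):
    """Return the remediation action if the service's metric crosses
    its policy threshold, else None (nothing to act on)."""
    policy = POLICIES[service]
    if metrics[policy["metric"]] >= policy["threshold"]:
        return policy["action"]
    return None
```

So `evaluate("finance-app", {"storage_used_pct": 85})` triggers the pre-approved action without anyone opening a ticket, while readings under the threshold return `None`.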
Is it a step-by-step process where first you make recommendations and say, you know, this is what we'd like to resolve, click OK to continue? And then in the future, once you get confident in that, you can just say, as an operations person, do that from now on?
We do a value mapping and categorization of our service catalogs and then that helps us to identify what are the candidates for auto remediation. And we work with the customers to configure the application tools and the management tools for auto remediation.
And then I know that visibility is a big factor here. Talk a little bit about how you provide visibility into those capabilities.
So, UCP Advisor provides federated management capabilities for the infrastructure across data centers, which means you don't need to learn new tools. UCP Advisor is a plugin to [VMware] vCenter. So, for VM administrators, there's no new tool to learn, which means they can rely on their experience and continue to manage the infrastructure for these private cloud needs.
So, before we close this section, take me through some on-the-ground examples. Two that I see a lot are: a situation where you're running out of capacity, and another where you need to add more resources. Talk a little bit about how that would work with these tools.
Sure. We have customers who have configured thresholds for capacity: capacity for storage or for compute. Then, for application needs, you may want to say: hey, if I exceed, say, 70% or 80% of my resource needs, then make sure you provision this infrastructure. The old way, you open a ticket and then manually walk it through different organizations to get a result. It takes days or probably weeks; you need to involve the infrastructure team and various organizations, including change management and release management, to get it done. What auto remediation helps with is that you define the policies up front for each line of business, at the application-services level. And now you're confident: hey, this has been blessed by every part of the organization, so I can rely on the automation to provide consistent service.
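The ticket-free capacity example above boils down to a small closed loop: when utilization crosses the pre-approved threshold, grow capacity by a pre-approved increment until it is back under the line. The threshold, increment, and function name here are illustrative assumptions, not product values.

```python
THRESHOLD_PCT = 80       # blessed up front by change/release management
EXPANSION_STEP_GB = 100  # pre-approved increment per remediation

def auto_remediate(used_gb, provisioned_gb):
    """Grow provisioned capacity until utilization drops below the
    threshold; returns the new provisioned size. In practice each
    iteration would be a call to the provisioning API, not a local add."""
    while used_gb / provisioned_gb * 100 >= THRESHOLD_PCT:
        provisioned_gb += EXPANSION_STEP_GB
    return provisioned_gb

# 850 GB used of 1000 GB is 85%, over the line, so one expansion
# brings it to 1100 GB (about 77% used) and the loop stops.
```

Compare that single automated pass with the days-to-weeks ticket workflow it replaces.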
Yeah, I think that defining things up front is almost like creating a plan prior to something going wrong, and that executes well. What I like about these types of solutions is that you get rewarded for that planning. A lot of times you do planning and you don't really see the results of it, but these tools really reward you for that planning, because now they can execute on it. Right?
Exactly. It's more of a preventive care, but it also frees them to focus on other projects and initiatives that can support the business needs.
Yeah. Well, it makes our jobs less mundane, right? Because it takes that work out of it. So, let's get into the third section here. Let's talk a little bit about some of the new applications being developed on these modern infrastructures. IT teams are under pressure to deliver these apps to market faster than ever. What can customers do to get ahead of this, and how can Hitachi help?
We have users who are typically our traditional infrastructure administrators, and we also have users who are site reliability engineers, in modern DevOps terms, who rely on Hitachi infrastructure for deploying modern applications. Which means we need to clearly understand what the business architecture is. We have users who deploy on bare metal, users on virtualized infrastructure and containers, and now serverless; it's all about application density: how can you pack more in? So when we look at those architectures, we want to make sure we "front end" our APIs and automation capabilities to enable them as part of the CI/CD pipeline. It's not just about enabling a container-based workload, but also about participating in the CI/CD pipeline to help the developers, the testers, and release management automate the whole pipeline. As such, we provide integrations with Terraform and with Ansible. And now, with Hitachi's recent acquisition, the Hitachi Cloud Accelerator Platform, we want to create these kinds of blueprints as part of the CI/CD pipeline, enabled from dev to test to staging to production.
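To make the pipeline integration concrete, here is a minimal sketch of what a CI/CD stage calling an infrastructure automation API from Ansible might look like. It uses only Ansible's core `ansible.builtin.uri` module; the URL, catalog item, and body fields are hypothetical assumptions, and Hitachi's own Ansible content would replace the raw HTTP call in a real deployment.

```yaml
# Hypothetical pipeline stage: provision a test datastore before deploy.
- name: Provision test environment via automation API
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Request capacity from the service catalog
      ansible.builtin.uri:
        url: "https://ucp-advisor.example/api/v1/provision"  # illustrative
        method: POST
        body_format: json
        body:
          catalogItem: test-datastore
          capacityGb: 200
        status_code: 201
```

The same request could equally be driven from a Terraform template or an Automator workflow; the point from the interview is that the pipeline, not a human, invokes it.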
One of the things I am really impressed with is how that integration works, like with Ansible. Talk a little bit about what you guys are doing there and where you're seeing people use it.
You know, we have customers at different automation maturity levels. Some are very familiar with scripts; some are more visual, and for them we have a tool like Hitachi Ops Center Automator. Then we also have Ansible scripts. And at the other end there's Terraform, where you define templates that describe your resource needs. When we worked with our customers as part of the co-creation, what we identified was that, at the end of the day, customers do not care how it is enabled from an infrastructure point of view. All they look for is that resources are deployed in a meaningful way that is consistent every time they invoke the API.
Yeah, that's a really good point. What I see more often, I think, in the modern infrastructure applications versus, say, the more traditional applications, is that in the traditional applications those teams were concerned about storage and would do things to make sure storage behaved well. The modern app teams just expect it to work, and it's really on the infrastructure team to make sure it responds to what they need. The modern app teams don't take the special precautions the legacy database applications did.
What we have seen is that users who rely on orchestration like Kubernetes are going to invoke those APIs for additional resource provisioning. For the IT organization that means: hey, we don't want users going rogue. To provide a guard rail, we have policies that define a sandbox within which they can deploy these kinds of resources on demand, without the need for IT to approve each request. And then you have the orchestration tool to manage that infrastructure for the application services.
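On the Kubernetes side, the standard way to express that kind of sandbox is a namespace-scoped `ResourceQuota`: teams can provision on demand inside the namespace, but cannot exceed the pre-approved budget. The namespace name and limits below are illustrative assumptions; the resource kind and fields are standard Kubernetes.

```yaml
# Guard-rail sketch: cap what one line of business can consume
# on demand, without per-request IT approval.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: lob-finance-quota
  namespace: lob-finance          # illustrative namespace
spec:
  hard:
    requests.cpu: "20"
    requests.memory: 64Gi
    persistentvolumeclaims: "10"  # caps on-demand storage claims
```

Requests within the quota are served immediately by the orchestrator; anything over it is rejected rather than escalated to a ticket.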
So, just sort of wrapping up here, we've talked about everything from VMware all the way to containers, and this is all really being managed through a consistent API set. One of my concerns a lot of times nowadays with data centers is that they end up with worse "silofication" than they had before, because they have something for the legacy architecture, something for the virtualized architectures, and something for these modern architectures. It feels like you guys are giving them the tools to break some of those silos down. Is that true?
Absolutely. At the end of the day, they are looking for toolsets that provide consistent infrastructure services, backed by a rich set of APIs. And as an IT admin, you don't need to sweat it, because we enable role-based access to those APIs, which means you have the power to control who can access an API and who can invoke it. Based on the policies, they can also consume those resources, all through automation.
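The role-based access Ravi mentions amounts to a permission check in front of every API invocation. A minimal sketch, assuming a static role table; a real deployment would back this with the platform's identity provider, and the role and call names are illustrative.

```python
# Hypothetical mapping of roles to the automation APIs they may invoke.
ROLE_PERMISSIONS = {
    "vm-admin":  {"provision_vm", "expand_datastore"},
    "sre":       {"provision_vm"},
    "read-only": set(),
}

def can_invoke(role, api_call):
    """True if the role is allowed to invoke the given automation API;
    unknown roles get no permissions."""
    return api_call in ROLE_PERMISSIONS.get(role, set())
```

So a `vm-admin` can expand a datastore through automation, while an `sre` calling the same API is refused, which is the control that keeps a shared API set from becoming a free-for-all.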
Okay. So, Ravi, before we wrap up here, for people listening who are interested, is there any particular area of the Hitachi website they should go to for more information?
Go and visit the UCP Advisor landing page. There's a ton of innovation and investment happening in that area. We want to make sure we can help our users and customers manage this complex infrastructure.
Okay. Well, Ravi, thanks for joining us today on the podcast.
And there you have it. I'm George Crump, Lead Analyst, Storage Switzerland. Have a great day.