All Blogs

From Data Lake To Enterprise Data Platform: The Business Case Has Never Been More Compelling

Tim Langley-Hawthorne
Chief Information Officer, Hitachi Vantara

May 25, 2021

Companies have had only mixed results in their decades-long quest to make better decisions by harnessing enterprise data. But as a new generation of technologies makes it easier than ever to unlock the value of business information, change is coming.

We’ve already reaped gains at Hitachi Vantara, where I run a global IT team that supports 11,000 employees and helps more than 10,000 customers rapidly scale digital businesses. We invest ~$5M per year on our enterprise data platform – and in less than three years, we’ve returned multiples of that amount in cost savings and new revenue to the organization.

How did we do it? Through a combination of smart changes in technology, culture and process.

From Data Lake To Data Platform

In 2006, when the British mathematician Clive Humby coined the phrase "data is the new oil," he also rightly noted that data, like oil, remains raw crude if it's not refined. And so for years, technologists have been building the IT equivalent of drills, rigs and storage facilities to tap and refine the huge reservoirs of data that often sit submerged in silos throughout most large enterprises.

To put this into perspective, reports over the years have consistently found that data workers spend about half of their time searching for and preparing data instead of gaining insights that can help the business.

At Hitachi Vantara, building an enterprise data platform (EDP) marked a big step forward in our technological evolution.

An EDP is a full suite of organizational data, enterprise data tools and services – ingestion, processing, automation, security & virtualization, governance, machine learning and artificial intelligence. Our EDP includes 30-plus data sources and 50-plus terabytes of data. What’s more, it’s leveraged by sales, marketing, supply chain, services and many other teams throughout the company. And we are just at the beginning of our journey.

From a technology perspective, it’s easier than ever to create an EDP. Many of the major cloud providers have starter kits. Our platform integrates various technologies and vendors throughout the stack, from the major cloud providers to best-of-breed vendors and our own technologies.

But “easier than ever” does not equal effortless. Here are some of the more common challenges we encountered, together with mitigation strategies.

User aversion. For most large enterprises, the toughest obstacles are psychological and cultural, particularly when it comes to figuring out how to pry people away from their silos. Many are inclined to hoard their data in their own SQL servers and spreadsheets because it gives them a sense of control (and sometimes, job security). That’s why company leadership – not just IT leadership – needs to emphasize the real power and value of bringing together enterprise data. You need clear, top-down directives that define this as a corporate priority. But top-down directives alone typically won’t suffice. That’s why it’s so important to work with teams starting at the grassroots level.

Pain points, use cases and ongoing dialogue. Trailblazers and early adopters exist in every large enterprise. We were fortunate to connect with several business partners who shared a burning desire to get value from their data. Many already had specific use cases in mind; others wanted to collaborate to pinpoint those use cases. Either way, it’s critical to start with a business problem. Once that’s fleshed out, how does the business group need information presented? How often do they need it? What are the data sources? Then on an ongoing basis, it’s important to have regular check-ins to ask, how are we doing and what can we do better?

Data quality before ingestion. Schema enforcement is critical to preventing mismatches between data sources and targets. For that reason, it’s important to build a framework and automated alerting system for schema enforcement and data quality before you begin the work of data ingestion. Without this, you risk data being unusable by your business partners. The right framework and alerting system will ensure higher data quality and allow you to quickly address problems if and when they arise.
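To make this concrete, a schema check with alerting can be sketched in a few lines. This is a minimal illustration, not the actual framework described above; the schema fields, types and sample rows are all hypothetical.

```python
# Hypothetical pre-ingestion schema check: validate rows against an
# expected schema and raise alerts instead of ingesting bad records.

EXPECTED_SCHEMA = {
    "order_id": int,
    "customer_id": int,
    "amount_usd": float,
}

def validate_rows(rows, schema=EXPECTED_SCHEMA):
    """Return (good_rows, alerts): rows that match the schema, plus an
    alert message for every row that is missing fields or has bad types."""
    good, alerts = [], []
    for i, row in enumerate(rows):
        missing = set(schema) - set(row)
        bad_types = {k for k, t in schema.items()
                     if k in row and not isinstance(row[k], t)}
        if missing or bad_types:
            alerts.append(
                f"row {i}: missing={sorted(missing)}, bad_types={sorted(bad_types)}"
            )
        else:
            good.append(row)
    return good, alerts

rows = [
    {"order_id": 1, "customer_id": 10, "amount_usd": 99.5},
    {"order_id": "2", "customer_id": 11, "amount_usd": 10.0},  # wrong type
]
good, alerts = validate_rows(rows)
print(len(good), len(alerts))  # 1 valid row, 1 alert
```

In a real platform the alert would feed an automated notification system rather than a list, but the principle is the same: reject and report bad records before they reach business partners.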

By taking these steps upfront, we were able to drive real ROI with several of our line-of-business partners, including:

  • Marketing/Sales. With a critical product launch coming up fast, our marketing leadership approached us with a need to analyze customer data to help maximize sales. By connecting certain marketing interactions (e.g., downloads, conferences, emails) with sales information, we were able to predict future sales opportunities, optimize the marketing mix and increase up-sell and cross-sell. The result: $27M of incremental sales opportunities.
  • Sales. A smaller but no less compelling service for sales involved building a new automated document translation function for customer proposals. Now, rather than go through an external, costly translation service, sales can use the EDP to translate proposals into various languages. They’ve been astonished by the accuracy. ROI: $3M in annual savings thus far, with the potential for much more as we look at rolling this out to legal and other functions with translation needs.
  • Pricing. The Hitachi Vantara pricing team was after margin improvements. By connecting and analyzing pricing information against competitive pricing and sales histories, we were able to create a pricing optimization tool that has delivered an estimated $32M in incremental revenue. In effect, we are now able to analyze disparate data to provide sales with a recommended “ideal price” to win a deal, mitigating the kind of “do anything to win” maneuvers that can often drive deep discounting that leaves money on the table.
  • Renewals/Services. Our renewal and services teams became a case study that is as simple as it is compelling. By connecting Salesforce data and ERP data, we were able to help them find $11M in additional sales opportunities.
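The renewals example above boils down to a join across two systems. Here is a minimal sketch of the idea, assuming illustrative (not actual) CRM and ERP record shapes: flag contracts nearing expiry that have no open renewal opportunity.

```python
# Hypothetical join of CRM opportunities and ERP contracts to surface
# renewal opportunities; all field names and data are illustrative.
from datetime import date

crm_opportunities = [  # open renewal opportunities from the CRM
    {"account": "acme", "type": "renewal"},
]
erp_contracts = [  # active support contracts from the ERP
    {"account": "acme", "expires": date(2021, 8, 1)},
    {"account": "globex", "expires": date(2021, 7, 15)},
]

def missed_renewals(contracts, opportunities, horizon=date(2021, 9, 30)):
    """Accounts whose contract expires before the horizon but have no
    open renewal opportunity in the CRM."""
    covered = {o["account"] for o in opportunities if o["type"] == "renewal"}
    return [c["account"] for c in contracts
            if c["expires"] <= horizon and c["account"] not in covered]

print(missed_renewals(erp_contracts, crm_opportunities))  # ['globex']
```

Each account the join surfaces is a concrete, actionable sales lead, which is why a simple connection of two previously siloed sources can translate directly into pipeline.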

Across the various business units within Hitachi Vantara, we’ve been able to improve organizational trust in our data. We’ve enabled self-service business intelligence and reporting, where business users can use new tools such as Power BI or Tableau, for example, to produce smart reports quickly. We’ve reduced complexity by decommissioning duplicative or siloed data marts by way of consolidation. And in migrating so much distributed, on-prem data onto the cloud, we’re running more efficiently and at a lower cost.

Yes, drilling for and refining enterprise data can at times feel every bit as complex and arduous as modern petroleum production. But with a corporate-wide commitment to change and investments in modern enterprise data platforms, companies can finally deliver actionable business intelligence and tangible returns that simply can’t be realized by siloed groups conducting skunkworks BI projects throughout the organization.

Join Tim Langley-Hawthorne at the virtual Hitachi Social Innovation Forum technical session, Hitachi on Hitachi: Finding Data’s Value in Our Enterprise Data Lake, to discover how Hitachi Vantara channeled the power in its data through a combination of smart changes in technology, culture, and process. Register here.

Tim Langley-Hawthorne is SVP, Chief Information Officer at Hitachi Vantara.

Tim is responsible for the global IT organization supporting Hitachi Vantara employees and its portfolio of edge-to-core-to-cloud digital infrastructure and solutions. His background includes deep cross-functional experience in customer service, strategic sourcing and financial management.