AI is Reshaping Data Centers – Is it Time to Rethink Storage?

By Jason Hardy, CTO, Artificial Intelligence & Atsushi Ishikawa, CTO, Network Storage

July 29, 2025

As artificial intelligence (AI) reshapes industries, it’s quietly revolutionizing the heart of IT: The data center. The explosive growth of AI workloads is driving up power usage, challenging cooling systems, and demanding a fundamental rethink of how we store and move data.

In this new landscape, flash storage stands out – delivering the performance, efficiency, and scalability that AI needs to truly accelerate. Yet, disk-based storage still plays a crucial role for many organizations, especially where cost or archival needs dominate. This isn’t about a sudden switch from one technology to another. It’s about finding the right mix to support each organization’s unique journey.

The real questions now are: How soon should you start evolving your storage strategy for AI? And how do you chart the best course forward?

Here are a few thoughts to help you get started.

The AI Energy Surge: A Tipping Point for Infrastructure 

AI workloads are power-hungry by design. As models grow in size and complexity, so do their energy demands. The International Energy Agency projects that global data center (DC) electricity use could more than double by 2030, with AI as a primary driver. Some forecasts suggest data centers could consume over 1,000 terawatt-hours annually – more than Japan's entire annual electricity consumption today.

This isn’t just a capacity issue. It’s a sustainability crisis in the making. With power densities 5-10x greater than those of typical data center applications – exceeding 100 kW per rack – and individual GPUs drawing up to 1,500W, traditional infrastructure is being pushed to its limits.

Something must change, and there’s no time to waste.

Storage: Quiet Contributor to the AI/DC Energy Crunch

While compute typically gets the spotlight, storage is a major – and often overlooked – contributor to data center energy use. Traditional hard disk drives (HDDs), still widely deployed, are inefficient by the power and performance standards of today’s AI workloads.

In contrast, all-flash NVMe solid state drives (SSDs) offer a highly compelling alternative for the following reasons:

  • Idle Power: HDDs consume 5-10W vs. NVMe SSDs at just 0.2-0.8W.
  • Performance per Watt: SSDs deliver up to 50x more IOPS per watt.
  • Density: Flash packs more capacity into less space, reducing both energy and cooling needs.

In one test, a single rack of SSDs matched the capacity and performance of 23 HDD racks while delivering 54x more read bandwidth and consuming just a fraction of the power. That’s not just a performance win; it’s a sustainability imperative.
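To put those per-drive figures in context, here is a minimal back-of-the-envelope sketch in Python. The idle-power ranges come from the list above; the per-drive capacities and the 1 PB target are illustrative assumptions, not results from the test described.

```python
# Back-of-the-envelope sketch: drives and idle power needed to hold the
# same usable capacity on HDD vs. NVMe SSD. The idle-power ranges come
# from the bullets above; capacities and the 1 PB target are assumed
# for illustration only.
import math

TARGET_TB = 1000  # 1 PB of usable capacity (assumed)

drives = {
    # name: (capacity in TB, idle watts) -- capacities are assumptions
    "HDD (7.5 W idle)":      (20, 7.5),   # midpoint of the 5-10 W range above
    "NVMe SSD (0.5 W idle)": (61, 0.5),   # midpoint of the 0.2-0.8 W range above
}

for name, (capacity_tb, idle_w) in drives.items():
    count = math.ceil(TARGET_TB / capacity_tb)
    print(f"{name:22s}: {count:3d} drives, {count * idle_w:6.1f} W at idle")

# Illustrative output: ~50 HDDs drawing ~375 W at idle vs. ~17 SSDs
# drawing ~8.5 W -- fewer devices, less space, and far less standby power.
```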

Cooling Innovation: From Optional to Essential

The thermal output of AI infrastructure has accelerated the need for liquid cooling, which offers superior rack density and heat dissipation compared with traditional air-cooled systems. Capable of reducing facility power by nearly 20% and total data center power by more than 10%, liquid cooling is no longer experimental – it has become essential for supporting chips that exceed 500W while keeping facilities within their power and carbon budgets.
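As a rough illustration of how those two percentages relate, the sketch below assumes a starting PUE of 2.0 (facility overhead equal to IT load); the PUE value is an assumption for illustration, not a figure from this post.

```python
# Rough illustration of how a cut in facility (non-IT) power translates
# into total data center savings. The ~20% facility reduction comes from
# the paragraph above; the PUE of 2.0 is an assumed starting point.

it_load_kw = 1000.0                     # IT load (assumed)
pue = 2.0                               # assumed legacy air-cooled PUE
facility_kw = it_load_kw * (pue - 1)    # cooling + other overhead
total_kw = it_load_kw + facility_kw

facility_savings = 0.20                 # ~20% facility power reduction (from the text)
new_total_kw = it_load_kw + facility_kw * (1 - facility_savings)

print(f"Total before: {total_kw:.0f} kW, after: {new_total_kw:.0f} kW")
print(f"Total reduction: {(1 - new_total_kw / total_kw):.0%}")
# With a higher starting PUE, the total savings exceed 10%.
```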

Reliability and the Total Cost of Storage Ownership

When it comes to data infrastructure, energy efficiency is only one part of the total cost of ownership (TCO) equation. Reliability also plays a critical role: drive failure rates, rebuild windows, and replacement cycles all add cost over a system’s life.

Bottom line: Reliability affects not just direct replacement costs but also operational expenses such as monitoring, maintenance, and performance consistency, all of which compound the energy-efficiency advantage of SSDs in the overall TCO picture.
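To show how these factors combine, here is a simplified sketch of the operational side of that TCO comparison, built around the 23:1 rack consolidation cited earlier. Acquisition pricing is deliberately excluded, and every numeric input below is an illustrative assumption rather than vendor data.

```python
# Sketch of the operational terms in a storage TCO comparison: energy,
# floor space, and drive replacements. Rack counts follow the 23:1
# consolidation example above; all other inputs are assumptions.

YEARS = 5
KWH_PRICE = 0.12             # $/kWh (assumed)
RACK_SPACE_COST = 3000.0     # $/rack/year for space and facilities (assumed)
HOURS = 8760 * YEARS

def operational_cost(racks, kw_per_rack, drive_count, unit_price, afr):
    energy = racks * kw_per_rack * HOURS * KWH_PRICE
    space = racks * RACK_SPACE_COST * YEARS
    replacements = drive_count * afr * YEARS * unit_price   # afr = annual failure rate
    return energy + space + replacements

# Assumed fleets delivering equivalent capacity (per the 23:1 example)
hdd = operational_cost(racks=23, kw_per_rack=10.0, drive_count=2300, unit_price=350, afr=0.015)
ssd = operational_cost(racks=1,  kw_per_rack=15.0, drive_count=200,  unit_price=3000, afr=0.005)

print(f"HDD racks, {YEARS}-year operational cost:      ${hdd:,.0f}")
print(f"All-flash rack, {YEARS}-year operational cost: ${ssd:,.0f}")
```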

Aligning ESG and AI Readiness Goals

The shift to all-flash storage aligns with a variety of broader strategic enterprise priorities:

  • Sustainability: Lower power use, smaller footprints, and reduced e-waste.
  • AI Performance: NVMe’s low latency and high performance are ideal for AI pipelines.
  • Energy Budgeting: Reduced energy consumption from storage can be reallocated to compute – where it’s needed most.

So, the benefits enjoyed at an operational level also contribute to delivering the company’s environmental, social, and governance (ESG) objectives. That’s a win-win all around.

Is It Time to Start Sunsetting Disk-Based Storage?

Every organization has unique needs shaped by its own technology journey, and some will still need to rely on a form of disk-based storage. And while HDDs still offer a cost-per-TB advantage for cold storage, their inefficiency makes them increasingly difficult to justify in AI-driven environments.

The roughly 50x (5,000%) performance-per-watt advantage of flash, combined with its space and operational benefits, makes a strong case for accelerating the transition, especially for organizations leaning into AI across the enterprise.

What’s Next: Opportunities for Innovation and Insight

As we consider when and how to begin navigating this shift, several questions remain ripe for exploration:

  • Time-to-ROI: How quickly do flash conversions pay off when factoring in energy, cooling, and space savings? (A simple payback sketch follows this list.)
  • AI-Specific Storage Patterns: How can we optimize storage architectures for AI workloads that are increasingly dynamic, data-intensive, and latency-sensitive?
  • Lifecycle Impact: What’s the full environmental cost of flash vs. HDD from cradle to grave, including manufacturing, operations, and disposal?
  • Hybrid Strategies: Where does HDD still make sense, and how do we intelligently balance it with flash to maximize both performance and cost?
  • Power-Aware Software: Can intelligent storage management and AI-driven orchestration further reduce energy use and carbon footprint?
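For the first of those questions, a simple payback calculation is sketched below. Every figure in it is a placeholder chosen to show the shape of the math, not a benchmark or pricing guidance.

```python
# Simple payback-period sketch for a flash conversion: migration capex
# divided by annual operational savings (energy, cooling, and space).
# All inputs are placeholder assumptions for illustration only.

migration_capex = 750_000.0        # net cost of the flash refresh (assumed)

annual_savings = {
    "energy":  95_000.0,   # lower storage power draw (assumed)
    "cooling": 40_000.0,   # reduced heat load (assumed)
    "space":   65_000.0,   # racks and floor space released (assumed)
}

total_annual = sum(annual_savings.values())
payback_years = migration_capex / total_annual

print(f"Annual operational savings: ${total_annual:,.0f}")
print(f"Payback period: {payback_years:.1f} years")   # ~3.8 years with these inputs
```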

As you and your team consider these questions, it is essential to keep in mind that AI is not just transforming what data centers do. It’s transforming what they need to become. The shift to all-flash storage isn’t just a performance upgrade. It’s a strategic pivot toward a more sustainable, resilient, AI-ready infrastructure.

As workloads evolve and environmental pressures mount, the data center of the future must be leaner, smarter, and greener. Embracing flash is a key step in that evolution. Not just for speed, but for stewardship.

It’s a journey to a digital ecosystem where performance and sustainability are no longer at odds but inextricably linked.

No Two Journeys are Alike

As we said at the start, every customer is at a different stage of their AI journey – some are already building, while others are just beginning to map out what’s possible. But regardless of where you are, one thing is clear: Modernizing your data center and building a strong, flash-enabled data foundation are key to unlocking AI’s potential.

That’s why it’s important to have a partner who sees the bigger picture. At Hitachi Vantara, we understand that successful AI isn’t just about storage – it’s about orchestrating the right mix of compute, networking, data management, and operational excellence. We bring deep experience across both traditional and next-generation data center environments, and we work side-by-side with our customers to design, optimize, and scale infrastructures that deliver lasting value.

Wherever you are on your journey – whether you’re focused on performance, efficiency, sustainability, or cost – we’re here to help you build an environment that accelerates, not limits, your AI ambitions. Let’s build the future of AI together, starting with the foundation your data and your business deserve.

Atsushi “Archy” Ishikawa

Atsushi Ishikawa is Chief Technology Officer for Network Storage at Hitachi Vantara.


Jason Hardy

Jason Hardy is Chief Technology Officer for Artificial Intelligence at Hitachi Vantara.