Hitachi Virtual Storage Platform (VSP) 5000 series is the culmination of over half a century of innovation in the IT sector. No other vendor is as committed as Hitachi to helping you and your customers.
Hitachi Virtual Storage Platform 5000 series systems lower costs by tailoring payments to your return on investment (ROI) and budget. Our flexible offerings enable you to utilize pay-as-you-grow and storage-as-a-service (EverFlex) options, where all upgrades are included within the contract.
Today you must balance the need to support today's applications with the desire to be ready for tomorrow's. Hitachi can help: by supporting an intermix of technologies and running open, mainframe and containerized workloads side by side, you can consolidate more applications and simplify your data center more than ever before. At the same time, you cannot afford any downtime, which is why today 87% of Fortune 100 financial institutions rely on Hitachi Vantara VSP storage to offer uninterrupted services to their customers.
Consolidating applications generates huge volumes of data. Those volumes must be processed rapidly, driving the need for nonvolatile memory express (NVMe) technology. Colder datasets can be automatically tiered to the cost-effective capacity of hard disk drives (HDD) or migrated to the cloud.
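The idea behind automated tiering can be sketched as a simple policy: the longer a dataset sits idle, the cheaper the tier it belongs on. The thresholds and tier names below are illustrative assumptions, not Hitachi defaults or product code.

```python
from datetime import datetime, timedelta

# Hypothetical tiering policy: datasets idle past a threshold are
# demoted from NVMe flash to HDD, and past a longer threshold to cloud.
HDD_THRESHOLD = timedelta(days=30)     # illustrative value
CLOUD_THRESHOLD = timedelta(days=180)  # illustrative value

def choose_tier(last_access: datetime, now: datetime) -> str:
    """Return the target tier for a dataset based on idle time."""
    idle = now - last_access
    if idle >= CLOUD_THRESHOLD:
        return "cloud"
    if idle >= HDD_THRESHOLD:
        return "hdd"
    return "nvme"

now = datetime(2024, 1, 1)
print(choose_tier(datetime(2023, 12, 25), now))  # nvme
```

In a real array, the thresholds would come from policy, and the decision would trigger a background data movement rather than a return value.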
VSP 5000 series future-proofs your organization, offering a mixed NVMe solution with storage class memory (SCM) flash or solid-state disk (SSD), alongside serial-attached SCSI (SAS) SSD and HDD. This gives you an environment that can scale up in capacity and scale out for performance: a composable data platform for all your workloads.
Extend the advanced capabilities of the VSP 5000 series to all of your existing data center storage assets through virtualization, which Hitachi pioneered. Storage virtualization gives you a single management control point for multiple storage systems, which increases administrative efficiency. All data services available with the VSP 5000 series, such as data reduction, automation and metroclustering, are extended to virtualized systems to give them more value and an extended life cycle.
There are two models in the VSP 5000 series. The VSP 5200 is a scale-up enterprise storage platform with a dual-controller block supporting open and mainframe workloads. You then have a nondisruptive upgrade path to the VSP 5600, which starts with a single quad-controller block and scales out to three blocks as you grow. The VSP 5000 series starts as small as 3.8 TB and scales up to 69 PB of raw capacity and 33 million IOPS of performance, allowing massive consolidation of workloads for cost savings. With response times as low as 39 microseconds, your business partners will be delighted by how fast their applications respond.
Our patented Hitachi Accelerated Fabric allows Hitachi Storage Virtualization Operating System RF (SVOS RF) to offload I/O traffic between blocks. It uses an architecture that provides immediate processing power without wait time or interruption to maximize I/O throughput. As a result, your applications suffer no latency increases since access to data is accelerated between nodes even when you scale your system out.
You can place your business data within our solutions, relying on 59 years of Hitachi engineering experience to deliver reliability. The VSP 5000 series delivers industry-leading 100% data availability and offers a superior range of continuity options, all backed by the industry's first and most comprehensive 100% data availability guarantee.
Migrate data from older systems nondisruptively so operations can continue, nonstop. The new scale-out architecture protects against local faults and performance issues with our active-active controller architecture. With global-active device (GAD), we enable full metroclustering between data centers that can be up to 500 km apart. Replicate to a third data center using Hitachi Universal Replicator (HUR) software, which offers asynchronous replication, to make use of all your investments.
Your system is proactively monitored in the cloud 24/7 by Hitachi Remote Ops to predict and prevent downtime. We collect over 40 trillion data points daily to deliver on our reliability promise. Over 90% of problems are addressed before you are impacted, minimizing unnecessary troubleshooting.
With the VSP 5000 series, you gain rock-steady hardware, but what about your application's continuity and recovery? This series is supported by Hitachi Ops Center Protector, which provides application-aware snapshots, copy data management and instant recovery. You can recover from a data disaster in seconds, not hours!
Security compliance and cyber resiliency are essential, and with the VSP 5000 series Hitachi has taken steps to improve the security of how data is stored and administered. We have greatly reduced the risk of data falling into unauthorized hands with FIPS 140-2 encryption on our media. Our erasure services align with NIST SP 800-88r2 and ISO/IEC 27040:2014. Finally, we have hardened system access to safeguard against unauthorized access and hacking: the VSP 5000 series uses TLS 1.3 for secure communications to stop improper access by other systems on the fabric.
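The TLS 1.3 hardening principle described above can be illustrated with standard tooling: a client that refuses any protocol version older than TLS 1.3. This is a generic sketch using Python's standard `ssl` module, not Hitachi management code.

```python
import ssl

# Illustrative hardening sketch: build a client-side TLS context that
# accepts TLS 1.3 only, rejecting downgrade to older protocol versions.
def make_tls13_context() -> ssl.SSLContext:
    # create_default_context also enables certificate verification.
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    ctx.maximum_version = ssl.TLSVersion.TLSv1_3
    return ctx

ctx = make_tls13_context()
print(ctx.minimum_version == ssl.TLSVersion.TLSv1_3)  # True
```

Pinning both the minimum and maximum version means a peer that only speaks TLS 1.2 or older simply cannot complete the handshake.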
Simplifying the management, provisioning and performance of data platforms can become a demanding, never-ending cycle. This is where AI operations come in: the VSP 5000 series can take control of repetitive tasks to reduce and even eliminate the need for human intervention. You are freed to focus on innovation and tactical business efforts.
AI constantly monitors the environment to ensure that resources are performing to service-level agreements (SLAs). If issues are detected, the AI can predict and prescribe changes to improve operational efficiency. AI can also simplify complex decision-making, such as predicting when additional storage will be needed or how quality of service (QoS) should be configured.
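The monitor-then-predict pattern can be sketched in a few lines: check recent response-time samples against an SLA, and use a crude trend check to warn before the limit is actually breached. The thresholds and the trend heuristic are hypothetical stand-ins for a real analytics model.

```python
from statistics import mean

# Hypothetical SLA check over a window of response-time samples (ms).
def sla_status(samples_ms: list[float], sla_ms: float) -> str:
    avg = mean(samples_ms)
    if avg > sla_ms:
        return "breach"
    # Naive trend: is the second half of the window slower than the
    # first, while the average is already near the SLA limit?
    half = len(samples_ms) // 2
    degrading = half and mean(samples_ms[half:]) > mean(samples_ms[:half])
    if degrading and avg > 0.8 * sla_ms:
        return "warning"  # predict a breach before it happens
    return "ok"

print(sla_status([1.0, 1.2, 1.1, 1.3], sla_ms=2.0))  # ok
```

A production system would replace the two-halves comparison with proper forecasting, but the control loop (observe, compare to SLA, warn early) is the same.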
Automation is a critical aspect of all AI operations. Automation software handles configuration, provisioning and common management tasks instead of humans. Automation is often leveraged at the start of a deployment to ensure resources are set up based on best practices and no steps are missed that could result in data loss. It can also be used in concert with AI to automate infrastructure updates.
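The deployment-time automation described above boils down to declarative, idempotent provisioning: desired resources are described as data, validated against best-practice rules, and only the missing ones are created. The volume fields, pool names, and validation rules below are hypothetical, and a real implementation would call the array's management API where noted.

```python
# Minimal declarative-provisioning sketch; names and rules are illustrative.
DESIRED = [
    {"name": "db-log", "size_gb": 512, "pool": "nvme-pool"},
    {"name": "db-data", "size_gb": 4096, "pool": "sas-pool"},
]

def validate(volume: dict) -> None:
    # Example best-practice checks; real rules would come from policy.
    assert volume["size_gb"] > 0, "size must be positive"
    assert volume["pool"], "every volume needs a pool"

def provision(desired: list[dict], existing: set[str]) -> list[str]:
    """Create whatever is described but missing; skip what exists."""
    created = []
    for vol in desired:
        validate(vol)
        if vol["name"] in existing:  # idempotent: re-running is safe
            continue
        # A real implementation would call the array's management API here.
        created.append(vol["name"])
    return created

print(provision(DESIRED, existing={"db-log"}))  # ['db-data']
```

Because the run is idempotent, the same definition can be replayed after every infrastructure update without risk of duplicating or clobbering resources.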