Hitachi Data Systems’ Hu Yoshida Predicts the Top Ten Storage Trends of 2011

Hong Kong — January 17, 2011 — Hu Yoshida, Chief Technology Officer, Hitachi Data Systems – a wholly owned subsidiary of Hitachi, Ltd. (NYSE: HIT) – recently announced his top ten storage trends for 2011. The predictions come in the wake of a recovering economy in which technology will be closely tied to business goals.

Recognizing the importance of a transformed data center, Hitachi Data Systems earlier in the year launched the industry’s first 3-dimensional scaling platform, enabling organizations to scale up, out and deep for unprecedented levels of agility and cost savings in their virtualized data centers. The launch reinforced Hitachi Data Systems’ commitment to transform data centers into dynamic information centers where access to blocks, files and content is seamless and resides in a fluid and virtualized environment.

Virtualization, dynamic provisioning and cloud feature prominently in Hu Yoshida’s predictions, reaffirming that these technologies will pave the way for an agile data center that enables companies to quickly and efficiently adapt to changes and take advantage of emerging business opportunities.

Hu Yoshida’s predictions are an annual feature in which he studies the industry closely and shares his thoughts on the key trends to watch in the coming year. The full text of the predictions is appended below.

Hu Yoshida’s Storage Predictions for 2011 (Full Text)

1. Storage Virtualization and Dynamic Provisioning acceptance will accelerate as they become the foundation for cloud and for dynamic, high-availability data centers. Storage virtualization, the virtualization of external storage arrays, will provide the ability to non-disruptively migrate from one array to another and eliminate the costly downtime required to refresh storage systems. Dynamic Provisioning enables storage to be provisioned in a matter of minutes, simplifies performance tuning with automatic wide striping, and provides on-demand capacity for an agile storage infrastructure.

2. Closer integration of server and storage virtualization will be required to increase the adoption of data center virtualization. Server virtualization has matured beyond the cost-reduction phase of consolidating print, file, test, and development servers and is now poised to support tier 1 application servers. Moving forward, support for tier 1 applications will require integration with enterprise storage virtualization arrays that can offload software I/O bottlenecks such as SCSI reserves and scale to meet the high-availability and QoS demands of enterprise tier 1 applications.

3. Virtual tiering will be adopted for data life cycle management. Currently, virtual tiering can assign a volume to a pool of storage containing multiple tiers with different performance and cost characteristics, and has the intelligence to move parts of that volume to different tiers based on access counts. The user does not need to classify a volume and assign it to a tier of storage, nor move the volume up and down the tiers based on activity. Virtual Tiering, or Dynamic Tiering, does this automatically, without classifying the volume or moving the entire volume from tier to tier (a simplified page-level sketch appears after these predictions).


4. The time is right for SSD acceptance for higher performance and lower cost in a virtual tiered configuration. Since 80% or more of a volume is usually not active, only a small number of SSDs needs to sit in tier 1 to serve the active parts of a volume, while the majority of the volume can reside on lower-cost SAS or SATA drives. A multi-tier storage pool that combines a small amount of SSD with a large amount of lower-cost SAS and SATA drives could cost less than a single pool of SAS drives with the same total capacity and provide 4 to 5 times the IOPS (a back-of-the-envelope illustration appears after these predictions).

5. Serial Attached SCSI (SAS) will be adopted for increased availability and performance in enterprise storage systems. Unlike the Fibre Channel (FC) loops used to support FC drives on older storage systems, SAS is a point-to-point protocol. FC loops require each drive on the loop to arbitrate for access, which causes contention, and if a faster drive such as an SSD is connected to the loop, it can drown out the loop so that the other drives cannot get access. Since SAS drives run at 6 Gbps and most FC loops at 4 Gbps, SAS has a performance advantage from its faster speed and point-to-point access. Because SAS is point-to-point, it is also easier to identify a drive failure; FC loops require a query of each disk on the loop until the bad drive is found. SAS is also compatible with SATA. The only difference has to do with the ports: SAS drives are dual ported while SATA drives are single ported. In Hitachi storage arrays, SAS expanders are used as switches for the point-to-point connection. While IBM uses SAS drives in its DS8800, it connects them to its controllers through FC. Drive vendors are quickly converting to SAS for its lower cost, performance and reliability.

6. Small Form Factor (SFF) drives will become prevalent for their power and cooling efficiencies. SFF drives are 2.5-inch drives that consume about 6 to 8 watts of power, compared to Large Form Factor (LFF) 3.5-inch drives, which consume about 12 to 15 watts. This yields a dramatic reduction in power and cooling requirements, with an additional saving in floor space (a simple power comparison appears after these predictions). Several vendors package 24 SFF disks in a drawer that is 2U high and 33.5 inches wide. Hitachi changed the packaging on the AMS and the Virtual Storage Platform (VSP) so that it is even denser. Instead of a drawer with all the drives mounted in the front, the AMS has a dense drawer with 48 drives that is 3U high and 24 inches wide. The drawer pulls out for servicing with all 48 drives spinning. On the VSP, we have a disk module with 80 x 3.5-inch drives or 128 x 2.5-inch drives that is 13U high and 24 inches wide. The disks are serviced from the front or from the back.

7. Cloud will be accepted as a valid infrastructure model. Although some hype will still be associated with “cloud,” there will be enough proof points to validate the concept. On-ramps to the cloud will facilitate this acceptance, as will management tools and orchestration layers that provide the end-to-end transparency needed to ensure service level objectives and chargeback.

8. Convergence in the data center will begin to take off. The convergence of server, storage and network infrastructure will make it simpler and faster to deploy applications. The use of server, hypervisor, storage, and network virtualization will be key to providing an open platform to ensure investment protection and customer choice.

9. Applications will require increased transparency into a storage virtualization or cloud infrastructure. Without this transparency, application users will not be able to tell whether their service level objectives are being met, determine chargeback, plan their utilization, or assess the health of their infrastructure. Management software should provide a business unit or application dashboard in which an SLO is defined and persisted across configuration changes. The dashboard should show the status of the SLO; the actual allocation in terms of disks, RAID types, and storage ports; the health of the array groups and host links; and the utilization of the allocated capacity over a selectable time frame (a sketch of such an SLO record appears after these predictions).


10. Remote managed services will be provided to offload the lower-level monitoring, alerting, reporting, and management tasks that keep IT operations from moving to new technologies. For the past 10 years, the mandate for IT has been to do more with less, and operations staffs are overworked just maintaining more of the same. In order to transform the data center, IT staff must find the time to train, plan and execute. A group of IT experts operating out of a Service Operations Center with remote management tools can leverage their skills across multiple installations at a very reasonable cost and drive higher and faster returns on asset investments.
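
Illustrative Sketches for Selected Predictions

To make the page-level approach in prediction 3 concrete, below is a minimal sketch, assuming a simplified model in which a volume is divided into fixed-size pages, each carrying an access counter, and a periodic job re-places the hottest pages on the fastest tier. The tier names, capacities and promotion policy are illustrative only and do not describe any specific Hitachi implementation.

```python
# A minimal sketch of page-level dynamic tiering (illustrative assumptions,
# not a description of Hitachi Dynamic Tiering internals): a volume is split
# into fixed-size pages, the I/O path bumps an access counter per page, and a
# periodic rebalance places the hottest pages on the fastest tier.
from dataclasses import dataclass, field

@dataclass
class Tier:
    name: str
    capacity_pages: int              # how many pages this tier can hold
    pages: set = field(default_factory=set)

@dataclass
class Page:
    page_id: int
    access_count: int = 0            # incremented by the I/O path

def rebalance(pages, tiers):
    """Re-place pages so the hottest land on the fastest tier.
    Tiers are assumed to be ordered fastest-first (e.g. SSD, SAS, SATA)."""
    ranked = sorted(pages, key=lambda p: p.access_count, reverse=True)
    for tier in tiers:
        tier.pages.clear()
    i = 0
    for tier in tiers:
        take = ranked[i:i + tier.capacity_pages]
        tier.pages.update(p.page_id for p in take)
        i += len(take)
    for p in pages:                  # reset counters for the next cycle
        p.access_count = 0

# Example: a 100-page volume over 10 pages of SSD, 30 of SAS, 60 of SATA.
pages = [Page(i, access_count=(1000 if i < 8 else i % 5)) for i in range(100)]
tiers = [Tier("SSD", 10), Tier("SAS", 30), Tier("SATA", 60)]
rebalance(pages, tiers)
print(sorted(tiers[0].pages))        # the 8 hot pages (plus the next-hottest) land on SSD
```

Note that only the hot pages move; the volume itself never has to be reclassified or migrated wholesale, which is the point of prediction 3.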
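
Prediction 4 rests on simple arithmetic. The following back-of-the-envelope comparison uses assumed, circa-2011 ballpark prices and per-drive IOPS figures (they are not Hitachi numbers) to show how a small SSD tier combined with SAS and SATA can undercut an all-SAS pool of similar capacity while delivering several times the IOPS, provided the active working set fits on the SSD tier.

```python
# Back-of-the-envelope comparison of an all-SAS pool versus a mixed
# SSD + SAS + SATA pool of roughly the same capacity. All capacities,
# IOPS and prices are assumed for illustration; the shape of the
# trade-off, not the exact numbers, is what matters.
drives = {
    "SSD":  {"cap_gb": 200,  "iops": 20000, "cost_usd": 2000},
    "SAS":  {"cap_gb": 600,  "iops": 180,   "cost_usd": 400},
    "SATA": {"cap_gb": 2000, "iops": 80,    "cost_usd": 250},
}

def pool(counts):
    """Total capacity, aggregate IOPS and cost for a mix of drive counts."""
    cap  = sum(n * drives[t]["cap_gb"]   for t, n in counts.items())
    iops = sum(n * drives[t]["iops"]     for t, n in counts.items())
    cost = sum(n * drives[t]["cost_usd"] for t, n in counts.items())
    return cap, iops, cost

# ~60 TB either way: 100 SAS drives, or a small SSD tier backed by SAS/SATA.
# The mixed pool's IOPS only helps if the ~20% active data fits on the SSDs.
configs = {
    "all-SAS": {"SAS": 100},
    "mixed":   {"SSD": 4, "SAS": 20, "SATA": 24},
}
for name, counts in configs.items():
    cap, iops, cost = pool(counts)
    print(f"{name:8s} {cap / 1000:6.1f} TB  {iops:6d} IOPS  ${cost:,}")
```

With these assumed numbers the mixed pool comes out around $22,000 versus $40,000 for all-SAS, with roughly 4.7 times the aggregate IOPS, consistent with the 4-to-5-times claim in the prediction.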
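
The power figures quoted in prediction 6 (6 to 8 watts per SFF drive versus 12 to 15 watts per LFF drive) translate directly into energy and cost savings. The short calculation below uses the midpoints of those ranges; the drive count and electricity rate are assumptions for illustration.

```python
# Rough annual power comparison between 2.5-inch SFF and 3.5-inch LFF drives,
# using the wattage ranges quoted in prediction 6. Drive count and the
# electricity rate are illustrative assumptions.
drive_count = 480                   # e.g. ten dense trays of 48 drives each
sff_watts, lff_watts = 7.0, 13.5    # midpoints of 6-8 W and 12-15 W
usd_per_kwh = 0.12                  # assumed utility rate

def annual(watts_per_drive):
    kwh = drive_count * watts_per_drive * 24 * 365 / 1000
    return kwh, kwh * usd_per_kwh

for label, watts in (("SFF 2.5-inch", sff_watts), ("LFF 3.5-inch", lff_watts)):
    kwh, usd = annual(watts)
    print(f"{label}: {kwh:8.0f} kWh/year  ~${usd:,.0f}/year (before cooling)")
```

Roughly halving the per-drive wattage roughly halves the power draw before cooling is even counted, which is the efficiency argument behind SFF adoption.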
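
Prediction 9 calls for dashboards in which an SLO is defined once and persisted across configuration changes. The sketch below shows one hypothetical shape such an SLO record and status check could take; the field names and thresholds are invented for illustration and do not reflect any particular Hitachi Data Systems management product.

```python
# Hypothetical SLO record and dashboard-style status check, illustrating the
# kind of business-unit/application view described in prediction 9. Field
# names and thresholds are invented; no specific management product is implied.
from dataclasses import dataclass

@dataclass
class ServiceLevelObjective:
    application: str
    max_response_ms: float           # latency target
    min_availability_pct: float      # e.g. 99.99

@dataclass
class ObservedMetrics:
    response_ms: float
    availability_pct: float
    allocated_gb: float
    used_gb: float

def slo_status(slo, observed):
    """One dashboard line: is the SLO met, and how full is the allocation?"""
    met = (observed.response_ms <= slo.max_response_ms
           and observed.availability_pct >= slo.min_availability_pct)
    used_pct = 100 * observed.used_gb / observed.allocated_gb
    return (f"{slo.application}: {'MET' if met else 'AT RISK'} | "
            f"{observed.response_ms:.1f} ms | {observed.availability_pct:.3f}% up | "
            f"{used_pct:.0f}% of allocation used")

print(slo_status(ServiceLevelObjective("billing", 10.0, 99.99),
                 ObservedMetrics(7.2, 99.995, 2048.0, 1430.0)))
```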

About Hitachi Data Systems
Hitachi Data Systems provides best-in-class information technologies, services and solutions that deliver compelling customer ROI, unmatched return on assets (ROA) and demonstrable business impact. With a vision that IT must be virtualized, automated, cloud-ready and sustainable, Hitachi Data Systems offers solutions that improve IT costs and agility. With more than 4,200 employees worldwide, Hitachi Data Systems does business in more than 100 countries and regions. Hitachi Data Systems products, services and solutions are trusted by the world’s leading enterprises, including more than 70 percent of the Fortune 100 and more than 80 percent of the Fortune Global 100. Hitachi Data Systems believes that data drives our world – and information is the new currency. To learn more, visit: http://www.hds.com.

About Hitachi, Ltd.
Hitachi, Ltd. (NYSE: HIT / TSE: 6501), headquartered in Tokyo, Japan, is a leading global electronics company with approximately 360,000 employees worldwide. Fiscal 2009 (ended March 31, 2010) consolidated revenues totaled 8,968 billion yen ($96.4 billion). Hitachi will focus more than ever on the Social Innovation Business, which includes information and telecommunication systems, power systems, environmental, industrial and transportation systems, and social and urban systems, as well as the sophisticated materials and key devices that support them. For more information on Hitachi, please visit the company's website at http://www.hitachi.com.
