I am supporting a large customer transformation to virtual, thin and tiered storage. The projections that we made months ago about improving utilization have come true; we are forecasting a net reclamation of about 1.5 PB of storage through these transformation investments. The older/existing arrays were simply virtualized, and the volumes were then re-presented as thinned volumes. The good news is they have 1.5 PB of reclaimed space. The bad news… they have 1.5 PB of capacity that is still too new to de-commission.

One point of view is to keep the capacity in-house and use it for organic growth over the next 15-18 months (estimated). This will create a CAPEX holiday for them, but since the price of disk is only about 1/5 the TCO of disk, this strategy does not immediately reduce the unit cost of disk. As some of these systems finally begin to be used/allocated (they will be 3-4 years old by then), they will not have the same environmental efficiencies as arrays purchased in the future, when the capacity is actually needed. These assets will be powered, cooled, and burning maintenance or warranty time while they sit, idly waiting to be used.
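The holding cost of idle capacity can be sketched with some simple arithmetic. All of the rates below are illustrative assumptions, not figures from this client's environment; the point is only that power, cooling and maintenance accrue on idle arrays even when no CAPEX is spent:

```python
# Rough holding-cost sketch for idle reclaimed capacity.
# All per-PB rates are hypothetical, for illustration only.
PB_IDLE = 1.5                        # reclaimed capacity sitting idle
POWER_COOLING_PER_PB_YR = 120_000    # assumed $/PB/year for power + cooling
MAINTENANCE_PER_PB_YR = 200_000      # assumed $/PB/year for maintenance/warranty
MONTHS_IDLE = 16                     # midpoint of the 15-18 month estimate

annual_opex = PB_IDLE * (POWER_COOLING_PER_PB_YR + MAINTENANCE_PER_PB_YR)
holding_cost = annual_opex * MONTHS_IDLE / 12
print(f"Estimated holding cost of the CAPEX holiday: ${holding_cost:,.0f}")
```

Under these assumed rates, the "free" capacity still carries a six-figure carrying cost before the first new byte is allocated, which is why the CAPEX holiday does not automatically lower the unit cost of disk.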

The other option is to write off the assets, sell them in the after-market space, and consume capacity and IT infrastructure that is “closer to the bone” in terms of efficiency and effectiveness. This client may also have another division that could use the assets, though there would then be a cost to transport and re-purpose the systems.

We typically see clients of this size with a range of older and newer storage assets, so any reclamation activity results in decommissioning the older systems first to save on power, cooling, maintenance and migration. This situation was unique, since they had made a very large purchase (from another vendor) less than 2 years ago, and then made the transition to Hitachi virtualized storage. You cannot go on a witch hunt to find those at fault. It is water under the bridge, but it does highlight the requirement to have a multi-year strategy for storage and other high-growth infrastructure. Many of our customers tend to believe that the price erosion of disk will satisfy budget constraints, but as mentioned earlier the price of the array is becoming a smaller fraction of the total cost of the array. Therefore, multi-year future plans are needed to budget and schedule key investments to achieve continuous improvement:

  • Storage virtualization, with unified management
  • Over-provisioning
  • Virtualization-assisted migration
  • Tiering, both in the frame and external to the virtualized subordinate arrays
  • SSD inclusion to the tiering mix
  • Compression, de-dupe
  • Unified block and file storage architectures

For the customer situation above, we are starting a TCO baseline along with 12- and 24-month TCO projections to determine which path will provide the lowest cost now and over the next few years. From the TCO models we can then calculate the present value of each option. Economic methods and some simple finance models will help set the right plan to move forward.
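The present-value comparison boils down to discounting each option's projected cash flows back to today. A minimal sketch, with an assumed cost of capital and entirely hypothetical cash flows (the real inputs would come from the TCO baseline):

```python
# Present-value comparison of the two options.
# Cash flows and discount rate are illustrative assumptions only.
def npv(cash_flows, rate):
    """Net present value of annual cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

rate = 0.08  # assumed cost of capital

# Option A: keep the reclaimed capacity in-house (CAPEX holiday,
# but ongoing power/cooling/maintenance on the idle arrays).
keep = [0, -400_000, -400_000]

# Option B: sell the assets after-market now, then buy newer,
# more efficient capacity when it is actually needed.
sell = [300_000, -150_000, -550_000]

print(f"Keep (PV): ${npv(keep, rate):,.0f}")
print(f"Sell (PV): ${npv(sell, rate):,.0f}")
```

Whichever option shows the less negative present value wins, and the ranking can flip with the discount rate or the resale price assumed, which is exactly why the modeling has to be done per-client rather than by rule of thumb.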

David Merrill is the Chief Economist at Hitachi Data Systems. This post was originally published at The Storage Economist.