It seems like everything is software-defined these days. You have software-defined networks, software-defined storage, and software-defined data centers.

When did hardware get such a bad rap? After all, no matter what magic your software can do, it still requires hardware in order to run. I realize that what is really meant by software-defined is that you can just use COTS (commodity off-the-shelf) hardware and put the software your developers have agonized over on top of it and BAM, you have something incredible. But let’s cut through the hype and dig into it by looking at a few examples.

The biggest newsmakers in the software-defined storage space fit into two categories: converged infrastructure, or CI (like FlexPod from NetApp and Cisco, VCE Vblock, or HP Converged Systems), and hyper-converged infrastructure, or HCI (like Nutanix, VMware VSAN, SimpliVity, and most recently Cisco and EMC).

What’s the main difference between CI and HCI? It boils down to hardware. Let’s compare FlexPod with VMware’s VSAN. A FlexPod system is based on a reference architecture: you choose Cisco Nexus switches for the networking, Cisco UCS blades or rack-mount servers for compute, and then add NetApp AFF or FAS (and in some cases E-Series, and maybe in the future SolidFire, too) as the storage layer. Put it all in a rack, follow Cisco’s validated design deployment guide, and you have a validated, tested, and supported converged infrastructure system. Management is handled by the hypervisor’s tools, Cisco’s UCS Director, or any number of third-party orchestration and automation tools. The main benefits here are the flexibility to size the hardware to your needs and granular scalability: you can scale compute, network, or storage independently.
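To picture that scaling model, here’s a tiny Python sketch. It’s purely illustrative; the class name, component counts, and methods are my own placeholders, not any vendor’s tooling. The point it shows is simply that in CI, compute, network, and storage are separate pools, so each can grow on its own.

```python
# Illustrative sketch of the CI scaling model (not a vendor tool).
# All names and starting counts below are assumptions for the example.
from dataclasses import dataclass

@dataclass
class ConvergedInfrastructure:
    compute_blades: int = 4       # e.g. UCS blades for compute
    network_switches: int = 2     # e.g. a pair of Nexus switches
    storage_shelves: int = 1      # e.g. NetApp storage shelves

    def scale_storage(self, shelves: int) -> None:
        # Storage grows without touching compute or network.
        self.storage_shelves += shelves

    def scale_compute(self, blades: int) -> None:
        # Compute grows without touching storage or network.
        self.compute_blades += blades

flexpod_like = ConvergedInfrastructure()
flexpod_like.scale_storage(2)     # need more capacity? add shelves only
print(flexpod_like)
```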

Now, let’s contrast the CI approach with VMware’s VSAN. To get a VSAN solution, you purchase COTS servers that are VSAN-ready, meaning they are built from a qualified list of components that VMware has validated and tested. Once you have a minimum of three of these servers plus the necessary vSphere and VSAN licenses, you build your VSAN cluster. Networking is handled in software: the built-in distributed switch, a licensed Cisco Nexus 1000v software switch, or VMware’s NSX SDN stack. Each server is populated with either all SSDs or a mixture of SSDs and HDDs, and you build up your VSAN cluster from that local capacity. If you want to grow, you simply add nodes to the cluster, which makes this model simpler and, arguably, cheaper. The “arguably” is because once you add up the hardware and the software licenses, the total often lands in the same neighborhood as the storage array in the CI approach.
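Here’s the HCI counterpart to the earlier sketch, again with assumed names and per-node numbers rather than real product specs. What it illustrates is that every resource lives inside a node, the cluster has a minimum node count, and the only way to grow anything is to add whole nodes.

```python
# Illustrative sketch of the HCI scaling model; per-node specs are made up.
from dataclasses import dataclass

@dataclass
class HCINode:
    cpu_cores: int = 24
    ssd_tb: float = 7.6   # assumed per-node flash capacity

class HCICluster:
    MIN_NODES = 3         # e.g. VSAN's minimum of three servers

    def __init__(self, nodes: int = 3):
        if nodes < self.MIN_NODES:
            raise ValueError(f"a cluster needs at least {self.MIN_NODES} nodes")
        self.nodes = [HCINode() for _ in range(nodes)]

    def add_node(self) -> None:
        # Growing storage also grows compute (and licensing), needed or not.
        self.nodes.append(HCINode())

    @property
    def raw_capacity_tb(self) -> float:
        return sum(n.ssd_tb for n in self.nodes)

cluster = HCICluster()
cluster.add_node()        # need more capacity? add a whole node
print(len(cluster.nodes), "nodes,", cluster.raw_capacity_tb, "TB raw")
```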

Perhaps a real-world perspective may help. Let’s suppose you have a typical data center that is largely virtualized but still has some applications that require UNIX or some other application that can’t be virtualized (like SQL Server clusters). Let’s also assume that most of your unstructured data is served up to your users using the Microsoft SMB protocol. You’ve also invested pretty heavily over the years into Fibre Channel SAN switching. Your core network is due for an upgrade soon. Your current VMware clusters are running ESX 5.1 and the server technology is aging. If you decide to go the HCI route with VSAN, this won’t help your UNIX servers; you could set up some VMs with Windows Server to serve the SMB needs, but that means more licenses and support. Since your core network is due for a refresh, maybe you could simply skip the purchases there since your HCI has networking built in. However, there still needs to be some physical switches/routers for that software-defined networking to actually work outside the cluster, so that won’t work. You could potentially get out of the Fibre Channel business and save some money there, but that UNIX machine needs SAN. Maybe you could just connect your UNIX server to the VSAN? Well, no external hosts work in a VSAN architecture, or any HCI solution for the most part (though Nutanix just announced a Tech Preview to allow SMB/NFS to be served from their HCI platform).
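If it helps to see that gap analysis laid out, here’s a rough sketch that encodes the constraints from this scenario. The requirement names and the rules are my own shorthand for the points above, not output from any real compatibility or sizing tool.

```python
# Shorthand for the HCI gaps described in this scenario -- illustrative only.
HCI_GAPS = {
    "bare_metal_unix": "HCI datastores generally aren't presented to external hosts",
    "native_smb_file_serving": "needs extra Windows Server VMs, licenses, and support",
    "fibre_channel_san": "there is no FC fabric in a VSAN-style cluster",
}

def flag_hci_gaps(requirements):
    """Return the requirements from this scenario that HCI alone won't cover."""
    return [f"{req}: {HCI_GAPS[req]}" for req in requirements if req in HCI_GAPS]

data_center_needs = {"virtualized_x86_farm", "bare_metal_unix",
                     "native_smb_file_serving", "fibre_channel_san"}
for gap in sorted(flag_hci_gaps(data_center_needs)):
    print("gap ->", gap)
```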

In our data center example, with a FlexPod you can connect your UNIX system to the NetApp storage inside the FlexPod without any real issues. You could also skip the additional Windows Server instances for SMB, because the NetApp array can serve NFS, SMB, FCoE, FC, and iSCSI, all from the same system. Your core network could be refreshed and integrated directly with the physical Nexus switches in your FlexPod. You could keep your current SAN switches or phase them out over time if you desire, since the FlexPod supports SAN as well. And if you max out your storage capacity, you simply scale the storage piece; there’s no need to add compute or network just because storage grew. At the end of the day, which approach is simpler and more cost effective? You be the judge!
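And since the closing question is about cost, here’s a back-of-the-envelope sketch of how expansion pricing can play out in each model. Every price and capacity in it is a placeholder I made up for illustration; swap in real quotes for your environment before judging.

```python
# Back-of-the-envelope expansion cost comparison with placeholder numbers.
import math

SHELF_TB, SHELF_COST = 50.0, 40_000   # assumed CI storage expansion shelf
NODE_TB, NODE_COST = 7.6, 25_000      # assumed HCI node, hardware plus licenses

def ci_expansion_cost(extra_tb):
    # CI scales storage alone: buy only the shelves you need.
    return math.ceil(extra_tb / SHELF_TB) * SHELF_COST

def hci_expansion_cost(extra_tb):
    # HCI scales in whole nodes: compute and licenses come along for the ride.
    return math.ceil(extra_tb / NODE_TB) * NODE_COST

for tb in (20, 100, 300):
    print(f"+{tb} TB -> CI: ${ci_expansion_cost(tb):,}  HCI: ${hci_expansion_cost(tb):,}")
```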

Visit siriuscom.com to learn more about Sirius Computer Solutions, or contact us for more information.