Host Clusters Are Not the Whole Answer
Most people are familiar with the concept of a Hyper-V cluster: a number of hosts, attached to some shared storage, running highly available virtual machines. If a host fails unexpectedly, the virtual machines on that host stop running. The cluster automatically fails over those virtual machines to other hosts and starts them up. This is great because it keeps service downtime to a few minutes at most. But for some businesses, even a few seconds of downtime is unacceptable.
Often forgotten is the fact that virtual machines run operating systems (Windows or Linux) too. Guest operating systems can crash, and they need reboots after maintenance, both of which also cause service downtime. The same is true of the services running inside those VMs. No clustered hypervisor can do anything about that, so for truly mission-critical systems we need to extend high availability (HA) into the guest operating system and/or the application:
- Load balancing: Some applications can be load balanced across multiple virtual machines with no shared storage.
- Guest clustering: Some services require shared storage and the active/passive failover functionality of a cluster. You can create a guest cluster with up to 64 virtual machines.
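As a sketch of the guest clustering option: from inside the VMs, you build the cluster with the same Failover Clustering cmdlets used on physical servers. The VM names, cluster name, and IP address below are hypothetical examples.

```powershell
# Run inside each guest VM: install the Failover Clustering feature
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

# Then, from one node: validate and create a two-node guest cluster
# (node names, cluster name, and address are example values)
Test-Cluster -Node "SQLVM1", "SQLVM2"
New-Cluster -Name "GuestCluster1" -Node "SQLVM1", "SQLVM2" -StaticAddress 192.168.1.50
```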
It makes sense to keep these VMs on different hosts – you don’t want all of your load-balanced or clustered VMs running on the same host. We can use Failover Clustering anti-affinity to keep “paired” virtual machines on different hosts. System Center Virtual Machine Manager makes this really easy with the Availability Sets feature (this shows up as a checkbox for the service tier in Service Templates).
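Without VMM, you can get the same placement behavior directly on the host cluster by giving the paired VMs’ cluster groups a common anti-affinity class name. The VM and class names here are assumptions for illustration.

```powershell
# Tag both guest cluster VMs with the same anti-affinity class name so the
# host cluster tries to keep them on different hosts during placement/failover
$class = New-Object System.Collections.Specialized.StringCollection
$class.Add("GuestClusterSQL") | Out-Null
(Get-ClusterGroup -Name "SQLVM1").AntiAffinityClassNames = $class
(Get-ClusterGroup -Name "SQLVM2").AntiAffinityClassNames = $class
```

Note that anti-affinity is a preference, not a hard rule: in a constrained cluster the VMs may still land on the same host.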
Networking the Guest Cluster
You will require the typical networks used by a traditional physical cluster:
- Guest access: One or more networks are required depending on the service provided.
- Heartbeat: At least one (and maybe two) “private” heartbeat networks. Do not use a private virtual switch because it cannot span hosts.
- Storage: You might require networks to connect to the guest cluster’s shared storage (SMB 3.0 or iSCSI).
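For example, a dedicated heartbeat NIC can be added to each guest cluster node from the host, connected to an external virtual switch that spans the hosts. The VM and switch names are assumptions.

```powershell
# Add a dedicated heartbeat network adapter to each guest cluster node.
# The switch must be an external virtual switch present on every host,
# not a private switch (a private switch cannot span hosts).
Add-VMNetworkAdapter -VMName "SQLVM1" -SwitchName "ExternalSwitch" -Name "Heartbeat"
Add-VMNetworkAdapter -VMName "SQLVM2" -SwitchName "ExternalSwitch" -Name "Heartbeat"
```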
Shared Storage in Windows Server 2012 Hyper-V
The guest cluster will require some form of shared storage: a witness disk (for clusters with an even number of VMs) and at least one disk for the shared data. In Windows Server 2012 Hyper-V the options are as follows.
- SMB 3.0: This is Microsoft’s preferred storage protocol. You can use WS2012 or later file shares as the guest cluster’s shared storage. Virtual machines cannot use Receive Side Scaling, so SMB Multichannel cannot take full advantage of large physical network connections; it can, however, use multiple virtual network cards.
- Virtual Fibre Channel: You can virtualize NPIV-capable host bus adapters (HBAs) in the host and add up to four virtual HBAs to a virtual machine. The SAN manufacturer’s MPIO software can be installed and configured in the guest OS. This allows virtual machines to leverage an investment in an expensive Fibre Channel SAN.
- iSCSI: With some engineering on the host and physical network, you can add virtual NICs to virtual machines to be used exclusively for communicating with an iSCSI SAN. You must ensure that any design complies with the SAN manufacturer’s support statements.
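To illustrate the engineering involved in the Virtual Fibre Channel option: a virtual SAN is first defined on the host against the physical NPIV-capable HBA ports, and then a virtual HBA referencing it is added to the (powered-off) VM. The SAN and VM names are hypothetical.

```powershell
# On the host: create a virtual SAN mapped to the physical HBA ports,
# then add a virtual Fibre Channel HBA to the guest cluster VM
$hba = Get-InitiatorPort
New-VMSan -Name "ProductionSAN" -HostBusAdapter $hba
Add-VMFibreChannelHba -VMName "SQLVM1" -SanName "ProductionSAN"
```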
Each of these designs has positives and negatives. However, there is a major concern: The virtualization layer (guest OS) is now directly accessing the physical layer (storage network and LUNs). This complicates designs, requires manual engineering to implement a guest cluster (contradicting a cloud’s requirement for self-service), and raises security concerns in a multi-tenant cloud. As a result, many hosting companies will refuse to offer guest clusters as a deployment option.
Note that the shared storage must be accessible by the virtual machine from every host in the host cluster. That is because Hyper-V guest clusters support Live Migration with all of the above storage solutions.
Guest Clusters in Windows Server 2012 R2 Hyper-V
Windows Server 2012 R2 Hyper-V will continue to support the aforementioned storage options for a guest cluster, but a new 100% virtualized option becomes possible: WS2012 R2 Hyper-V (and therefore Hyper-V Server 2012 R2) introduces Shared VHDX. This means you can deploy a guest cluster in which every VM in the cluster connects to two (witness + data) or more shared VHDX files. These VHDX files reside on the same SMB 3.0 shares or CSVs as the virtual machines’ own files. That means no physical engineering is required to deploy a guest cluster, making guest clusters a realistic option in a true self-service cloud – great news for hosting companies and their customers.
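In practice, attaching a shared VHDX in WS2012 R2 is one extra switch on the cmdlet that attaches any virtual hard disk. The paths, sizes, and VM names below are assumptions for illustration.

```powershell
# Create the shared data disk on a CSV, then attach it to both guest
# cluster nodes with sharing (persistent reservation) support enabled
New-VHD -Path "C:\ClusterStorage\Volume1\Shared\Data.vhdx" -SizeBytes 100GB -Dynamic
Add-VMHardDiskDrive -VMName "SQLVM1" -Path "C:\ClusterStorage\Volume1\Shared\Data.vhdx" -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "SQLVM2" -Path "C:\ClusterStorage\Volume1\Shared\Data.vhdx" -SupportPersistentReservations
```

Inside the guests, the disk then appears as ordinary shared storage that Failover Clustering can bring online, with no visibility of the underlying physical storage.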
System Center 2012 R2 – Virtual Machine Manager will support deploying guest clusters with shared VHDX files. With this support, tenants in a cloud will be able to deploy clusters more easily than ever before – without any IT involvement.
In summary, implementing HA at the host level alone does not create highly available services. That requires designing your applications for load balancing or creating guest clusters. Virtual machines can be spread across your hosts to form guest clusters, making the services themselves highly available. You can use traditional block storage or file storage in Windows Server 2012 Hyper-V, but this complicates storage engineering. Windows Server 2012 R2 Hyper-V adds Shared VHDX functionality, which will simplify guest cluster deployment and make it very cloud friendly.