WS2012 Hyper-V adds the ability to virtualise a fibre channel adapter. This synthetic fibre channel adapter allows a virtual machine to connect directly to a LUN in a fibre channel SAN.
It is one thing to make a virtual machine highly available. That protects it against hardware failure or host maintenance. But what about the operating system or software in the VM? What if they fail or require patching/upgrades? With a guest cluster, you can move the application workload to another VM. This requires connectivity to shared storage. Windows Server 2008 R2 clusters, for example, require SAS, fibre channel, or iSCSI attached shared storage. SAS is not an option for connecting VMs to shared storage. iSCSI customers were OK. But those who made the huge investment in fibre channel were left out in the cold, sometimes having to implement an iSCSI gateway to their FC storage. Wouldn't it be nice to allow them to use the FC HBAs in the host to create guest clusters?
Another example is where we want to provision really large LUNs to a VM. As I posted a little while ago, VHDX expands out to 64 TB, so really you would need LUNs beyond 64 TB to justify providing physical LUNs to a VM and limiting its mobility. But I guess with the expanded scalability of VMs, big workloads like OLTP can be virtualised on Windows Server 2012 Hyper-V, and they require big disks.
Virtual Fibre Channel allows you to virtualise the HBA in a Windows Server 2012 Hyper-V host, have a virtual fibre channel adapter in the VM with its own WWN (actually two, to be precise), and connect the VM directly to LUNs in a FC SAN.
Windows Server 2012 Hyper-V Virtual Fibre Channel is not intended or supported to do boot from SAN.
The VM will share bandwidth on the host's HBA (unless, I guess, you spend extra on additional HBAs) and cross the SAN to connect to the controllers in the FC storage solution.
The SAN must support NPIV (N_Port ID Virtualization). Each VM can have up to 4 virtual HBAs. Each virtual HBA has its own identification on the SAN.
How It Works
You create a virtual SAN on the host (parent partition) for each HBA on the host that will be virtualised for VM connectivity to the SAN. This is a 1:1 binding between virtual SAN and physical HBA, similar to the old model of binding a virtual network to a physical NIC. You then create virtual HBAs in your VMs and connect them to the virtual SANs.
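If you prefer PowerShell to Hyper-V Manager, the binding can be sketched roughly like this on the WS2012 host. The virtual SAN name and VM name ("vSAN-A", "SQL01") are placeholders of my own, and the sketch assumes the host's FC ports are visible via the in-box Storage module:

    # Find the host's FC initiator ports (WWNN/WWPN pairs)
    $ports = @(Get-InitiatorPort | Where-Object { $_.ConnectionType -like "*Fibre*" })

    # Create a virtual SAN bound 1:1 to the first physical HBA port
    New-VMSan -Name "vSAN-A" -WorldWideNodeName $ports[0].NodeAddress -WorldWidePortName $ports[0].PortAddress

    # Add a virtual fibre channel adapter to the VM and connect it to that virtual SAN
    # (the two WWN sets are auto-generated; Set-VMFibreChannelHba can assign them manually)
    Add-VMFibreChannelHba -VMName "SQL01" -SanName "vSAN-A"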
And that's where things can get interesting. When you get into the FC world, you want fault tolerance with MPIO. A mistake people will make is to create two virtual HBAs and put them both on the same virtual SAN, and therefore on a single FC path through a single HBA. If that single cable breaks, or that physical HBA port fails, then the VM's MPIO is pointless because both virtual HBAs are on the same physical connection.
The correct approach for fault tolerance will be:
2 or more HBA connections in the host
1 virtual SAN for each HBA connection in the host.
1 virtual HBA in the VM for each virtual SAN, with each virtual HBA connected to a different virtual SAN
MPIO configured in the VM's guest OS. In fact, you can (and should) use your storage vendor's MPIO/DSM software in the VM's guest OS.
Now you have true SAN path fault tolerance at the physical, host, and virtual levels.
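Building on the earlier sketch (same placeholder names, and assuming the host's second FC port is in $ports[1]), the fault-tolerant layout would look something like this:

    # Second virtual SAN, bound to the host's second physical HBA port
    New-VMSan -Name "vSAN-B" -WorldWideNodeName $ports[1].NodeAddress -WorldWidePortName $ports[1].PortAddress

    # Second virtual HBA in the VM, connected to the other virtual SAN (the other physical path)
    Add-VMFibreChannelHba -VMName "SQL01" -SanName "vSAN-B"

    # Inside the guest OS (a WS2012 guest in this sketch): add the MPIO feature,
    # then layer the storage vendor's DSM on top of it
    Install-WindowsFeature -Name Multipath-IO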
Live Migration
One of the key themes of Hyper-V is "no new features that prevent Live Migration". So how does a VM that is connected to a FC SAN move from one host to another without breaking the IO stream from VM to storage?
There's a little bit of trickery involved here. Each virtual HBA in your VM must have 2 WWNs (either automatically created or manually defined), not just one. And here's why. There is a very brief period where a VM exists on two hosts during live migration. It is running on HostA and waiting to start on HostB. The switchover process is that the VM is paused on A and started on B. With FC, we need to ensure that the VM is able to connect and process IO.
So in the example below, the VM is connecting to storage using WWN A. During Live Migration, the new instance of the VM on the destination host is set up with WWN B. When the VM un-pauses on the destination host, it can instantly connect to the LUN and continue IO uninterrupted. Each subsequent Live Migration, either back to the original host or to any other host, will cause the VM to alternate between WWN A and WWN B. That holds true for each virtual HBA in the VM. You can have up to 64 hosts in your Hyper-V cluster, but each virtual fibre channel adapter will alternate between just 2 WWNs.
Alternating WWN addresses during a live migration
What you need to take from this is that your VM's LUNs need to be masked/zoned for both WWNs of each virtual fibre channel adapter in the VM.
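To get those WWNs for your zoning/masking work, you can read both sets from each virtual HBA; a quick sketch (the VM name is a placeholder):

    # Show WWN Set A and Set B for every virtual fibre channel adapter in the VM;
    # the LUNs must be zoned/masked for both port WWNs of each adapter
    Get-VMFibreChannelHba -VMName "SQL01" |
        Format-Table SanName, WorldWidePortNameSetA, WorldWidePortNameSetB -AutoSize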
Technical Requirements and Limits
First and foremost, you must have a FC SAN that supports NPIV. Your host must run Windows Server 2012. The host must have a FC HBA with a driver that supports Hyper-V and NPIV. You cannot use virtual fibre channel adapters to boot VMs from the SAN; they are for data LUNs only. The only supported guest operating systems for virtual fibre channel at this point are Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012.
This is a list of the HBAs that have support built into the Windows Server 2012 Beta:
Vendor Model
Brocade BR415 / BR815
Brocade BR425 / BR825
Brocade BR804
Brocade BR1860-1p / BR1860-2p
Emulex LPe16000 / LPe16002
Emulex LPe12000 / LPe12002 / LPe12004 / LPe1250
Emulex LPe11000 / LPe11002 / LPe11004 / LPe1150 / LPe111
QLogic Qxx25xx Fibre Channel HBAs
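To check what HBA models, drivers, and firmware your hosts actually have before comparing against a list like that, the FC HBA API WMI classes can help. A rough sketch, assuming the HBA driver populates the standard MSFC classes (property names can vary slightly by driver):

    # Query the fibre channel HBA attributes exposed in the root\WMI namespace
    Get-WmiObject -Namespace root\WMI -Class MSFC_FCAdapterHBAAttributes |
        Format-Table Manufacturer, Model, DriverVersion, FirmwareVersion -AutoSize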