25 Mar 2014

Top Windows Server 2012 R2 Hyper-V Virtualization Features

Windows Server 2012 R2 brings with it a host of new virtualization features, as well as improvements to existing features and capabilities. Refer to our 'What's New in Windows Server 2012 R2' article for a more general overview, but read on for a list of some of the top new virtualization features found in the R2 release.

1. Hybrid Cloud

Windows Azure Infrastructure-as-a-Service (IaaS) is built on the same hypervisor as Windows Server. This means that there is complete virtual machine compatibility between the private cloud, partner public clouds, and the Microsoft-owned public cloud. Customers now have to ask themselves: “Where do I want my service to run today?”

2. Compressed Live Migration

A compression engine is built into Live Migration in Windows Server 2012 R2 Hyper-V. The processor in hosts is often underused, so this engine makes use of this spare resource to compress the memory of virtual machines that are being moved before the memory pages are copied across the Live Migration network. Hyper-V will monitor the utilization of CPU resources on the host and throttle compression to prioritize guest services. Enabling Live Migration compression on networks with 10 Gbps or less without Remote Direct Memory Access (RDMA/SMB Direct) support will greatly reduce the time it takes to move virtual machines (not including storage migration).
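
If you want to script this, the performance option is a per-host setting. A minimal PowerShell sketch, assuming the WS2012 R2 Hyper-V module and that you run it on both hosts:

  # Select compression as the live migration performance option
  Set-VMHost -VirtualMachineMigrationPerformanceOption Compression

  # Verify the current setting
  Get-VMHost | Select-Object VirtualMachineMigrationPerformanceOption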

3. SMB Direct Live Migration

Live Migration can be configured to leverage SMB Direct (Remote Direct Memory Access, or RDMA) on hosts that have NICs with support for this feature. This provides a hardware-offloaded, accelerated copy of memory pages using SMB 3.0 NICs. This can take advantage of SMB Multichannel to span multiple networks. SMB Direct Live Migration provides the fastest way to live migrate virtual machines (not including storage) from one host to another.
A crazy fact: Memory speed will be the bottleneck on a host with PCIe 3.0 support and three RDMA NICs for Live Migration!

This feature allows very interesting new architectures, especially where organizations have decided to deploy SMB 3.0 storage with support for SMB Direct. Investments in RDMA can be leveraged to move virtual machines very rapidly over these physical networks (with QoS applied for SLA). For example, Cluster Aware Updating (CAU) will be performed much more rapidly.
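
Switching a host over to SMB-based live migration is the same per-host setting with a different value. A hedged sketch (the subnet shown is a hypothetical RDMA network):

  # Use SMB 3.0 (Multichannel/SMB Direct) for live migration transfers
  Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

  # Ensure live migration is enabled and bound to the intended network
  Enable-VMMigration
  Add-VMMigrationNetwork '172.16.2.0/24'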

4. Live Resizing of VHDX

Virtual hard disks of the VHDX format that are attached to the SCSI controllers of virtual machines can be resized without shutting down the virtual machine. VHDX files can be up- and down-sized. Downsizing can only occur if there is unpartitioned space within the VHDX. This feature supports both Windows and Linux guests.

Live Resizing of VHDX files will be of huge value to those running mission critical workloads. It will also offer a new self-service elasticity feature for clouds.
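
As a sketch, an online expansion of a SCSI-attached data disk might look like this with the Hyper-V PowerShell module (the path and size are hypothetical). Note that after growing the VHDX you still have to extend the partition inside the guest:

  # Grow a VHDX while the VM is running (WS2012 R2, virtual SCSI attachment only)
  Resize-VHD -Path 'C:\VMs\VM01\Data.vhdx' -SizeBytes 200GB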

5. Storage Quality of Service (QoS)

New storage metrics for IOPS have been added to WS2012 R2. With these metrics, you can determine the IOPS requirements of virtual machines and put caps on storage activity. This will limit how much physical disk activity virtual machines can create, and therefore limit the damage that activity spikes can cause to other virtual machines and their guest services.

One of the concerns with shared storage is the possibility of a race for storage throughput. Enabling Storage QoS will limit the damage that any virtual machine or tenant can do in a cloud.
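
The caps are applied per virtual hard disk. A hedged example using the WS2012 R2 cmdlet (the VM name, controller location, and values are hypothetical; Hyper-V normalizes IOPS in 8 KB increments):

  # Cap a data disk at 500 IOPS; log an event if it cannot achieve 100 IOPS
  Set-VMHardDiskDrive -VMName 'VM01' -ControllerType SCSI -ControllerNumber 0 `
      -ControllerLocation 1 -MaximumIOPS 500 -MinimumIOPS 100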

6. Live Virtual Machine Cloning

WS2012 R2 Hyper-V allows you to clone a running virtual machine. This will create an exact copy of the virtual machine that is stored in a saved state. This feature supports GenerationID. That means you can use Live Virtual Machine Cloning to create Active Directory supported clones of a virtual domain controller that is not the PDC Emulator.

This feature will be useful for situations where you need to debug a production system or you want to perform tests, such as guest OS upgrades.
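
Live cloning rides on the export engine; in WS2012 R2, Export-VM no longer requires the virtual machine to be powered off. A minimal sketch (the VM name and path are hypothetical):

  # Export a copy of a running VM; import it later to create the clone
  Export-VM -Name 'VM01' -Path 'D:\Clones'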

7. Virtual Machine Export Improvements

You can export a virtual machine with a checkpoint (formerly known as a snapshot) and you can export a checkpoint of a virtual machine.
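
For example, exporting a single checkpoint as a standalone virtual machine might look like this (the VM, checkpoint, and path names are hypothetical):

  # Export one checkpoint of a VM to a folder
  Export-VMSnapshot -VMName 'VM01' -Name 'Before upgrade' -Path 'D:\Exports'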

8. Linux Guest OS Support Enhancements

Dynamic Memory will be supported in Linux guest OSs on Windows Server 2012 R2 Hyper-V. This will give much better memory optimization for Linux virtual machines, and it'll allow for much greater densities. Linux distributions with built-in Linux Integration Services for Hyper-V support are already available.

There will be support for online backup of Linux guest OSs. This is not Volume Shadow Copy Service (VSS) for Linux, and it does not give an application-consistent backup. Instead, a file system-consistent backup is created by freezing the file system. This feature does require an upgrade of any already deployed Linux Integration Services.

9. Shared VHDX

You can configure up to 64 virtual machines to share a single VHDX file on some shared storage (such as CSV or SMB 3.0). The VM sees the shared VHDX as a shared SAS disk with SCSI-3 persistent reservations. This is for data volumes to create guest clusters, and not for shared boot volumes. It works with down-level guest OSs, such as W2008 R2 with the WS2012 R2 Hyper-V Integration Components installed. This feature is supported by Service Templates in VMM 2012 R2.
This will drastically simplify guest clustering, where virtual machines are used to create a highly available service at the application layer. This could eliminate the need for guest attachment to physical LUNs and will be accommodating to self-service deployment within a cloud.
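
A hedged sketch of wiring up a shared data disk on two guest cluster nodes (the VM names and CSV path are hypothetical; the VHDX must reside on CSV or SMB 3.0 storage):

  # Attach the same VHDX to both nodes with persistent reservations enabled
  Add-VMHardDiskDrive -VMName 'Node1' -ControllerType SCSI `
      -Path 'C:\ClusterStorage\Volume1\GuestClusterData.vhdx' -SupportPersistentReservations
  Add-VMHardDiskDrive -VMName 'Node2' -ControllerType SCSI `
      -Path 'C:\ClusterStorage\Volume1\GuestClusterData.vhdx' -SupportPersistentReservations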

10. Hyper-V Replica Improvements

The default period for asynchronous replication of the Hyper-V Replica Log is every 5 minutes, but this can be changed to every 30 seconds or every 15 minutes. This allows companies to choose the allowed recovery point objective (RPO) – the maximum allowed amount of data loss in time.
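
The frequency is selected when you enable replication for a virtual machine. A hedged example (the replica server name is hypothetical; valid frequencies are 30, 300, and 900 seconds):

  # Enable replication with a 30-second replication frequency over Kerberos/HTTP
  Enable-VMReplication -VMName 'VM01' -ReplicaServerName 'replica01.contoso.com' `
      -ReplicaServerPort 80 -AuthenticationType Kerberos -ReplicationFrequencySec 30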

Hyper-V Replica can now be extended to a third site. This is an A-B-C extension, and not an A-B/A-C extension. For example, a company might replicate virtual machines from the primary site to a local secondary site. This might be configured to happen every 30 seconds. Replica virtual machines in the secondary site might be replicated to a distant third site (such as a hosting company) maybe every 15 minutes. In the event of an unplanned failover, this would give an RPO of 30 seconds in the secondary site and an RPO of 15 minutes and 30 seconds in the third site.

The performance and scalability of Hyper-V Replica has been improved. Maintaining historical copies of virtual machines in the secondary site is costly (IOPS). This has been reduced, so maintaining historical copies of your replica VMs will not punish your storage in the secondary site.

11. VM Connect

The crippled virtual machine connection of the past is being replaced by a Remote Desktop experience that is built into the virtualization stack. This has no dependency on the virtual machine’s networking. By default, this feature is disabled in WS2012 R2 Hyper-V and enabled in Windows 8.1 Client Hyper-V.
Things that Remote Desktop VM Connect allows you to do include:
· Copy & paste text/images.

· Copy files to/from the client desktop.

· Do session-based USB redirection. This means you might use a USB stick to copy files. It is not a USB dongle solution.
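
Under the hood this is the Enhanced Session Mode policy, which can be toggled per host. A minimal sketch for a WS2012 R2 host:

  # Allow Remote Desktop-based VM Connect sessions on this host
  Set-VMHost -EnableEnhancedSessionMode $true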

12. Cross-Version Shared-Nothing Live Migration

You can live migrate a virtual machine from WS2012 Hyper-V to WS2012 R2 Hyper-V. This could eliminate downtime when deploying WS2012 R2 Hyper-V. For example, you can deploy a WS2012 R2 Hyper-V host/cluster alongside an existing WS2012 Hyper-V host/cluster. You can then live migrate the virtual machines from the older platform to the new platform with zero downtime to the availability of the services provided by the virtual machines.
Note that you cannot do a Live Migration from WS2012 R2 Hyper-V to WS2012 Hyper-V. It is a one-way upgrade path.
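
A hedged sketch of such a cross-version, shared-nothing move (the destination host name and path are hypothetical):

  # Live migrate a VM and its storage from a WS2012 host to a WS2012 R2 host
  Move-VM -Name 'VM01' -DestinationHost 'HV2012R2-01' `
      -IncludeStorage -DestinationStoragePath 'D:\VMs\VM01'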

13. System Center Release Alignment

This is the first time that Windows Server (and thus Hyper-V) is being developed with and released at or close to the same time as System Center. There has been closer cooperation than ever before between the Windows Server and System Center (WSSC) groups. That means you can deploy System Center immediately and follow it up with WS2012 R2 Hyper-V, without the long delays of the past.


Create Guest Clusters in Windows Server 2012 Hyper-V

In this article, we'll show you how to create a guest cluster using virtual machines with Windows Server 2012 Hyper-V. We'll also find out what Microsoft is doing with Windows Server 2012 R2 Hyper-V to make guest clustering easier.

Host Clusters Are Not the Whole Answer

Most people are familiar with the concept of a Hyper-V cluster. This is where a number of hosts, attached to some shared storage, run highly available virtual machines. If a host has an unexpected failure, then the virtual machines on that host stop running. The cluster will automatically failover the virtual machines to another host(s) and start them up. This is great because it minimizes downtime to services to less than a few minutes. But for some businesses, even a few seconds of downtime is bad.

Often forgotten is the fact that virtual machines are running operating systems (Windows or Linux) too. Guest operating systems can crash, and they need reboots after maintenance, all of which also leads to service downtime. The same is true of the services that are running in these VMs. There is nothing that any clustered hypervisor can do about this for real mission critical systems. That is why we need to extend high availability (HA) into the guest operating system and/or application:

  • Load balancing: Some applications can be load balanced across multiple virtual machines with no shared storage.
  • Guest clustering: Some services require shared storage and will require the active/passive failover functionality of a cluster. You can create a cluster with up to 64 virtual machines.

It makes sense to keep these VMs on different hosts – you don’t want both your load-balanced or clustered VMs to be on the same host. We can use Failover Clustering anti-affinity to keep “paired” virtual machines on different hosts. System Center Virtual Machine Manager makes this really easy by using the Availability Sets feature (this shows up as a checkbox for the service tier in Service Templates).
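
Outside of VMM, the same hint can be set directly on the cluster groups with the Failover Clustering module. A sketch with hypothetical group and class names:

  # Tag both guest cluster nodes with the same anti-affinity class name
  $class = New-Object System.Collections.Specialized.StringCollection
  $class.Add('GuestClusterSQL') | Out-Null
  (Get-ClusterGroup 'SQLNode1').AntiAffinityClassNames = $class
  (Get-ClusterGroup 'SQLNode2').AntiAffinityClassNames = $class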

Networking the Guest Cluster

You will require typical networks that are used by a traditional physical cluster:

  • Guest access: One or more networks are required depending on the service provided.
  • Heartbeat: At least one (and maybe two) “private” heartbeat networks. Do not use a private virtual switch because it cannot span hosts.
  • Storage: You might require networks to connect to the guest cluster’s shared storage (SMB 3.0 or iSCSI).

Shared Storage in Windows Server 2012 Hyper-V

The guest cluster will require some form of shared storage. There will be a need for a witness disk (for clusters with an even number of VMs) and at least one disk for the shared data. In Windows Server 2012 Hyper-V the options are as follows.

  • SMB 3.0: This is Microsoft’s preferred storage protocol. You can use WS2012 or later file shares as the guest cluster’s shared storage. Virtual machines cannot use Receive Side Scaling so SMB Multichannel cannot make use of large physical network connections. However, SMB Multichannel can use multiple virtual network cards.
  • Virtual Fiber Channel: You can virtualize NPIV-capable host bus adapters (HBAs) in the host and add up to four virtual HBAs in a virtual machine. The SAN manufacturer’s MPIO can be installed and configured in the guest OS. This will allow virtual machines to leverage an investment in an expensive fiber channel SAN.
  • iSCSI: With some engineering on the host and physical network, you can add virtual NICs to virtual machines to be used exclusively for communicating with an iSCSI SAN. You must ensure that any design complies with the SAN manufacturer’s support statements.

Each of these designs has positives and negatives. However, there is a major concern: The virtualization layer (guest OS) is now directly accessing the physical layer (storage network and LUNs). This complicates designs, requires manual engineering to implement a guest cluster (contradicting a cloud’s requirement for self-service), and raises security concerns in a multi-tenant cloud. As a result, many hosting companies will refuse to offer guest clusters as a deployment option.

Note that the shared storage must be accessible by the virtual machine from every host in the host cluster. That is because Hyper-V guest clusters support Live Migration with all of the above storage solutions.

Guest Clusters in Windows Server 2012 R2 Hyper-V

Windows Server 2012 R2 Hyper-V will support the aforementioned storage solutions for a guest cluster, but a new 100% virtualized option is possible: WS2012 R2 Hyper-V (and therefore Hyper-V Server 2012 R2) is introducing Shared VHDX. This means that you can deploy a guest cluster where each VM in the cluster is connected to two (witness + data) or more VHDX files. These VHDX files reside on the same SMB 3.0 shares or CSVs as the virtual machines’ files. And that means that there is no physical engineering required to deploy a guest cluster, making guest clusters a realistic option in a true cloud with self-service – great news for hosting companies and their customers.


System Center 2012 R2 – Virtual Machine Manager will support deploying guest clusters with shared VHDX files. With this support, tenants in a cloud will be able to deploy clusters more easily than ever before – without any IT involvement.

In summary, implementing HA at the host level does not create highly available services. This requires designing your applications for load balancing or creating guest clusters. Virtual machines can be deployed across your hosts to create guest clusters, and this will make the services highly available. You can use traditional block storage or file storage in Windows Server 2012 Hyper-V, but this complicates storage engineering. Windows Server 2012 R2 Hyper-V is adding Shared VHDX functionality, which will simplify guest cluster deployment, making it very cloud friendly.

Maximize Hyper-V Live Migration with 10GbE Network Bandwidth

Larger virtual machines (up to 1 TB RAM in WS2012 Hyper-V) and denser hosts (up to 4 TB RAM) mean that there can be a lot of memory for live migration to copy and synchronize in order to move virtual machines around with imperceptible service downtime. In this article, I'll show you how to make the most of Hyper-V live migration by maximizing 10 GbE or faster network bandwidth to make that migration quicker.

When 1 Gbps Networking Isn't Fast Enough

The live migration of virtual machines copies and then synchronizes the RAM of VMs from one host to another. Imagine that you are running a data center with hosts and virtual machines that have huge amounts of RAM. Load balancing virtual machines, performing preventative maintenance (such as Cluster Aware Updating), or trying to drain a host for unplanned maintenance could take an eternity if you’re trying to move hundreds of gigabytes or even terabytes of RAM around on 1 Gbps networking. This was why it was recommended to look at embracing 10 GbE or faster networking, at least for the live migration network in a Hyper-V cluster. Sadly, even with all the possible tuning, Windows Server 2008 R2 could not get much more than 70% utilization of this bandwidth, and there wasn’t a good way to use this expensive NIC and switch port for more than just live migration.

Windows Server 2012 (WS2012) and later make adopting 10 GbE or faster networking more feasible. Firstly, WS2012 can make use of all of the bandwidth that you provide it, including 56 Gbps Infiniband. Secondly, you can (thanks to Quality of Service and converged networks) use this expensive bandwidth to perform many functions, which means you require fewer overall NICs and switch ports per host, helping to offset some of the investment in this higher capacity networking. For example, 10 GbE networking could be used for live migration, cluster communications, and the SMB 3.0 storage (which is more affordable than traditional block storage).

Tuning Your Hosts

Many who have convinced their bosses to invest in 10 GbE or faster networking have a disappointing first experience because they have not performed the necessary preparations. They'll rush to do a test live migration, maybe see 2 Gbps utilization with occasional brief spikes to 4 or 6 Gbps, and then rush to blame the wrong thing. Whether you are running W2008 R2 Hyper-V or later, there are four things to consider if you want to make full use of the potential bandwidth:

  1. Firmware: Make sure you have the latest stable firmware for your host server and your network cards.
  2. Host BIOS Settings: Your host hardware might be configured to run at a less than optimal level, and therefore not handle the interrupts fast enough to make full utilization of the bandwidth. Your server’s manufacturer should have guidance for configuring the BIOS settings for Hyper-V. Typically you need to disable some processor C states and set the power profile to maximum performance.
  3. Drivers: Download and install the latest stable version of the network card driver for your version of Windows Server that is running in the host’s management OS. Do not assume that the in-box or Microsoft driver will be sufficient, as it usually is not.
  4. Jumbo Frames: Configure the maximum possible size jumbo frame on your NICs and the switch(es) in-between the hosts. The jumbo frames setting can be found (the name varies from one NIC model/manufacturer to another) in the Advanced settings of a NIC’s properties in Control Panel > Network Connections.

You should test end-to-end jumbo frames by running a ping command such as ping -l 8400 -f 172.16.2.52 to verify that all NICs and intermediate switches are working correctly. The –l flag indicates the packet size to be used by ping. Note that experience shows that this must be slightly smaller than the actual packet size configured in the NIC settings. The –f flag indicates that the packet should not be fragmented. A successful ping indicates that Jumbo Frames are working as expected.
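
The NIC-side setting can also be pushed out with PowerShell instead of clicking through Control Panel. Treat the adapter name and registry keyword as assumptions, because both vary by NIC vendor:

  # Set a 9014-byte jumbo frame on the live migration NIC
  Set-NetAdapterAdvancedProperty -Name 'LiveMigration1' `
      -RegistryKeyword '*JumboPacket' -RegistryValue 9014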

Now you can test your live migration, and you should see much more bandwidth being used, depending on whether you have converged networks and whether you have configured QoS (WS2012 or later) or SMB bandwidth limitations (WS2012 R2).

The results of taking a few minutes to tune a host are simply amazing. The below screenshot is taken from a host that is using dual iWARP (10 GbE NICs with RDMA) NICs for WS2012 R2 live migration. Both NICs are fully utilized providing live migration speeds of up to 20 Gbps. In this example, a Linux virtual machine with 56 GB of RAM is moving from one WS2012 R2 Hyper-V host to another in under 36 seconds.


Hyper-V live migration making full use of 10 GbE networking.


What's New in Windows Server 2012 R2 Hyper-V Live Migration

In this article, we'll recap Windows Server 2012 (WS2012) Hyper-V Live Migration and take a look at the new features and improvements of Live Migration in Windows Server 2012 R2 (WS2012 R2) Hyper-V and Hyper-V Server 2012 R2.

Windows Server 2012 Live Migration Improvements

Live migration moves virtual machines from one location to another with no perceivable downtime to service delivery. It was first present in Hyper-V in Windows Server 2008 R2; there it could move the memory and processor state of a virtual machine from one host to another within the same Hyper-V cluster. The virtual machines' storage did not move; instead, it stayed static on the shared cluster storage.

Windows Server 2012 then added significant improvements to Live Migration, including the following.

  • Simultaneous live migration: More than one virtual machine could be moved at a time.
  • Queuing: Clusters can queue migrations if more virtual machines need to move than Live Migration slots are available.
  • Storage live migration: Any or all of the storage of a virtual machine can be moved from one storage location to another, such as from DAS to SAN to SMB 3.0 storage.
  • Clusters are not required: Failover clustering is not a requirement for WS2012 Hyper-V live migration.  You can perform live migration between any two WS2012 or Hyper-V Server 2012 hosts, clustered or not, as long as they share a common live migration network. Just make sure that your virtual machines will remain connected to a virtual switch with the same name (or the same logical network/switch in System Center Virtual Machine Manager).
  • SMB 3.0 live migration: Hosts, clustered or not, can live migrate virtual machines that are stored on common SMB 3.0 shares without the movement of files. This can greatly reduce the costs of virtualization for large hosting facilities, and introduce quick inter-cluster live migration.
  • Shared-nothing live migration: Two hosts can perform live migration of a virtual machine with no common storage. The process will move the files of the virtual machine using storage live migration before copying, synchronizing and switching over the memory/process of the virtual machine.

Why Use Windows Server 2012 R2 Hyper-V Live Migration?

That covers the “how” of live migration but not the “why” of live migration. The purpose of this feature, just like with VMware vMotion, is to provide agility and flexibility. The infrastructure needs the ability to adapt to the changing demands of services that are being delivered and the requirements of the business – without enforcing downtime on the business.

Thanks to live migration we can:

  • React to changes in resource utilization: System Center Virtual Machine Manager (VMM) provides Dynamic Optimization to load balance virtual machines across hosts within a Hyper-V cluster. It dynamically places virtual machines on the host with the most suitable available resources. Yes, Hyper-V and System Center do have an answer to vSphere DRS.
  • Optimize Power Consumption: An extension to Dynamic Optimization in VMM is Power Optimization, in which virtual machines within a Hyper-V cluster can be centralized to run on fewer hosts. Using baseboard management controllers, the power of idle hosts can be manipulated to reduce data center power costs.
  • Performance and Resource Optimization (PRO): System Center Operations Manager can detect an issue on a clustered Hyper-V host and request that VMM react by using live migration to move virtual machines off of the affected host.
  • Perform maintenance: Whether it is scheduled (such as Cluster Aware Update) or not we can drain a host of virtual machines ahead of any necessary host maintenance. Note that Failover Cluster Manager (Pause) and VMM (Maintenance Mode) offer facilities to perform this drain for us on clustered nodes.
  • Implement a new network footprint: A large data center might want to install an entirely new network footprint. Virtual machines could be live migrated to new hosts in that footprint with no downtime. The changes to the IP ranges can be overcome by implementing Hyper-V Network Virtualization (HNV aka Windows Network Virtualization or WNV) to abstract the IP addresses of the virtual machines so that service communications are not interrupted.

Those are just some day-to-day examples of the many reasons we (or System Center on our behalf) would use live migration to move virtual machines around the data center. This all sounds fantastic. How could Microsoft top WS2012 live migration? Surely this would be impossible in such a short time frame? Well, Microsoft did surprise us when they announced the new live migration features in WS2012 R2.

Cross-Version Live Migration

Before we even talk about the coolest features of WS2012 R2 live migration we need to discuss how we get from WS2012 Hyper-V to WS2012 R2 Hyper-V. This topic, as with the rest of the discussion, includes the free Hyper-V Server. Roughly speaking, there are two generic ways to get your existing virtual machines on WS2012 Hyper-V to run on WS2012 R2 Hyper-V:

  • Upgrade: This is an in-place upgrade of the host. There is a significant amount of downtime and it is reserved for non-clustered hosts because Windows failover clustering does not support mixed Windows Server versions in the same cluster.
  • Migration: A new (or drained and rebuilt) host is prepared with WS2012 R2 Hyper-V and virtual machines are moved to it. This is the only option for clusters and has always had downtime. This time could be brief (by remapping CSVs using the Cluster Migration Wizard) or long (export/import virtual machines one-by-one).

Migration offers the least amount of downtime during an upgrade, but there is still downtime. And that's unfortunate for any service provider whether the customers are internal or external. It's bad enough if a new version of Windows Server is released every three years, but imagine all those engineering hours at 3 a.m. on a Saturday if Microsoft was to release a new version of Windows Server just 12 months after the last one and continue to keep up that pace... oh wait – they are doing that now.

This is why cross-version Live Migration was added to WS2012 R2 Hyper-V. With this feature we can build a host with WS2012 R2 (this could be an existing WS2012 host that is drained and rebuilt) and perform a one-way-only Live Migration of virtual machines from WS2012 Hyper-V to the new WS2012 R2 host.

This does give us the ability to do a zero-downtime-to-service upgrade of our hosts or clusters. Don’t forget that you’ll need to deploy updated integration components to virtual machines that will require a reboot, but at least this can be automated and scheduled using something like System Center Configuration Manager (see collection maintenance windows).


Cross-version live migration for zero-downtime cluster rebuilds

There are limits to cross-version Live Migration:

  • One-way: This is a one-way journey to WS2012 R2 Hyper-V. There is no roll-back without restoring from backup.
  • 2012-to-2012 R2: You can only do this from the 2012 generation of Hyper-V to the 2012 R2 generation of Hyper-V. Those who are running older generations such as 2008 or 2008 R2 should have kept up and will have no option but to have downtime.
  • Storage Capacity: You can decommission CSVs in the old cluster only after they are drained of virtual machines. You will need sufficient storage capacity to provision new CSVs in a new WS2012 R2 cluster while you perform the migration from the old cluster.

Sadly, we cannot do an in-place upgrade of a cluster. Microsoft is aware of our request for this, and cross-version live migration will ease the pain somewhat, but many will choose to go with the CSV migration approach of the Cluster Migration Wizard because they have limited storage capacity.

Live Migration Performance Options

One of the nicest improvements of WS2012 (not restricted to just live migration) was the ability to use the full capacity of 10 Gbps or faster networking. Hosts have increasing levels of capacity and virtual machines are getting bigger and bigger (up to 1 TB RAM in WS2012/R2 Hyper-V). It takes time to (a) copy and (b) synchronize the memory of those virtual machines between two hosts, and that can introduce delays in planned or unplanned emergency maintenance. Adding 10 GbE or faster networking that can be used by live migration greatly reduces that time, maybe getting virtual machines off of a failing host before an interruption to service occurs.

Microsoft recognized that there was a need to optimize live migration beyond the algorithm improvements in WS2012. Even non-optimized 10GbE might not be enough for massive hosts. Many customers have investments in 1 GbE networking and have no plans to upgrade soon. This is why Microsoft has given us three types of live migration in WS2012 R2:

  • TCP/IP: The legacy method of Live Migration as found in WS2012 Hyper-V
  • Compression: Data is compressed before transport
  • SMB: The features of SMB 3.0 are used for Live Migration

These options can be found in the host settings in Hyper-V Manager:


Configuring live migration method in WS2012 R2 Hyper-V.

Live Migration Compression

The processor capacity of hosts is generally underutilized and Microsoft leverages this using compressed live migration. With this option enabled, live migration will use the spare processor capacity of the host to compress live migration on the source host and decompress it on the destination host. Hyper-V is very careful; it monitors the demands on the processor by higher priority tasks, such as virtual machines, and prioritizes their needs. So if there is no spare processor capacity, live migration traffic will not be compressed.

Live migration compression is the best option when you have 1 GbE networks. The improvements in live migration times are significant in the typical host where processor resources might be 25-33% utilized.

SMB Live Migration

Server Message Block (SMB) is the protocol used by Microsoft for file services. WS2012 introduced a new version called SMB 3.0. This added two significant new features to this data transfer protocol:

  • SMB Multichannel: Using Receive Side Scaling (RSS), SMB Multichannel can transmit data over multiple parallel streams on a single NIC. SMB Multichannel can also send data over multiple NICs between the source and destination server, with dynamic discovery and failover. This can include the ability to send multiple streams over multiple NICs. In other words, SMB 3.0 can send and receive over huge amounts of bandwidth, such as 2 * 10 GbE or more.
  • SMB Direct: Processing huge amounts of bandwidth incurs a processor and latency cost. SMB 3.0 can use Remote Direct Memory Access (RDMA) enabled NICs (rNICs) to offload this processing to get faster data transfer with little processing cost.

WS2012 R2 Hyper-V can not only use SMB 3.0 networks for storing virtual machines on file servers (as with WS2012) but it also adds a new trick: WS2012 R2 Hyper-V can use these super-fast high-bandwidth networks to perform Live Migration. The recommendation from Microsoft is to use SMB-powered live migration when you have 10 Gbps or faster networking:

  • Single NIC: SMB Multichannel will use RSS to fill the bandwidth and perform the migration more quickly.
  • Multiple NICs: SMB Multichannel can use more than one NIC to double, triple, etc., the bandwidth that can be used by live migration.
  • RDMA: If the host has rNICs then SMB Direct will greatly reduce the processor requirements of the data transfer and reduce communications latency.

SMB live migration is the fastest option – assuming you have 10 GbE or faster networking – and for many virtual machines this brings the time required for each virtual machine down to the lowest possible theoretical time. There is a certain amount of time required to “build up” the virtual machine on the destination host, copy/synchronize the memory, and then “tear down” the virtual machine on the source host. Networking can improve the copy/transfer of the virtual machine, but it has no impact on the “build up” or “tear down."

NIC Teaming

Most of the conversations about WS2012 R2 live migration stop at this point, but there is one other improvement in WS2012 R2 that affects live migration. NIC teaming has a new load balancing mode in WS2012 R2 called Dynamic. This type of traffic distribution uses flowlets to spread the inbound and outbound traffic of a single data transfer across all of the team members (physical NICs) of a NIC team. This means that a team of 1 GbE NICs with compressed live migration could spread the transfer across all of the team members, thus giving you the extra bandwidth effect that SMB Multichannel gives 10 GbE or faster NICs.
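
Creating such a team is a one-liner. A minimal sketch (the team and NIC names are hypothetical):

  # Create a switch-independent team that uses the new Dynamic distribution mode
  New-NetLbfoTeam -Name 'Team1' -TeamMembers 'NIC1','NIC2' `
      -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic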

Note that NIC teaming and RDMA are incompatible.

Observing the Possibilities

In tests, we have seen the following:

  • 1 GbE (including 1 GbE teams): Enable live migration compression. We have observed significant reductions in the time required for live migration over the legacy TCP/IP option.
  • 10 GbE networking: Enable SMB live migration. The performance of this option is incredible. You can use the concepts of converged networks (using QoS and SMB Multichannel Constraints) to merge the cluster, live migration, and SMB 3.0 storage networks of a Hyper-V cluster to balance costs and performance.
  • rNICs: Do not team your rNICs. This is because RDMA is incompatible with NIC teaming and Scale-Out File Servers require SMB Multichannel networks to be on different subnets. Instead, use the rNICs as cluster network 1 and cluster network 2 (and more if you are lucky enough to have sufficient rNICs) on two different subnets. Use SMB Multichannel Constraints to restrict SMB 3.0 to these networks.
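
A hedged sketch of such a constraint (the file server name and interface aliases are hypothetical):

  # Restrict SMB 3.0 traffic to the two rNIC cluster networks
  New-SmbMultichannelConstraint -ServerName 'SOFS01' -InterfaceAlias 'rNIC1','rNIC2'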

What Is Windows Server 2012 Hyper-V Live Migration?

In this article, we'll look at the capabilities of Windows Server 2012 Live Migration and explain how this flexibility solution works.

What Is Live Migration?

Live Migration is the equivalent of vMotion. The purpose of this feature is to move virtual machines from one location to another without any downtime. Well, that’s the perception of Live Migration and vMotion. As anyone who has ever used these features in a lab will know, there is actually some downtime when vMotion or Live Migration is used. A better definition would be: Live Migration (or vMotion) allows you to move virtual machines without losing service availability. That’s a very subtle difference in definitions, which we will explain later on in this article.

The purpose of Live Migration is flexibility. Virtual machines are abstracted from the hardware on which they run. This flexibility allows us to match our virtual machines to our resources and to replace hardware more easily. It makes IT and the business more agile and responsive – all without impacting the operations of the business.

Back to Basics

Often there is confusion between Live Migration and high availability (HA). This is due to the fact that Live Migration (and vMotion) historically required a host cluster with shared storage. But things have changed, and it’s important to understand the differences between Live Migration and HA.

Live Migration is a proactive operation. Maybe an administrator wants to power down a host and is draining it of virtual machines. The process moves the virtual machines, over a designated Live Migration network, with no drop in service availability. Maybe System Center wants to load balance virtual machines (VMM Dynamic Optimization). Live Migration is a planned and preventative action – virtual machines move with no downtime to service availability.

High availability, on the other hand, is reactive and unplanned. HA is the function of failover clustering in the Windows Server world. Hosts are clustered and virtual machines are marked as being highly available. Those virtual machines are stored on some shared storage, such as a SAN, a shared Storage Pool, or a common SMB 3.0 share. If a host fails, all of the virtual machines that were running on it stop. The other hosts in the cluster detect the failure via failed heartbeats. The remaining hosts failover the virtual machines that were on the now dead host. Those failed over virtual machines automatically power up. You’ll note that there is downtime.

Read those two paragraphs again. There was no mention of failover clustering when Live Migration was discussed as a planned operation. Windows Server 2012 Hyper-V Live Migration does not require failover clustering: You can do Live Migration without the presence of a cluster. However, HA is the reason that failover clustering exists.

Promises of Live Migration

There are two very important promises made by Microsoft when it comes to Live Migration:

  • The virtual machine will remain running no matter what happens. Hyper-V Live Migration does not burn bridges. The source copy of a virtual machine and its files remain where they are until a move is completed and verified. If something goes wrong during the move, the virtual machine will remain running in the source location. Those who stress-tested Live Migration in the beta of Windows Server 2012 witnessed how this worked. It is reassuring to know that you can move mission critical workloads without risk to service uptime.
  • No new features will prevent Live Migration. Microsoft understands the importance of flexibility. All new features will be designed and implemented to allow Live Migration. Examples of features that have caused movement restrictions on other platforms are Single Root IO Virtualization (SR-IOV) and virtual fiber channel. There are no such restrictions with Hyper-V – you can quite happily move Hyper-V virtual machines with every feature enabled.

Live Migration Changes in Windows Server 2012 Hyper-V

Windows Server 2012 features a number of major changes to Live Migration, some of which shook up the virtualization industry when they were first announced.

  • Performance enhancements: Some changes were made to the memory synchronization algorithm to reduce page copies from the source host to the destination host.
  • Simultaneous Live Migration: You can perform multiple simultaneous Live Migrations across a network between two hosts, with no arbitrary limits.
  • Live Migration Queuing: A clustered host can queue up lots of Live Migrations so that virtual machines can take it in turn to move.
  • Storage Live Migration: We can move the files (all or some) of a virtual machine without affecting the availability of services provided by that virtual machine.
  • SMB 3.0 and Live Migration: The new Windows Server shared folder storage system is supported as shared storage for Live Migration with or without a Hyper-V cluster.
  • Shared Nothing Live Migration: We can move virtual machines between two non-clustered hosts, between a non-clustered host and a clustered host, and between two clustered hosts.

Performance Enhancements

Let’s discuss how Live Migration worked in Windows Server 2008 R2 Hyper-V before we look at how the algorithm was tuned. Say a virtual machine, VM01, is running on HostA. We decide we want to move the virtual machine to HostB via Live Migration. The process will work as follows:

  1. Hyper-V will create a copy of VM01’s specification and configure dependencies on HostB.
  2. The memory of VM01 is divided up into a bitmap that tracks changes to the pages. Each page is copied, from first to last, from HostA to HostB, and each page is marked as clean after it is copied.
  3. The virtual machine is running, so memory is changing. Each changed page is marked as dirty in the bitmap. Live Migration will copy the dirty pages again, marking them clean after the copy. The virtual machine is still running, so some of the pages will change again and be marked as dirty. The dirty copy process will repeat until (a) it has been done 10 times or (b) there is almost nothing left to copy.
  4. What remains of VM01 that has not been copied to HostB is referred to as the state. At this point VM01 is paused on HostA.
  5. The state is copied from HostA to HostB, thus completing the virtual machine copy.
  6. VM01 is resumed on HostB.
  7. If VM01 runs successfully on HostB then all trace of it is removed from HostA.

This process moves the memory and processor of the virtual machine from HostA to HostB, both in the same host cluster. The files of the virtual machine are on some shared storage (a SAN in Windows Server 2008 R2) that is used by the cluster.

It is between the pause in step 4 and the resume in step 6 that the virtual machine is actually offline. This is where a ping test drops a packet. Ping is a tool based on the ICMP diagnostic protocol. Ping is designed to find latency. That’s exactly what happens when that ping fails to respond during Live Migration or vMotion. The virtual machine is briefly unavailable. Most applications are based on more tolerant protocols which will allow servers several seconds to respond. Both vMotion and Live Migration take advantage of that during the switch over of the virtual machine from the source to the destination host. That means your end users can be reading the email, using the CRM client, or connected to a Citrix XenApp server, and they might not notice anything other than a slight dip in performance for a second or two. That’s a very small price for a business-friendly feature like Live Migration or vMotion.

Aside from the removal of the cluster requirement, the other big change to this process in Windows Server 2012 is that the first memory copy from HostA to HostB has been tuned to reflect memory activity. The initial page copy is prioritized, with the least used memory being copied first and the most recently used memory being copied last. This should lead to fewer copy iterations and faster Live Migration of individual virtual machines.

Simultaneous Live Migration

In Windows Server 2008 R2, we could only perform one simultaneous Live Migration between any two hosts within a cluster. With host capacities growing (up to 4 TB RAM and 1,024 VMs on a host) we need to be able to move virtual machines more quickly. Imagine how long it would take to drain a host with 256 GB RAM over a 1 GbE link! Hosts of this capacity (or greater) should use 10 GbE networking for the Live Migration network. Windows Server 2008 R2 couldn’t make full use of this bandwidth – but Windows Server 2012 can. Combined with simultaneous Live Migration, Hyper-V can move lots of virtual machines very quickly, taking advantage of 10 Gbps, 40 Gbps, or even 56 Gbps networking! This makes large data center operations happen very quickly.

The default number of simultaneous Live Migrations is two, as you can see in the below screenshot. You can tune the host based on its capabilities. Running too many Live Migrations at once is expensive; not only does it consume the bandwidth of the Live Migration network (which might be converged with other networks) but it also consumes resources on the source and destination hosts. Don’t worry – Hyper-V will protect you from yourself. Hyper-V will only perform the number of concurrent Live Migrations that it can successfully do.

Simultaneous Live Migration settings in Windows Server 2012 Hyper-V.

A common question is this: My source host is configured to allow 20 concurrent Live Migrations and my destination host will allow five. How many Live Migrations will be done? The answer is simple: Hyper-V will respect every host’s maximum, so only five Live Migrations will happen at once between these two hosts.

You might also notice in the above screenshot that Storage (Live) Migration also has a concurrency limit, which defaults to two.
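
Both limits are per-host settings that you can tune to your hardware. A minimal sketch with hypothetical values:

  # Allow four concurrent live migrations and four concurrent storage migrations
  Set-VMHost -MaximumVirtualMachineMigrations 4 -MaximumStorageMigrations 4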

Live Migration Queuing

Imagine you have a cluster with two nodes, HostA and HostB. Both nodes are configured to allow ten simultaneous Live Migrations. HostA is running 100 virtual machines and you want to place this host in maintenance mode. Failover Cluster Manager will orchestrate the Live Migration of the virtual machines. All virtual machines will queue up, and up to ten (depending on host resources) will live migrate at the same time. As virtual machines leave HostA, other virtual machines will start to live migrate, and eventually all of the virtual machines will be running on HostB.

Storage Live Migration

A much sought-after feature for Hyper-V was the ability to relocate the files of a virtual machine without affecting service uptime. This is what Storage Live Migration gives us. The tricky bit is moving the active virtual hard disks because they are being updated. Here is how Microsoft made the process work:

  1. The running virtual machine is using its virtual hard disk which is stored on the source device.
  2. An administrator decides to move the virtual machine’s files and Hyper-V starts to copy the virtual hard disk to the destination device.
  3. The IO for the virtual hard disk continues as normal but now it is mirrored to the copy that is being built up in the destination device.
  4. Live Migration has a promise to live up to: the new virtual hard disk is verified as successfully copied.
  5. Finally, the files of the virtual machine can be removed from the source device.

Windows Server 2012 Hyper-V storage live migration.

Storage Live Migration can move all of the files of a virtual machine as follows:

  • From one folder to another on the same volume
  • To another drive
  • From one storage device to another, such as from a local drive to an SMB 3.0 share
  • From one server to another

When using Storage Live Migration, you can choose to:

  • Move all files into a single folder for the virtual machine
  • Only move some files
  • Scatter the various files of a virtual machine to different specified locations
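
All of those options funnel into a single cmdlet. A hedged sketch of the simplest case, moving everything to one folder (the VM name and path are hypothetical):

  # Move all of a running VM's files to a new location with no downtime
  Move-VMStorage -VMName 'VM01' -DestinationStoragePath 'C:\ClusterStorage\Volume2\VM01'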

SMB 3.0 and Live Migration

Windows Server 2012 introduces SMB 3.0 – an economic, continuously available, and scalable storage strategy that is supported by Windows Server 2012 Hyper-V. Live Migration supports storing virtual machines on SMB 3.0 shared storage. This means that a virtual machine can be running on HostA and be quickly moved to run on HostB, without moving the files of the virtual machine. Scenarios include a failover cluster of hosts using a common SMB 3.0 share and a collection of non-clustered hosts that have access to a common SMB 3.0 share.

Shared-Nothing Live Migration

Thanks to Shared-Nothing Live Migration we can move virtual machines between any two Windows Server 2012 Hyper-V hosts that do not have any shared storage. This means we can move virtual machines:

  • Move the virtual machine that is stored on the local drive of a non-clustered host to another non-clustered host, and store the files on the destination host’s storage.
  • From a non-clustered host to a clustered host, with the files placed on the cluster’s shared storage. Then we can make the virtual machine highly available and add it to the cluster.
  • Remove the highly available attribute of a virtual machine and move it from a clustered host to a non-clustered host.
  • Remove the highly available attribute of a virtual machine and move it from a host in a source cluster to a host in a destination cluster, where the virtual machine will be made highly available again.

In other words, it doesn’t matter what kind of Windows Server 2012 or Hyper-V Server 2012 host you have. You can move that virtual machine.

Hyper-V Is Breaking Down Barriers to IT Agility

You can easily move virtual machines from one host to another in Windows Server 2012 Hyper-V. The requirements for this are as follows.

  • You are running Windows Server 2012 Hyper-V on the source and destination hosts.
  • The source and destination hosts have the same processor family (all Intel or all AMD).
  • If there are mixed generations of processor then you might have to enable processor compatibility mode in the settings of the virtual machine. It is a good idea to always buy the newest processor generation when acquiring new hosts – this gives you a better chance of buying compatible host processors in 12-18 months’ time.
  • The hosts must be in the same domain.
  • You have at least 1 GbE of connectivity between the hosts. This bandwidth can be a share of a greater link that is guaranteed by the features of converged networking (such as QoS). Ideally, you will size the Live Migration network according to the amount of RAM in the hosts and the time it takes to drain the host for maintenance.

With all of those basic requirements configured, the only barrier remaining to Live Migration is the network that the virtual machine is running on. For example, if the virtual machine is running on 192.168.1.0/24, then you don’t want to live migrate it to a host that is connected to 10.0.1.0/24. You could do that, but the virtual machine would be unavailable to clients… unless the destination host was configured to use Hyper-V Network Virtualization, but that’s a whole other article!


Update Hyper-V Hosts

Hyper-V, as you probably know by now, is being introduced more and more as a virtualization host (a "host" is a physical computer/server that runs a virtualization product, and which is used to run multiple virtual machines, also called "guests"). Because Hyper-V is based on a Windows 2008/R2 operating system, we need to pay close attention to updating Hyper-V with the patches, bug fixes, security fixes, and critical updates that are released by Microsoft. Also remember that patches and updates can come from any number of software products (and not just from Microsoft), such as backup agents, drivers and firmware, as well as management, monitoring, and anti-virus software.

Before updating the host we need to consider several key issues.

How important is patch management for virtual hosts?

Virtual hosts are computers running Microsoft-based operating systems (naturally, there are other options such as VMware-based or Xen-based hosts, but I do not discuss these in this article). Virtual machine host updates are just as important as keeping any Windows-based operating system up to date, which in turn will help maintain a stable and secure virtual host environment.

Coordinating the right time to apply patches

The coordination of your host patches is important. Your design objective should be to follow host patch management best practices with as few disruptions as possible to your most critical VMs, mostly because:

1. Some (but not all) of the Hyper-V updates might need a reboot of the host. This means that you will need to find the right point in time to do that. You can work around this by implementing Hyper-V Quick Migration, which allows you to move guest VMs to another Hyper-V host with little guest interruption. In Windows Server 2008 R2, you can use Live Migration, which allows you to move guest VMs to another Hyper-V host without any guest downtime or interruption to service.

2. When applying patches, some might require that the guest VMs be in a shutdown state when the patches are applied. This means that you cannot put these VMs in a saved state for faster resuming. To determine VM status requirements, read the patch's release notes.

3. Sometimes, for some patches, the guest VMs might need to also be updated. For example, Service Pack 2 required that the guest VMs update the Integration Components (IC) for Hyper-V. Again, read the patch's release notes for more information.

4. Some patches might cause issues with either the VMs or the host itself, resulting in a longer than planned downtime. So far, this hasn't been the case with Windows Server 2008 Hyper-V patches, but if you recall VMware's Update 2 for ESX/ESXi and the fiasco that followed, then you must be aware of that potential issue. Hopefully, we won't see a similar issue with Microsoft's updates.

Reduce the number of needed patches by using Server Core or Hyper-V Server 2008

In Windows Server 2008 and R2, the Server Core installation provides a minimal environment for running specific server roles, which reduces the maintenance and management requirements and the attack surface for those server roles (read more about Server Core on my Understanding Windows Server 2008 Server Core and Installing Windows Server 2008 Core articles).

Besides regular Server Core, you can opt to use Microsoft Hyper-V Server 2008/R2. Hyper-V Server is a slimmed-down Server Core installation of Windows Server 2008, stripped down even further than regular Server Core, with only the functionality specific to running the Hyper-V role. The benefit of these small, slimmed-down versions is their reduced attack surface. Because fewer components are installed on the system, fewer patches are needed for the virtualization host.

Another benefit of using Server Core or Hyper-V Server instead of the full installation of Windows Server 2008/R2 is lower resource usage by the host itself, leaving more CPU and memory for the guest VMs.

However, there are some tradeoffs with Server Core or Hyper-V Server, mostly related to the lack of GUI-based management tools and a higher learning curve. You can look for additional Server Core articles on the site for more information on how to easily manage this type of installation.

Reduce downtime by using Failover Clustering

As noted above, implementing Live Migration will greatly reduce the downtime of your virtual machine guests due to host maintenance and patching. By implementing Live Migration, you will be able to seamlessly move VM guests from one host to another in the datacenter, without ANY perceivable downtime to the VMs or to the data, applications, and/or services installed on them.

As a tradeoff, implementing Failover Clustering has some considerations you need to take into account, mostly due to the fact that it requires the Enterprise or Datacenter edition of Windows Server 2008 R2, plus the introduction of storage devices if you do not have those already. However, if service level is a concern and if downtime of hosts and VMs is closely monitored, then Failover Clustering is the answer.

Performing the actual update

Installing the updates or patches is usually pretty easy. In Windows Server 2008/R2 running in full installation mode, installing the updates is usually done by using either Windows Update or Windows Server Update Services (WSUS).


As noted above, when selecting the automatic updates options, make sure you consider the fact that, if a critical update is detected, the Hyper-V host will download it and install it that night, at 3 AM. This means that the guest VMs might need to be shut down if the host needs to reboot. This means moments of downtime for applications, data or services that are located on those guest VMs.


To help you mitigate the downtime, you might consider configuring the VM behavior when the host is shut down...


And what happens to the VM when the host restarts.


You can also install updates by manually downloading the .MSU files and installing them yourself.
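
For example, a single update can be installed silently from an elevated prompt (the .msu file name below is a placeholder, not a real KB):

  # Install an update package quietly and suppress the automatic reboot
  wusa.exe C:\Updates\Windows6.0-KB000000.msu /quiet /norestart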

Description of the Windows Update Stand-alone Installer (Wusa.exe) and of .msu files in Windows Vista and in Windows Server 2008

http://support.microsoft.com/kb/934307/en-us

Msiexec (command-line options)

http://technet.microsoft.com/en-us/library/cc759262(WS.10).aspx

On Server Core installations, because there is no GUI to work with, you can use several methods to install updates. These are listed in my Installing Windows Updates on Windows Server 2008 R2 Core article.

For a good place to start looking for these updates and patches, you can use these two links:

Comprehensive List of Hyper-V Updates

http://technet.microsoft.com/en-us/library/dd430893(WS.10).aspx

Hyper-V Update List for Windows Server 2008 R2

http://technet.microsoft.com/en-us/library/ff394763(WS.10).aspx

In conclusion, working with virtualization does not exempt you from taking care of the patching and updating of the virtual hosts. Failing to do so might introduce security and functionality issues to your system, which is why it's important to update Hyper-V.

Manually Migrating a VM Between Hyper-V Hosts

Manually moving VMs between Hyper-V hosts means that you need to perform a manual (or scripted) export of each VM you wish to migrate, copy the files to the target Hyper-V host, and then import the VM on that host. This process takes time because it requires you to wait until the exporting process is done before you can copy the VHD and other VM files to the target host. Once copied, you need to manually import the VM, which doesn't take as much time as the export operation, but still requires time. In addition, the migrated VM will experience a prolonged downtime, due to the fact that it must be shut down on the original host, exported in the shut down state, copied across the network to the target host, then imported and booted.

Note that this article only deals with Hyper-V R2, and not with the RTM version of Hyper-V. Therefore, there may be changes in functionality, and if you're using the RTM version (isn't it time to upgrade already?), you may find that some of the options listed here are not available to you.

Considerations before performing the migration

When you want to manually move a virtual machine from one Hyper-V host to another, you must use the "Export" option on the source VM, and then the "Import Virtual Machine" option on the target machine. There are basically two things you should do to make this import process go smoothly:

  1. Make sure the same Virtual Network is defined on the import host. This ensures the imported VM will connect to the correct network on the target host. Without doing this, you will receive an error when attempting to import the VM (although this can be ignored, and then fixed by manually setting the network on the VM's Settings page).
  2. Make sure you use processors from the same manufacturer on the source and target hosts. If the CPUs are from the same manufacturer but not from the same type, you may need to use Processor Compatibility. More about this in a future article.

In addition, remember that the export-copy-import procedure will take time. Sometimes it will take a lot of time, depending on the size of the VM and the type of underlying network connection. During all this time, the VM will be turned off and inaccessible to clients.

Exporting the VM

The Import and Export functions are accessible through the Hyper-V Manager. The virtual machine export process itself is actually really simple to perform.

First, make sure the VM is shut down, or save its state. You cannot export a VM unless it's shut down or in a saved state.

Once the VM is shut down or in a saved state, the "Export" option (or link) will be available.

Simply select the virtual machine from the Hyper-V Manager, and then click on the "Export" option (or link).

After doing so, Windows will display the Export Virtual Machine dialog box. You will need to point the export directory to the exact location that you want to import from. Note that exporting the VM will NOT alter the existing VM in any way. Its files will not be deleted or moved, and you will still be able to use it if you want to.

The export process will gather all the VM-specific files into the target folder, creating a folder with the exact name of the VM. BTW, this is a nice way to actually get all these files into one place (snapshots files, VHD files, XML configuration files) instead of having to guess where they are located. (Note: you CAN manually set these file locations during the creation and configuration of the VM if you want to).

Depending on the size of the virtual machine (more specifically, the size of the virtual disks), the export process can take a while to complete. Small VMs will take anywhere from 5 to 15 minutes to export, while large VMs using big VHD files might take considerably longer. Remember, during all this time the original VM must be turned off, so the downtime may be substantial.

Copying the VM to the target host

Once the export is done, you'll have all the files ready in the target folder. Now it's up to you to move the files contained within that folder to the target virtualization host. This can be done by various methods, from copying them across the network, to grabbing them all on a portable disk and physically moving them to their target host.

Note: When you import the virtual machine, its physical location on the host server becomes permanent. Moving the virtual machine is no longer an option unless you export it again. Therefore it is important to place the VM files on the desired volume before you import it.

Importing the VM

When the copying process is over, you will need to import the VM. Open the Hyper-V Manager and click the Import Virtual Machine link. You should now be prompted to enter the virtual machine’s path.

Note: In Hyper-V RTM, once a VM was imported you could no longer re-import it. This was fixed in R2.

In Hyper-V Manager on the target host, click on the server name and select "Import Virtual Machine".

Browse and locate the top parent folder of the copied exported machine.

Once selected, click on the "Import" button for the process to begin.

The import will be almost instantaneous, since the files are already in place. Hyper-V just needs to create the VM configuration based on the data from the exported VM. Once imported, you can start the VM.

Remember never to turn on both VMs (the source and the target) at the same time, because you might get all sorts of conflicts. Make sure you permanently shut down (or better, remove) the source VM, unless there is a reason to keep it (such as creating templates from it).

Total downtime for this procedure depends on the size of the VM's VHD file(s), the speed of the copy operation, and the speed of your manual (or scripted) import operation.
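
If you prefer to script the whole export-copy-import procedure, here is a minimal sketch. Note that the Export-VM and Import-VM cmdlets shown below ship with the Hyper-V PowerShell module in Windows Server 2012 and later; on Hyper-V R2 itself you would drive the same steps through Hyper-V Manager or WMI. The VM name, share, and paths are placeholders:

    # On the source host: stop the VM and export it to a staging folder
    Stop-VM -Name "TestVM"                        # the VM must be off or saved
    Export-VM -Name "TestVM" -Path "D:\Exports"

    # Copy the exported folder to the target host (placeholder UNC path)
    Copy-Item -Path "D:\Exports\TestVM" -Destination "\\TARGETHOST\D$\Imports" -Recurse

    # On the target host: import the copied VM and start it
    Import-VM -Path "D:\Imports\TestVM\Virtual Machines\<GUID>.xml"
    Start-VM -Name "TestVM"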


Using Quick Migration to Migrate a VM Between Hyper-V Hosts

With quick migration, you can rapidly migrate a running virtual machine from one physical host system to another with minimal downtime.

Using Windows Server Hyper-V or Windows Server 2008 with Hyper-V, and the quick migration capability, you can consolidate physical servers while maintaining the availability and flexibility of business-critical services during scheduled maintenance, or quickly restore services after unplanned downtime.

In order to use this method you must install System Center Virtual Machine Manager 2008 (preferably R2, and this is the version I will be referring to from now on), and use it to perform the migration.

When used on a failover cluster, Quick Migration is an automatic process:

For a planned quick migration, the operation saves the state of a running guest VM (memory of original server to disk/shared storage), moves the storage connectivity from one physical host to another, and then restores the guest VM to the second host (disk/shared storage to memory on the new server). When migrating the VM, there will be some downtime for the migrated VM. The length of this downtime is related to the amount of RAM that is configured for the VM and the speed of the network subsystem.

In the case of unplanned downtime, the system cannot save the state of the workload and running VM. Instead, the entire VM would be failed over to another host in the cluster, automatically, and there it will be booted from a cold state.

Important: You do NOT need to have a failover cluster of 2 or more Hyper-V hosts to allow Quick Migration. However, without deploying a failover cluster, there will be no solution for unplanned server failure, and the entire process will need to be a manual process. If you do decide to use a failover cluster, you must use Windows Server 2008 Enterprise or Windows Server 2008 Datacenter editions.

Note that this article only deals with Hyper-V R2, and not with the RTM version of Hyper-V. Therefore, there may be changes in functionality, and if you're using the RTM version (isn't it time to upgrade already?), you may find that some of the options listed here are not available to you.

Also, I will not deal with the automatic failover feature of Quick Migration, and instead I will use it to move a VM from one host to another with as little downtime as possible for this type of migration.

Considerations before performing the migration

When you want to move a virtual machine from one Hyper-V host to another, you must use the built-in migration option in SCVMM. There are basically two things you should do to make the migration go smoothly:

Make sure the same Virtual Network is defined on the target host - this means it will connect the moved VM to the correct network on the target host. Without doing this, you will need to manually set the network on the VMs' Settings page.

Make sure you use processors from the same manufacturer on the source and target hosts. If the CPUs are from the same manufacturer but not from the same type, you may need to use Processor Compatibility. More about this in a future article.

Performing the migration

Open the SCVMM 2008 R2 console, and expand Virtual Machines > All Hosts. Find the host on which the VM is running.

The VM that you're about to move does not need to be turned off or in a saved state. It can be running, servicing users and applications. However, remember that there will be some minimal downtime while the migration takes place, so users connected to that machine might experience symptoms of a machine temporarily losing network connectivity. For most applications, the timeout of the application is longer than the downtime, so users will not need to reconnect to the machine.

Right-click the VM and select "Migrate". Note that this command will either do a Quick Migration or a Live Migration, based upon the system's settings. In this case, because I do not have Failover Clustering in place, it will do a Quick Migration.

You will get a message telling you that there's going to be a temporary downtime for the VM. Click "Yes".

You'll be presented with a list of all your Hyper-V hosts (and, in fact, VMware ESX/ESXi hosts as well, which I do not have in place for this example). Note how SCVMM automatically generates performance ratings based upon the CPU, disk, memory, and network loads of the target hosts. Select the host you wish to migrate the VM to and click "Next".

Select the path of the VM on the target host. Ideally, this should be identical to the path on the source host, though it does not have to be. Make sure you've got enough disk space. Click "Next".

If networking is properly configured, the target network will be automatically selected and identical to the source network. If not, you'll need to manually select it (more on this in a future article). Click "Next".

Examine the summary and, when satisfied, click "Move". Note that you can also copy the PowerShell script used to perform the migration and reuse it in the future. Clicking "Cancel" at this point cancels the process without making any changes.
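
That generated script is built around the VMM snap-in's Move-VM cmdlet. A simplified, hypothetical version (the server, VM, host, and path names below are placeholders) might look roughly like this:

    # Connect to the VMM server, then look up the VM and the destination host
    Get-VMMServer -ComputerName "vmmserver.contoso.com"
    $vm     = Get-VM -Name "TestVM"
    $vmHost = Get-VMHost -ComputerName "hyperv02.contoso.com"

    # Move the VM; without a failover cluster this results in a Quick Migration
    Move-VM -VM $vm -VMHost $vmHost -Path "D:\VMs"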

The Jobs window will open showing you the completion percentage of the job.

Back in the SCVMM console, you can see a progress bar. Note how the status is "Under Migration".

During this time, the VM is actually running with no downtime yet. In the background, SCVMM creates a snapshot (or more precisely - a checkpoint) of the VM. This causes all new hard disk I/Os to be written to a temporary AVHD snapshot file. Then, the VM's VHD files are copied to the target host. This takes some time, depending on the size of VHDs and on the network speed.

Notice how, during this time, the VM is still active; however, looking at the Networking tab in Task Manager, we can see that there is some massive network activity going on.

If you ping the machine during this time, you will receive a reply (unless ICMP replies were disabled in the firewall).

The moment the VHD copying is over, the VM state will be saved, meaning there can be no more writing to the AVHD file.

The VM's state is then sent to the target host. This will take several seconds, depending on the amount of activity, network speed and so on.

When the transfer of the saved state is over, we see how the VM is restored from its saved state on the target host.

During that time, we see a temporary loss of connectivity, but it soon comes back online when the VM is fully loaded from the saved state.

Bingo. The virtual machine is now up and running on the target host.

Moving a Virtual Machine Between Hyper-V Hosts

A virtual machine (or VM) is an entity that is stored on one virtualization host, and that runs an instance of an operating system and uses resources that are presented to it by the virtualization software on that host. In Hyper-V, a VM uses a set of configuration files, and one or more virtual disks (in the format of VHD files).

There may come a time when there is a need to move one or more VMs between virtualization hosts. I will not go into a debate about why this needs to be done; I will just mention that the most common reasons are:

  • To free up a virtualization host for maintenance tasks that may require a reboot
  • To free up resources on the virtualization host
  • To re-arrange VM placement

Before moving or migrating a VM from one host to another, one should carefully consider these issues:

  • Is there downtime involved when moving the VM?
  • How long is the downtime?
  • How long does the actual migration take?
  • Is performance of other VMs and the virtualization host and/or network affected by the migration process?

While other potential issues may play a key role in the migration planning, the main rule of thumb when moving a VM is this: the move should be as fast as possible, with little or no downtime for the VM, and with a limited effect on the performance of other VMs on that host or on the network subsystem.

Let's talk a bit about how you can migrate a virtual machine between 2 Hyper-V hosts.

Note that this article only deals with Hyper-V R2, and not with the RTM version of Hyper-V. Therefore, there may be changes in functionality, and if you're using the RTM version (isn't it time to upgrade already?), you may find that some of the options listed here are not available to you.

Basically, there are three possible ways to migrate a virtual machine between Hyper-V hosts:

By manually exporting and importing VMs

Manually moving VMs between Hyper-V hosts means that you need to perform a manual (or scripted) export of each VM you wish to migrate, copy the files to the target Hyper-V host, and then import the VM on that host. In order to export a virtual machine, you can simply right-click that VM within Hyper-V Manager and choose the "Export" option. On the new virtual host you want to import to, you can do the same process in reverse by choosing the "Import" option from the Action menu in Hyper-V Manager.

Performing a migration between hosts in this manner will ensure that all settings are copied along with the VHD file, and also that network adapter settings are not lost.

However, this process takes time because it requires you to wait until the export is done (unless the entire process is scripted), and only then copy the VHD and other files that make up the VM to the target host. Once copied, you need to manually import the VM (or, again, use a script to do it for you), which doesn't take as much time as the export operation but still requires time. In addition, the migrated VM will experience prolonged downtime, because it must be shut down on the original host, exported in the shut-down state, copied across the network to the target host, and then imported and booted.

To learn more about how to manually migrate a VM from one Hyper-V host to another, please read my Manually Migrating a VM Between Hyper-V Hosts article.

By using Quick Migration in SCVMM 2008

In order to use this method, you must install System Center Virtual Machine Manager 2008 (preferably R2, and this is the version I will be referring to from now on), and use it to perform the migration. You do NOT need to have a failover cluster of 2 or more Hyper-V hosts to allow Quick Migration. However, when migrating a VM, there will be some downtime for the migrated VM. The length of this downtime is related to the amount of RAM that is configured for the VM and the speed of the network subsystem. To learn more about how to migrate a VM from one Hyper-V host to another by using Quick Migration, please read my Using Quick Migration to Migrate a VM Between Hyper-V Hosts article.

By using Live Migration in SCVMM 2008

This method was introduced in Hyper-V R2. In order to use this method you must install System Center Virtual Machine Manager 2008 R2, and use it to perform the migration. Also, in order to allow for Live Migration, you must have at least 2 Hyper-V nodes configured in a failover cluster. The major benefit of Live Migration is the fact that it can be used to migrate VMs without any type of downtime to the VMs.

Microsoft Hyper-V Server 2008 R2 includes Live Migration and host clustering. Therefore, you do not need to buy the full featured Windows Server 2008 R2 operating system just to get Live Migration. However, remember that when using Hyper-V Server 2008 R2, each guest OS must have a valid license. When using the full featured Windows Server 2008 R2 operating system, you'll have the advantage of the various licensing benefits that are given to you by Microsoft.

  • Live Migration is supported on up to 16 node failover clusters.
  • For production deployment, up to 64 VMs per node are supported. Customers must plan for adequate capacity when a failover occurs and VMs from the failed host are brought online on different nodes of the cluster.
  • For Live Migration to work, all cluster nodes must have processors from the same processor vendor, for example Intel or AMD.
  • Live Migration or Quick Migration will work when moving VMs from an older processor to a newer processor. However, to migrate a VM from a newer processor to an older processor, the VM must be turned off and restarted after moving the VM.
  • A new piece of functionality is the "Processor Compatibility" option. It can be enabled for VMs to allow both Live Migration and Quick Migration from newer to older processors from the same manufacturer. The setting is under VM settings > Processor > "Migrate to a physical computer with a different processor version", and can also be scripted, as sketched below.
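
For hosts managed with the Hyper-V PowerShell module (Windows Server 2012 and later; on 2008 R2 the setting is exposed through the GUI or WMI), the same option can be toggled like this. The VM name is a placeholder:

    # Hide newer CPU features from the guest so it can migrate between
    # different processor generations from the same manufacturer
    Set-VMProcessor -VMName "TestVM" -CompatibilityForMigrationEnabled $true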

An upcoming article, "Using Live Migration to Migrate a VM Between Hyper-V Hosts", will feature in-depth coverage of migrating a VM from one Hyper-V host to another by using Live Migration.


Designing a Hyper-V Virtual Machine

When you specify a physical server, you typically figure out the requirements of the operating system and application; determine support and performance requirements; and configure the required storage, memory, processors, networking, and so on. This doesn’t really change with Hyper-V, or any virtualization platform for that matter. In this post I will discuss some of the things you should consider when configuring a virtual machine to run on Hyper-V.

Support for Virtualization

Will your desired guest operating system and software (a) run in a Hyper-V virtual machine and (b) support Hyper-V or virtualization? You cannot just assume that your software and OS will work and be supported.

Microsoft has listed the supported guest operating systems on TechNet. Note that several Linux distributions are listed at this time (this list continues to grow).

  • CentOS
  • Red Hat Enterprise Linux
  • SUSE Linux Enterprise Server
  • openSUSE
  • Ubuntu
  • Oracle Linux (thanks to a unique Oracle/Microsoft partnership)

“Supported” and “works” have very different meanings for Microsoft.

  • Supported: The above Linux distributions have support. This means Microsoft has the ability to engineer solutions to resolve any issue you might have.
  • Works: Lots of x86 or x64 operating systems work on Hyper-V. Any Linux with a modern kernel will even have the Hyper-V drivers built-in. However, Microsoft will not provide technical support for these unsupported guest OSs.

You next need to check the support statement for the software that you want to run. This is information you need to get from the software vendor. For example, Microsoft posts their information on the SVVP site and Oracle’s support statement for Hyper-V can be found here.

Note that support statements are sometimes more than just yes/no decisions. Some software, such as SharePoint or Exchange, has very detailed guidance on how to specify a virtual machine and which features of virtualization (not just Hyper-V) it supports.

Finally, you need to consider hardware. Can a Hyper-V virtual machine scale to your needs? For all but a handful of servers, Hyper-V can do it: 64 virtual processors, 1 TB RAM, 256 x 64 TB virtual hard disks, the ability to use iSCSI and Fibre Channel LUNs, and Shared VHDX. But peripheral hardware can be a challenge. There is no SCSI device pass-through, PCI virtualization, or USB pass-through. If special hardware is required, such as an ISDN card, then virtualization is not possible for that machine.

Configuring the Virtual Machine

In a traditional virtualization deployment, you will create each virtual machine according to the needs of the service that runs within it. In a cloud where self-service is the rule, you will need to create lots of hardware profiles (using the System Center Virtual Machine Manager (SCVMM) approach) and/or provide a means for tenants to customize virtual machines at or after the time of deployment. In the hosting world, this is a means of generating more revenue; you do want the tenant to consume and pay for more resources, after all!

Generation 1 or Generation 2 VM

Windows Server 2012 R2 Hyper-V added a new UEFI-based and optimized virtual hardware type called Generation 2. While it is supported by SCVMM 2012 R2, many customers might be conservative and stick with Generation 1. Alternatively, going with Generation 2 will prepare your VMs for whatever Microsoft has planned for the future. At this point, I have not formed an opinion on this decision. You might also need to consult your backup solution vendor about support.

Virtual Processors

Each virtual processor will consume resources from a logical processor (a core or a thread) on the host. Assign only enough virtual processors to keep the CPU run queue length in the guest OS at a reasonable level. Do not over-assign virtual processors; the unused vCPUs will needlessly occupy host logical processors and therefore starve other virtual machines of the ability to become active.

Some applications require that a virtual machine has complete ownership of half or a full logical processor on the host for each vCPU. You can use the Virtual Machine Reserve setting to accomplish this.
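
As a sketch of both settings using the WS2012 R2 Hyper-V module (the VM name and numbers are illustrative, not recommendations):

    # Give the VM 4 virtual processors and reserve 50% of each backing
    # logical processor, so the VM effectively owns half an LP per vCPU
    Set-VMProcessor -VMName "TestVM" -Count 4 -Reserve 50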

Memory

Do not assume that Dynamic Memory is supported. For example, the Exchange Mailbox role has restrictions on memory optimization on all virtualization platforms. There are two approaches to the Maximum RAM setting:

  • The default: This is set to 1 TB RAM. Memory will be assigned to the guest as required up to this limit or the limit of the host (probably encountered first). I really dislike this “elastic” setting; memory leaks in a guest OS plus grumpy tenants who refuse to pay will ruin your day… regularly.
  • Realistic: If you know a service requires a maximum of 8 GB RAM then set that as the maximum RAM of the VM. Allow the VM to balloon down when idle.

What about startup RAM? In production, I will set this to at least 1 GB. This is enough for a tenant to be able to install SQL Server without opening a helpdesk call. Ideally, I set startup RAM to whatever is required to get the (potentially) installed services working correctly.
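
Translated into the Hyper-V module's Set-VMMemory cmdlet, the "realistic" approach might look like this sketch (the VM name and values are examples only):

    # Dynamic Memory: start with 1 GB, allow ballooning down to 512 MB when
    # idle, and cap the VM at the 8 GB the service actually requires
    Set-VMMemory -VMName "TestVM" -DynamicMemoryEnabled $true `
        -StartupBytes 1GB -MinimumBytes 512MB -MaximumBytes 8GB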

Storage

Deploy storage in the VM as if you were planning LUNs in a server. Use the first disk for the operating system. If you plan to use Hyper-V Replica, then place the paging file in a second VHDX file (to allow exclusion from replication). All data should be stored in dedicated VHDX files on a SCSI controller. This gives more granular control and allows you to take advantage of WS2012 R2 online resizing of VHDX files.
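
A minimal sketch of that layout with the WS2012 R2 cmdlets (paths and sizes are placeholders):

    # Add a dedicated data disk on the SCSI controller so it can be resized online
    New-VHD -Path "D:\VMs\TestVM\Data01.vhdx" -SizeBytes 100GB -Dynamic
    Add-VMHardDiskDrive -VMName "TestVM" -ControllerType SCSI `
        -Path "D:\VMs\TestVM\Data01.vhdx"

    # Later, grow the data disk without shutting the VM down
    Resize-VHD -Path "D:\VMs\TestVM\Data01.vhdx" -SizeBytes 200GB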

Network

I always use the synthetic network adapter for production because the legacy NIC carries a performance cost. The legacy NIC is the only way to get PXE boot in a Generation 1 VM, but I only use that in a lab. I deploy production VMs from templates (with SCVMM) or from a generalized (Sysprep) VHDX file (without SCVMM).

I will usually only have one NIC in a VM; the NIC is connected to a virtual switch, and the virtual switch is connected to a NIC team (a minimal sketch follows the list below). There are rare exceptions:

  • A VM must connect to multiple physically isolated networks.
  • SR-IOV is being used and two NICs in the VM are required for NIC teaming.
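
Assuming a virtual switch named "External" already exists on top of the NIC team, connecting the VM's synthetic adapter is a one-liner (names are placeholders):

    # Attach the VM's synthetic network adapter to the external virtual switch
    Connect-VMNetworkAdapter -VMName "TestVM" -SwitchName "External"

    # Rare exception: a second adapter for SR-IOV guest NIC teaming
    # (SR-IOV itself must be enabled on the switch and adapter separately)
    Add-VMNetworkAdapter -VMName "TestVM" -SwitchName "External-SRIOV"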

Start Up and Shutdown

You can control and/or delay the automatic start-up of a virtual machine. I will automatically start a virtual machine only if it was running when the host was shut down. I try to delay the start-up by a few minutes. You can also stagger the start-up if you want to model service dependencies.

The shutdown action is more interesting. If you choose to place virtual machines in a saved state when the host is shut down, then Hyper-V will maintain a .BIN file with the VM to reserve disk space for the current amount of RAM in the VM. You should account for this disk space when planning physical storage. In WS2012 and later, Hyper-V won’t keep that .BIN file if you choose an alternative setting.
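
In WS2012 R2 these behaviors map to a handful of per-VM settings; here is a minimal sketch (the VM name and delay value are arbitrary):

    # Restart the VM only if it was running at host shutdown, wait two minutes,
    # and shut the guest down cleanly instead of saving state (so no .BIN file)
    Set-VM -Name "TestVM" -AutomaticStartAction StartIfRunning `
        -AutomaticStartDelay 120 -AutomaticStopAction ShutDown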