26 Sept 2013

Hyper-V Support in Windows 8

Virtual Machines (VMs) are great for a wide variety of tasks including running different operating systems or software configurations on a single machine. Windows 8 is the first Windows client operating system to include hardware virtualization support without the need for separate downloads or installs. This feature in Windows 8 is called Client Hyper-V. Client Hyper-V is the same technology as used in Windows Server 2012 Hyper-V, so you can move VMs from server to client, and won't need to re-learn how to use Hyper-V features and tools.

In this article I'll give a high-level overview of Client Hyper-V in Windows 8 and also guide you through the process of configuring Hyper-V and creating/running VMs.

What You'll Need to Run Hyper-V on Windows 8

In order to run Client Hyper-V on Windows 8 you'll need the following:

  • Windows 8 Pro or Enterprise, 64-bit
  • A 64-bit processor with Second Level Address Translation (SLAT)
  • BIOS-level hardware virtualization support
  • At least 4 GB of system RAM

If you are running 64-bit Windows 8 and meet the hardware requirements listed above you're equipped to give Client Hyper-V a try!
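If you're not sure whether your processor supports SLAT, Windows 8 can tell you directly: the built-in systeminfo tool reports a "Hyper-V Requirements" section at the end of its output. The values shown below are illustrative; yours will reflect your own hardware:

```
C:\> systeminfo

...
Hyper-V Requirements:      VM Monitor Mode Extensions: Yes
                           Virtualization Enabled In Firmware: Yes
                           Second Level Address Translation: Yes
                           Data Execution Prevention Available: Yes
```

If all four lines read "Yes", the machine can run Client Hyper-V.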

Setting Up Hyper-V

The first thing to do when getting ready to run Client Hyper-V is to make sure that hardware virtualization support is turned on in the BIOS settings. For my demonstration in this article I configured my HP Z820 desktop system to run Client Hyper-V. Below is a picture of the BIOS settings I configured on my HP Z820:

HP-Z820-Virtualization-Settings-BIOS-600

Once you have confirmed that hardware virtualization support is available and enabled, it's time to enable Hyper-V in the "Turn Windows features on or off" dialog which you can launch by typing "turn windows features" at the Start Screen and then selecting "Settings" in the right-hand pane.

Hyper-V Turn Windows features on or off
Hyper-V Enabled in Windows Features
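If you prefer the command line, the same feature can be enabled from an elevated PowerShell prompt. This is a sketch of the equivalent step; the feature name is standard on Windows 8 Pro and Enterprise:

```powershell
# Run from an elevated prompt; a reboot is still required afterwards
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
```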

If Hyper-V wasn't enabled previously, you'll need to reboot after applying this change. After enabling Hyper-V it's a good idea to configure networking for the Hyper-V environment. In order to support external network connections, you'll need to make sure that a virtual switch has been created and is functional. To get started with this, open the Virtual Switch Manager which you'll find on the Actions panel in the Hyper-V Manager (Type Hyper-V at the Start Screen to find the Hyper-V Manager).

Hyper-V Manager Blank

After clicking on "Virtual Switch Manager" in the Actions pane, ensure that "External" is highlighted, and then click on the "Create Virtual Switch" button.

Hyper-V Virtual Switch Manager Crop

If you have more than one NIC in your system, ensure that you have selected the NIC to be used for VM external network connections. Here are the settings that I used on my HP Z820:

Hyper-V Virtual Switch Manager Create Virtual Switch Crop
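For reference, the same external switch can be created from PowerShell. This is a sketch; "Ethernet" is a placeholder for whatever adapter name Get-NetAdapter reports on your machine:

```powershell
# Find the physical NIC to use for VM external connections
Get-NetAdapter

# Bind an external virtual switch to it, keeping host connectivity
New-VMSwitch -Name "External Switch" -NetAdapterName "Ethernet" -AllowManagementOS $true
```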

While there are many other options and features that you can configure, this is a good starting point and is all I needed to do to start creating and using VMs on Windows 8 Pro.

Creating VMs

Since Client Hyper-V is fully compatible with Hyper-V on Windows Server, you can use VMs and Virtual Hard Disks (VHDs) created on Windows Server Hyper-V machines. Creating a new VM is easy. In this section I'll outline the process of creating a VM from scratch using PXE boot from the network. You can also easily perform OS installs from USB drives or optical media like DVDs.

To create a VM you just click on "New Virtual Machine…" under "Actions" on the right panel in the Hyper-V Manager. When you do this, the "New Virtual Machine Wizard" will launch. The first task is to choose a VM name and optionally specify a path for the VM file:

Hyper-V New Virtual Machine Wizard 2 crop

Next, you'll decide how much memory to allocate. I chose to use the default of 512 MB to see how my VM would perform with default memory settings. I know that I can always change this later.

Hyper-V New Virtual Machine Wizard 3 crop

After that you'll need to select a virtual switch if networking is needed. In this case I chose the virtual switch that I created in the earlier steps outlined in this post.

Hyper-V New Virtual Machine Wizard 4 crop

The next step is to set up the VHD for the VM being created. Here you have the option to create a new VHD, use an existing VHD, or postpone VHD configuration until later:

Hyper-V New Virtual Machine Wizard 5

I chose to have a VHD created using the default settings. Note that you'll want to think about where the VHD file is stored during this step. I like to keep VM files and VHD files in the same directory for most configurations.
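The wizard steps above can also be expressed as a single PowerShell command. This is a sketch; the VM name, path and switch name are examples, and 127 GB matches the wizard's default dynamically expanding VHD size:

```powershell
New-VM -Name "TestVM" -MemoryStartupBytes 512MB `
       -NewVHDPath "D:\VMs\TestVM\TestVM.vhdx" -NewVHDSizeBytes 127GB `
       -SwitchName "External Switch"
```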

After clicking "Finish" there was one important step left, required to enable PXE boot from VMs: creating a Legacy Network Adapter in the VM settings. To do this, launch the settings dialog for the VM that needs network boot support, and then click "Add Hardware", which is the top item in the left pane.

Hyper-V VM Settings Add Legacy Adapter crop

All you need to do is click the "Add" button and then ensure that the proper virtual switch is selected. That's it! It only takes a couple of minutes to perform all of these steps once you know what to do. The VM was now ready for PXE boot and the installation of the OS.
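The legacy adapter step can be scripted as well. A sketch, assuming the VM and switch names used earlier in this post:

```powershell
# Legacy (emulated) adapters support PXE boot; the default synthetic ones do not
Add-VMNetworkAdapter -VMName "TestVM" -IsLegacy $true -SwitchName "External Switch"
Start-VM -Name "TestVM"
```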

After clicking the green "Start" button for your VM in the right pane of the Hyper-V Manager, you'll see the familiar PXE boot menu where you can press F12 for a network boot:

Hyper-V Network Boot

This works just like network booting from a physical machine. I used network boot to kick off a clean Windows 7 install as you can see here:

Hyper-V Windows 7 Install WinPE

Having VMs for different operating systems is great when you need to test software on different operating systems, run machine images that are isolated from the network, test upgrade scenarios, and many more activities. Just remember to save snapshots for key configurations so that you can revert machine state when needed.

Connecting to VMs on Windows 8

Once you have your VMs set up, there are two great options for interacting with and connecting to them: the Hyper-V Manager, or Remote Desktop Connection using Remote Desktop Protocol (RDP).

Using the Hyper-V Manager, you can control your VM state in the Virtual Machine Connection window, but you'll have a more limited experience (maximum screen resolution of 1600x1200, less seamless keyboard/mouse interaction, etc.).

Here's my clean Windows 7 VM running in the Hyper-V Virtual Machine Connection window:

Hyper-V Windows 7 Desktop

For comparison, I connected to this VM via Remote Desktop Connection on a display running WQHD resolution (2560x1440), which you can see here:

Hyper-V RDP Windows 7 WQHD

When using Remote Desktop with Hyper-V, the keyboard and mouse work seamlessly, just as they do in any other Remote Desktop session. The only downside is that you don't have the VM state management controls in this view. If you're running RDP on the same Windows 8 machine as your Hyper-V Manager, you can always switch between the Hyper-V Manager and the Remote Desktop session and have the best of both worlds.

For fun, I found an old Windows NT 3.51 VHD file that was last modified in 2003 and created a VM in the Hyper-V Manager to run it. Remember the "stabbing fingers" animation at the logon screen? Good times…

Hyper-V Windows NT 3.51 logon

There are many different ways to create VMs, and in this post I've illustrated how easy it is to get started with Hyper-V on Windows 8. There are a lot of powerful tools for managing Hyper-V on Windows 8 including the same PowerShell management capabilities that exist on Windows Server! If you want to know more, please refer to the resources below.

16 Sept 2013

Windows Server 2012 features

Opinion Microsoft's Windows Server 2012 is out. For many systems administrators, the question about this latest iteration of Microsoft's server family is not "What's new?" but "Why care?"

Server 2008 R2 is a great operating system, while Server 2012 bears the stigma of Metro and the Windows 8 controversy. But the answer to "why care" is simple: Server 2012 is as big a leap over 2008 R2 as 2008 R2 was over 2003.

Server 2012 comes with some great new features. It also refines previous versions of Server to bring it past the "never use version 1.0" stage and up to parity, in features and stability, with competing offerings.

In short, Windows Server 2012 kicks ass. Here are the top 10 reasons why.

1. SMB 3.0
SMB 3.0 is the crown jewel of Server 2012. It is far removed from its laughingstock predecessor CIFS. It supports multiple simultaneous network interfaces – including the ability to hot-plug new interfaces on the fly to increase bandwidth for large or complex transfers – and supports MPIO, thin provisioning of volumes and deduplication (assuming the underlying storage is NTFS).

SMB 3.0 also supports SMB Direct and remote direct memory access, the ability for appropriately kitted systems to move SMB data directly from one system's memory to the other, bypassing the SMB stack. This has enabled Microsoft to hit 16GBps transfer rates for SMB 3.0, a weighty gauntlet for any potential challenger to raise.

2. NFS 4.1
Microsoft's NFS 4.1 server is good code. Designed from the ground up, it is fast, stable and reliable. It makes a great storage system for heterogeneous environments and a wonderful network storage point for VMware servers.

3. iSCSI
With Windows Storage Server 2008, Microsoft first made an iSCSI target available. It eventually became an optional download from Microsoft's website for Server 2008 R2 and is now finally integrated into Server 2012 as a core component.

4. Hyper-V Replica
Hyper-V Replica is a storage technology designed to continuously replicate your virtual machines across to a backup cluster. It ensures that snapshots of your critical virtual machines, no more than 15 minutes old, are available over any network link, including the internet.

It replicates the initial snapshot in full – after that it sends only changed blocks – and it fully supports versioning of your virtual machines.

5. Hyper-V 3.0
Server 2012 sees Hyper-V catch up with VMware's mainstream. While objectively I would have to say that VMware retains the feature lead at the top end, when combined with System Center 2012, Hyper-V 3.0 will cheerfully handle two-sigma worth of use cases.

Microsoft is no longer an also-ran in the virtualisation space; it is a capable and voracious predator stalking the wilds of the data centre for new prey.

Microsoft's Hyper-V Server – a free Windows Core version of Hyper-V – is feature complete. If you have a yen to dive into PowerShell then you can run a complete 64-node, 8,000 virtual machine Hyper-V cluster without paying Microsoft a dime.

It takes a very special kind of masochist to do so – Microsoft is betting you will spend the money on System Center 2012 and it is probably right. System Center 2012 is amazing, even more so with the newly launched Service Pack 1.

Microsoft's focus on PowerShell and its decision to put price pressure on VMware with Hyper-V Server has opened up a market for third-party management tools such as 5Nine. These are not nearly as capable as System Center, but offer a great middle ground between "free but impossible to manage" and "awesome but too expensive". This emerging ecosystem should see Hyper-V's market share explode.

6. Deduplication
For years now, storage demand has been growing faster than hard drive density. Meeting our voracious appetite for data storage has meant more and more spindles, and more controllers, chassis, power supplies, electricity and cooling to keep those spindles spinning.

Deduplication has moved from nice-to-have to absolute must in recent years, and Microsoft has taken notice. Server 2012 supports deduplication on NTFS volumes – though tragically it does not work with CSV – and deeply integrates it with BranchCache to save on WAN bandwidth.
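Enabling deduplication on a volume is a short exercise in PowerShell. A sketch, assuming a Server 2012 box; the drive letter is an example:

```powershell
# Install the dedup feature, enable it on a data volume, and kick off a job
Add-WindowsFeature FS-Data-Deduplication
Enable-DedupVolume -Volume "D:"
Start-DedupJob -Volume "D:" -Type Optimization
Get-DedupStatus   # shows space savings once the job completes
```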

7. Cluster Shared Volumes
With Server 2012 Cluster Shared Volumes are officially supported for use beyond hosting virtual hard disks for Hyper-V. You may now roll your own highly available multi-node replicated storage cluster and do so with a proper fistful of best-practice documentation.

8. DirectAccess
DirectAccess was a neat idea but it was poorly implemented in previous versions of Windows. Server 2012 makes it easier to use, with SSL as the default configuration and IPSec as an option. The rigid dependence on IPv6 has also been removed.

DirectAccess has evolved into a reasonable, reliable and easy-to-use replacement for virtual private networks.

9. PowerShell
PowerShell 3.0 is an evolution rather than a revolution. Having more PowerShell cmdlets is not normally something I would care about. That said, the 2012 line of products marks a revolution in Microsoft's approach to server management.

Every element of the operating system, and virtually every companion server such as SQL, Exchange or Lync, is completely manageable through PowerShell. This is so ingrained that the GUIs are just buttons that call PowerShell scripts underneath.

PowerShell should be at the top of this list, but to make proper use of it, your Google-fu has to be strong. The official documentation is incomplete, Bing is still worthless for searching Microsoft's web estate, and the golden examples for making use of PowerShell lie in the blogs maintained by Microsoft's staff.

Once you have assembled the list of cmdlets you need – printed, laminated and guarded by a fire elemental as in days of old – you can make the 2012 stack of Microsoft software sing. Thanks to PowerShell, Microsoft is ready to take on all comers at any scale.
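A quick way to gauge the scale of what shipped is to list the Hyper-V module's cmdlets (counts vary slightly by version; Server 2012 ships more than 160):

```powershell
Get-Command -Module Hyper-V | Measure-Object
Get-Command -Module Hyper-V -Verb Get    # just the read-only ones
```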

10. IIS 8
IIS 8 brings Internet Information Services up to feature parity with the rest of the world, and surpasses it in places. More than a decade's worth of "you use Windows as your web server" jokes officially end here.

IIS 8 sports script precompilation, granular process throttling, SNI support and centralised certificate management. Add in an FTP server that finally, mercifully, doesn't suck (it even has functional login restrictions) and IIS 8 becomes worth the cost of the operating system on its own.

I have found Server 2012 to be worth the cost of the upgrade, even where I have the excellent Server 2008 R2 deployed. Given that I work with very limited IT budgets, that is a strong endorsement.

11 Sept 2013

Top five Windows 8.1 security features

As Microsoft prepares to release Windows 8.1 to manufacturing later this summer, the software maker has focused on the enterprise with its flagship desktop operating system. In an era when employees no longer exclusively use corporate-issued desktops and laptops, Windows 8.1 security is especially important. The following top five Windows 8.1 security features address the protection of devices and enterprise data.

Remote business data removal: This Windows 8.1 feature allows administrators to perform a partial wipe of PCs participating in bring-your-own-device (BYOD) programs.

Some devices running Windows 8.1 may not be owned by the company and may contain personal data that doesn't need corporate safeguards. Data can be classified as "corporate" or "user" to partition information that should or shouldn't be involved in wipe requests. Administrators can also classify data to be encrypted, as well as specify whether certain data should be removed from a device when the user's employment or contractual relationship with the company has ended. IT can also use the Exchange ActiveSync protocol to instruct Windows to wipe corporate data, either by destructive rewrites or by simply marking the data as "inaccessible" without deleting it.

Workplace Join: You can think of this Windows 8.1 feature as essentially a domain join "lite." In Workplace Join, a device owner subscribes his or her computer to a set of security policies that allow a Windows Server 2012 R2-based domain to control the presence of certain data and, perhaps most importantly, perform a limited wipe as described above.

Users who join a workplace can use their domain accounts to access published resources on the network, such as file shares and applications, without giving domain administrators total control over their device. Domain admins can now apply minimum standards for access to sensitive resources without allowing anonymous access with no control. Workplace Join attempts to strike a balance between information integrity and the sovereignty of personally owned devices.

Assigned Access: This new feature in Windows 8.1 is really designed for kiosk, call center or academic environments (and for home users on the consumer side). It locks a workstation down to one Windows Store-based application and actively prevents the user from accessing any other application or part of the system.

This unfortunately does not work with desktop-based applications, so its use in enterprise settings is currently pretty limited. As your organization develops Windows Store apps for internal use, you might find this feature compelling as a security solution.

Biometric Folder and Authentication Security: With this feature, you give your device the finger, and it lets you in. Well, a fingerprint, to be more exact. Users can control access to specific folders based on a fingerprint rather than a password or smartcard, which significantly increases total system security.

Windows 8.1 works with a variety of fingerprint readers, both traditional swipe and newer touch-based readers that can detect a live human being's fingerprint as opposed to an emulated print that may be used nefariously.

But the story extends to organizational control -- administrators can configure Windows 8.1 Pro and Enterprise editions so that fingerprint authentication is required before personal certificates are used in transactions, and they can also control access to Windows Store (originally Metro) apps.

Windows 8.1 Device Encryption: Much has been written about the easy-to-use and robust BitLocker drive-encryption feature, which was first introduced with Windows Vista and improved with each successive version of the operating system.

In the Windows 8.1 era, Windows RT tablets are automatically encrypted when a Microsoft account is used for login. And, for the first time, all editions -- Windows RT and Windows 8.1 core, Pro and Enterprise -- can use BitLocker device encryption.

On Windows 8.1 Pro and Enterprise devices, you get additional configuration flexibility through more Group Policy options than those that existed in Windows 8. In addition, devices that use Microsoft's connected standby feature -- which leaves devices almost off but maintains a persistent connection to the Internet -- have their data encrypted while at rest to prevent unauthorized access to unattended devices.

Windows Server on desktop hardware

The conventional wisdom about the server and desktop SKUs of Windows is that they're meant to be used on different kinds of hardware. That said, there's nothing stopping you from installing a server edition of Windows on desktop hardware.

As long as the computer in question meets the minimum hardware requirements for Windows Server, it will install and run. The question is: Why would you want to do this, and will it run well?

Benefits of Windows Server on a desktop

So why install Windows Server on desktop hardware? Even in this age of virtual machines (VMs), there are a number of reasons that come to mind. The most common is the simple availability of the hardware. Desktop machines are cheap and plentiful, and repurposing yesterday's desktop as today's server (albeit a low-traffic one) is one way to keep such a machine out of a landfill. Plus, it's sometimes more convenient to have a server, especially an experimental one, running on its own hardware rather than in a VM.

Given all this, what kind of desktop system would be suitable for running Windows Server? Microsoft lists the system requirements for Windows Server 2012 as follows:

    A 1.4 GHz 64-bit processor
    512 MB of RAM
    32 GB of disk space
    Optical drive
    Keyboard, mouse, and 800x600 or better display hardware
    Internet connectivity

None of this is out of reach even for desktops that are a few years old. Windows Server doesn't require multiple cores, for instance, so even a single-core processor can be used.

Don't expect the same performance

Even if the baseline requirements for Windows Server are still quite low, a number of other issues come up that are specific to server environments:

Desktop systems generally don't support multi-socket configurations. If you're doing anything that requires multiple sockets (as opposed to multiple cores), don't expect desktop hardware to do the job. Multiple sockets did show up in the earlier days of high-end workstations, but they have since been eclipsed by single-socket/multi-core configurations.

Don't expect single-core systems to perform well as servers. If you're repurposing a low-end desktop (low-end by today's standards, that is) that has a single-core processor, don't expect anything like true server performance. Almost every server application needs multiple cores to run well.

NUMA isn't supported in desktops either. Non-uniform memory access (NUMA) and hot-pluggable memory aren't things you see in a desktop setting. If you're doing anything that requires NUMA or is designed to test it, odds are you won't be able to run it properly on desktop hardware.

Desktop storage is definitely not server storage. Desktop 7200 RPM drives are no competition for 10K RPM drives, let alone multidrive arrays. The exception to this rule is if you're using desktop-grade flash storage: While it doesn't provide a lot of space, it does provide suitably snappy I/O.

Networking on desktops is not designed for server loads. It's tempting to think a NIC is a NIC is a NIC, but there are real differences between the NICs designed for servers and those for desktops. If you use an add-in NIC that was designed for server use, that will help a bit, but keep in mind that there may be plenty of other bottlenecks in the system that slow things down.

Microsoft virtualization technology may not work. Microsoft's Hyper-V hypervisor, integrated into Windows Server, has specific hardware requirements. Some desktop-level CPUs may not have the processor extensions that Hyper-V needs. What's more, a desktop-class machine may not be able to support the sheer amount of memory needed to run Hyper-V well. If you're running more than one VM in Hyper-V, it helps to have more than 4 GB of RAM to throw at the problem. The older the desktop-class system, the lower the physical limit for memory that can be added to the machine.

The key thing to keep in mind if you want to repurpose desktop hardware for server use is what the application will be. A desktop system can work fine as a file-and-print server, a low-volume database server or perhaps as a Web server for in-house programs such as SharePoint. Don't expect to use such a machine for anything that will satisfy the kind of demand you would put on a "real" server.

2 Sept 2013

Virtual memory settings in Hyper-V Dynamic Memory

Dynamic Memory in the upcoming Hyper-V R2 Service Pack 1 removes the guesswork from memory allocation. While VMware's memory overcommit feature assigns virtual memory automatically, Hyper-V Dynamic Memory lets you adjust virtual memory settings yourself. These settings place important constraints on the host's behavior and enable you to allocate memory with greater precision and ease.

With Hyper-V Dynamic Memory, the host automatically rebalances memory among virtual machines (VMs) on a moment-by-moment basis. Memory is pooled on one physical host, then it is dynamically distributed to VMs as necessary. But there's a problem with this memory allocation: Hyper-V assigns the memory in one-second intervals.

In computer time, one second is a remarkably long period. In a second or two, a VM's memory requirements can change drastically. During this interim, a host can remove memory from a VM that suddenly requires more. For this reason, Hyper-V Dynamic Memory includes virtual memory settings -- known as Memory Buffer and Memory Priority -- to control a host's behavior and improve virtual memory management.

The Memory Buffer setting provides a VM with greater memory capacity than it needs. The Memory Priority setting allows you to indicate which VMs should receive memory first if a shortage occurs. You can find both controls in the virtual memory settings of any VM in the box marked Memory Management.
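On Hyper-V versions that include the PowerShell module, both settings can also be changed from the command line. A sketch with example values; Buffer is a percentage, and Priority a relative weight from 0 to 100:

```powershell
Set-VMMemory -VMName "TestVM" -DynamicMemoryEnabled $true `
             -StartupBytes 512MB -MinimumBytes 512MB -MaximumBytes 4GB `
             -Buffer 20 -Priority 80
```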

The Memory Buffer setting

The Memory Buffer is essential to Hyper-V's virtual memory settings. It reserves additional space on each VM, which the machine uses if RAM requirements change between one-second intervals. Without a buffer, an increase in memory requirements during that one second forces a VM into an out-of-memory condition, and memory pages are swapped to disk. You never want swapping to disk to happen: it drastically drains performance as the computer trades high-speed memory access for comparatively low-speed disk access.

The Memory Buffer setting is configured on a per-VM basis. Memory Management features a slider for increasing and decreasing the percentage of memory that's reserved as a buffer. This additional memory scales with whatever amount a Hyper-V host has assigned to that VM at that second.

So, for example, let's say you configured a VM to reserve 10% of memory as a buffer. Let's also say that at a particular moment this VM reports that it needs 1,000 MB of memory. In this case, the Hyper-V host actually assigns 1,100 MB to the VM. A few seconds later, a VM might report that it now needs 1,500 MB of memory. Then, the Hyper-V host will assign it 1,650 MB. Remember, a host reserves memory according to whatever percentage you assign in virtual memory settings.

Obviously, adding more reserved memory decreases the chance that a VM needs to swap to disk should a change in memory requirements occur. At the same time, a larger buffer results in wasted memory. The reserved memory is always available, so it sits unused unless a VM needs it. The setting lets you reserve anywhere from 5% to 95%, giving you a wide range of options. A good rule of thumb for this virtual memory setting is to start small. You can always inch the slider upwards if you observe your VM regularly swapping to disk.
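The buffer arithmetic is simple enough to sketch as a hypothetical helper function (the names are mine, not Hyper-V's):

```powershell
function Get-AssignedMemory {
    param([int]$DemandMB, [int]$BufferPercent)
    # Host assigns the VM's demand plus the configured buffer percentage
    [int]($DemandMB * (1 + $BufferPercent / 100))
}

Get-AssignedMemory -DemandMB 1000 -BufferPercent 10   # 1100
Get-AssignedMemory -DemandMB 1500 -BufferPercent 10   # 1650
```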

The Memory Priority setting

Another important tool for virtual memory management is the Memory Priority configuration. Hyper-V Dynamic Memory constantly rebalances the memory, but sometimes there's not enough memory to meet every VM's needs. Perhaps you powered on one too many machines. Or a VM might suddenly need a huge amount of memory to process a certain thread.

When such memory contention occurs, the Memory Priority setting allows you to prioritize VMs. You essentially give Hyper-V an ordered list of VMs from which it should pull memory first. Lower-priority VMs lose memory before higher-priority VMs during memory allocation.

If the host consults this virtual memory setting, that means you're in a low-memory situation. The host consults Memory Priority only when the total memory available for distribution has run out. So the vast majority of the time, Hyper-V won't consult it. But if it does, any VM that has its memory reduced will be forced to page to disk, which significantly reduces performance. You should organize this list to protect your highest-priority VMs, but also configure your Hyper-V hosts so that they don't need to use Memory Priority much, if at all.

These virtual memory settings can ease virtual memory management and help troubleshoot memory contention problems. You can play with the Hyper-V Dynamic Memory settings to determine which configurations best fit your memory allocation needs.

Monitor Virtual Memory

Allocating and monitoring virtual memory can be a challenge in server virtualization environments, but there are different ways to address these issues. The Dynamic Memory feature in the upcoming Microsoft Hyper-V R2 Service Pack 1 includes virtual memory settings to help administrators allocate memory with greater precision.

Hyper-V Dynamic Memory makes virtual memory management even easier by providing constraints for the host's memory allocation behavior. To help you monitor virtual memory, Hyper-V's Manager Console provides information about the memory levels of each virtual machine (VM).

With Hyper-V Dynamic Memory, the host simply adds memory to VMs that need it and removes memory from those that don't. This memory allocation process works great -- as long as there's enough memory to go around. But if a Hyper-V host runs out of memory, it's forced to remove memory from VMs that need it. This puts the affected VM in an out-of-memory condition and memory pages are swapped to disk, which drastically drains performance. Fortunately, the Hyper-V Manager Console includes two new reports to help you monitor virtual memory settings and avoid a negative memory state.

Monitoring virtual memory in Hyper-V

The Manager console keeps you informed of VM memory levels. You can use the Current Memory and Memory Available columns to monitor virtual memory and prevent VMs from experiencing negative memory. These columns show the quantity of memory assigned to each VM at any point in time. Because Hyper-V rebalances memory among VMs every second, you'll generally see the values in these columns update at around that rate.

The Current Memory column gives you the absolute value of assigned memory. This value increases as a VM's memory requirements increase, and decreases as they decrease.

Slightly more interesting is the column marked Memory Available. Recall that the Hyper-V Dynamic Memory virtual memory settings let you reserve a percentage of additional memory as a memory buffer on each VM, in case the memory requirements change between one-second intervals. In a healthy system, the memory available value should be close to the percentage you've set for the VM's memory buffer. This is because Memory Available reports the percentage of memory currently available to the machine as compared to the total memory assigned to the VM at that moment.
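On Hyper-V versions with the PowerShell module, the same figures are exposed as properties on each VM object. A sketch:

```powershell
# MemoryAssigned and MemoryDemand are in bytes; MemoryStatus flags low-memory states
Get-VM | Select-Object Name, MemoryAssigned, MemoryDemand, MemoryStatus
```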

As you can imagine, it's possible for this value to drop below zero. When Memory Available shows a negative value, it means that the host has given a particular VM less memory than it needs.

That's when a VM will page to disk to keep running. To prevent this negative memory state, your choices are limited. Your first option is to power down one or more VMs. Powering them down frees the memory balancer of each VM's memory requirements, possibly providing the host with enough total memory to move those values into the positive. Another option with similar results is to perform a live migration of a VM to another host. In the virtual memory settings, you can also limit the maximum amount of memory one or more VMs can reserve by reducing their configured maximum RAM values. A fourth option is to just add more physical RAM to the host.

Hyper-V Dynamic Memory can indeed create these painful situations, but they are more likely to happen when you haven't planned or monitored your infrastructure carefully. Hyper-V Dynamic Memory makes memory allocation more precise with its virtual memory settings, but you need to monitor virtual memory levels to make sure your hosts always have enough memory to go around.

Microsoft Hyper-V Snapshot

Snapshots are an important part of a Hyper-V administrator's toolkit. A Hyper-V snapshot allows an admin to roll back to a previous state, undoing changes and potentially saving recovery time. However, if you don't know how to use them properly, you could be setting yourself up for disaster and inadvertently slowing VM performance. Before you start taking virtual pictures of your own, make sure you know when and how to use a Hyper-V snapshot.

What is a Hyper-V snapshot?

While it seems like a simple question, defining a Hyper-V snapshot can be tricky. The confusion stems from inconsistent terminology and naming conventions, an all-too-common problem in IT lingo. The most important thing is not to confuse a Volume Shadow Copy Service (VSS) snapshot with a Hyper-V snapshot. While both are used to restore a virtual machine (VM) to a prior state, they work in fundamentally different ways: VSS operates at the block level of the file system and backs up only disk information, while a Hyper-V snapshot captures both the disk and memory state of a VM by creating a separate automatic virtual hard disk (.avhd) file to track changes. Unlike VSS snapshots, Hyper-V snapshots aren't the answer to backups or disaster recovery. Instead, you should use them as a troubleshooting option, should a configuration change or patch update cause problems.

How do Hyper-V snapshots work, and what are they used for?

When you create a Hyper-V snapshot, you're actually creating a differencing disk. A differencing disk stores changes, or differences, that would otherwise be written to the original virtual hard disk. As the VM continues to change, the differencing disk grows in size. VSS snapshots differ because they create a copy of the disk image at a specific time. Think of a Hyper-V snapshot like an "undo" option, rather than an independent backup.
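To make the differencing-disk idea concrete, here is a minimal sketch. It is a toy copy-on-write model, not the real .avhd format: writes after the snapshot land in an overlay, reads fall through to the untouched parent disk, and the differencing layer grows only as blocks change.

```python
# Toy model of a differencing disk (not the actual .avhd format):
# writes after a snapshot land in an overlay; reads check the overlay
# first and fall through to the parent (base) disk.

class DifferencingDisk:
    def __init__(self, parent: dict):
        self.parent = parent     # base virtual hard disk, frozen at snapshot time
        self.overlay = {}        # blocks changed since the snapshot

    def write(self, block: int, data: bytes):
        self.overlay[block] = data          # the parent is never modified

    def read(self, block: int) -> bytes:
        return self.overlay.get(block, self.parent.get(block, b""))

    def size(self) -> int:
        # The differencing disk grows with the number of changed blocks.
        return len(self.overlay)

base = {0: b"boot", 1: b"config-v1"}
snap = DifferencingDisk(base)
snap.write(1, b"config-v2")                 # change made after the snapshot
print(snap.read(1))                         # b'config-v2' (from the overlay)
print(snap.read(0))                         # b'boot'      (from the parent)
# Reverting the VM amounts to discarding the overlay and reading the parent.
```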

Hyper-V snapshots are extremely useful for quickly reverting a VM back to a previous state, after an administrator realizes that an update failed or that a configuration change caused unexpected problems. The downside is that, if left unchecked, those expanding differencing disks can slow VM performance and fragment a hard drive.

How do I take a Hyper-V snapshot?

Many admins choose to create snapshots right before making any potentially risky change to a VM's state. Creating a Hyper-V snapshot from the Hyper-V Manager in the Microsoft Management Console is straightforward: right-click on a VM and select Snapshot. The time it takes will depend on many factors, including the size of the VM and the disk's overall I/O load.

It's important to delete and merge older snapshots, whose differencing disks otherwise continue to grow and hurt performance. In Windows Server 2008 R2, completely deleting a snapshot means shutting down the VM.

What changed with Hyper-V snapshots in Windows Server 2012?

Snapshots created in Windows Server 2012 can be merged and completely deleted without shutting down the VM. This live merging of snapshots saves administrators time and aggravation by letting them update VMs without a second downtime window (one to perform the update and another to delete the snapshot). The new feature brings Hyper-V snapshots more in line with VMware snapshots, which do not require shutting down a VM to merge snapshots.

VMware and Hyper-V virtual machine disaster recovery

What qualifies as a disaster can be defined widely, but in this article it refers to any risk of service outage due to hardware, software or environmental failure and the process of managing that outage.

Specifically, this article will cover virtual machine (VM) disaster recovery using the VMware vSphere and Microsoft Hyper-V platforms.

Within their products, VMware and Microsoft provide the ability to cater for multiple disaster scenarios other than total site loss.

Most virtual machine disaster recovery (DR) products need additional hardware at the local or remote sites, and in some cases require shared storage. With some careful planning, administrators can integrate these products into virtual server designs that provide effective business continuity and so mitigate the impact of failure.

Disaster recovery basics – backup and replication

Ensuring business continuity typically takes one of two forms:
  • Data and system backups to tape or disk that enable the recovery of entire systems onto new hardware, either rebuilt or at a new location
  • Real-time replication of data to a new location with hardware ready and waiting, using replication technology and a wide area network. Unsurprisingly, data replication is a more expensive option and is typically reserved for important production systems.

In the pre-virtual world, backup was achieved using backup software agents installed onto each server, with backup data taken offsite manually or written offsite across the network.

Replication was typically done at the storage array, using technologies such as SRDF from EMC or TrueCopy from Hitachi, but it was also possible to replicate data at the server or application, using, for example, Oracle's Data Guard.

Array-based replication works better in large-scale environments where the complexity and time required to restore individual servers means recovery time objectives (RTO) cannot easily be met.

Virtual machine backup and replication

Server virtualisation introduces new challenges in implementing disaster recovery policies.

The traditional backup process doesn't work well for virtual server backup. A virtual environment uses shared hardware, such as network and storage ports, and achieves its cost savings precisely because most server hardware is underutilised.

During the backup window in traditional environments, the aim is to back up data as quickly as possible, and that means using all the network capacity available. The result is that traditional backup methods can cause bottlenecks and performance issues in virtual deployments.

Data replication has similar challenges. The recommended configuration for both vSphere and Hyper-V involves creating large volumes (or LUNs) within the storage array and storing multiple VMs on each of them.

This means servers on the same LUN are grouped together for DR purposes, as only an entire LUN is replicated and failed over by the storage array. Administrators therefore have to think carefully through any array-based data layout to strike a compromise between space utilisation and disaster recovery flexibility.

Adding intelligence – hypervisor-based solutions

Hypervisor suppliers have recognised the issue of managing DR for virtual environments and added features to their products to address these issues.

First we'll talk about VMware features, then those of Hyper-V.

VMware, VADP and VDP

vStorage APIs for Data Protection (VADP) is an API framework that provides a set of features for managing virtual machine backups. It supersedes VCB (VMware Consolidated Backup), an earlier VMware backup product, and is an integral part of the hypervisor itself.

VADP allows backup software suppliers to interface with a vSphere host and back up entire virtual machines, either as a full image or incrementally, using Changed Block Tracking (CBT).

CBT provides a high level of granularity in tracking the changes applied to a virtual machine, in a similar way to traditional backups that look for changed files.
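The idea behind changed block tracking can be sketched in a few lines. This is a toy model for illustration only, not VMware's actual API; the class and method names are invented:

```python
# Toy model of Changed Block Tracking (CBT): the hypervisor records which
# blocks were written since the last backup, so an incremental backup
# copies only those blocks instead of the whole disk image.

class TrackedDisk:
    def __init__(self, blocks: int):
        self.data = [b"\x00"] * blocks
        self.changed = set()                 # dirty-block map kept by the tracker

    def write(self, block: int, value: bytes):
        self.data[block] = value
        self.changed.add(block)

    def incremental_backup(self) -> dict:
        # Copy only the blocks written since the previous backup,
        # then reset the dirty map ready for the next cycle.
        delta = {b: self.data[b] for b in sorted(self.changed)}
        self.changed.clear()
        return delta

disk = TrackedDisk(1024)
disk.write(7, b"A")
disk.write(42, b"B")
print(list(disk.incremental_backup()))   # [7, 42] -- only two blocks copied
print(list(disk.incremental_backup()))   # []      -- nothing changed since
```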

VADP can also integrate with VSS (the Volume Shadow Copy Service) on Windows Server 2008 and upwards, ensuring guest consistency during the backup process, rather than the standard "crash copy" style of backup where no synchronisation takes place.

VDP, or vSphere Data Protection, is VMware's virtual appliance for backups. This uses EMC Avamar to store backups on disk, taking advantage of features such as data deduplication to improve space utilisation.

Many third-party suppliers also support VADP, including Symantec, with both its NetBackup and Backup Exec product lines; Veeam; CommVault; Arkeia; HP, with Data Protector; and EMC, with Avamar and NetWorker.

VMware Fault Tolerance

Fault Tolerance is a vSphere feature that ensures virtual machine availability in the event of a hardware failure. Fault Tolerance works by maintaining a second "shadow" copy of a virtual machine, which is continually kept in-sync and up to date with the primary.

In the event of a disaster, such as the loss of hardware or power on the primary system, Fault Tolerance automatically fails over to the secondary copy with no downtime or outage.

Fault Tolerance is best suited to implementing local DR recovery, where the outage does not affect all of the local systems and where a recovery point objective (RPO) of zero is required.

This could mean implementing two sets of hardware, separated by physical and power boundaries, for example. Extending the primary and secondary systems over any great distance could, however, introduce issues of latency in keeping the secondary copy up to date.

VMware Replication

vSphere's Replication feature uses Changed Block Tracking to replicate data to a remote site for disaster recovery purposes. Data is moved at the virtual machine level (the VMDK) and so is independent of the underlying storage, which makes it a good solution for replicating between different array types, where the DR site is deployed on less expensive hardware.

Replication is implemented using a dedicated virtual appliance at the source site, plus replication agents on each VM in the replication process. This makes it more invasive than an array-only replication solution.

Replication can be used in conjunction with VMware's SRM (Site Recovery Manager) to provide a comprehensive DR management solution that covers the process of failover and failback in the event of a DR incident.

VMware Replication alternatives

There are a number of third-party alternatives to the native VMware Replication feature, all using the same underlying interface.

Zerto offers Virtual Replication, a product that provides the ability to fully manage the disaster recovery process of virtual machines, including replicating into public cloud as a DR target. Meanwhile, VirtualSharp, acquired by PHD Virtual Technologies, provides DR Assurance via its PHD Virtual Reliable DR solution. This enables testing and validation of disaster recovery scenarios on a regular basis to ensure configurations are correct at the time of an actual disaster.

Hyper-V Cluster Shared Volumes

In Windows Server 2008 R2, Microsoft introduced the concept of Cluster Shared Volumes: shared storage volumes that are accessible by all Hyper-V nodes in a cluster. In the event of a node failure, another node in the cluster can take over the failed node's virtual machines.

Cluster Shared Volumes are good for local DR, where hardware can be separated into physical and power failure domains, but are not well suited to long-distance DR because of the latency any extended distance would introduce.

Hyper-V Replica

With the release of Windows Server 2012 and Hyper-V 3.0, Microsoft introduced a new feature called Replica. This allows asynchronous replication of a virtual machine over distance to a secondary site.

In the initial release, the replication interval is fixed at five minutes, but with the release of Windows Server 2012 R2, due this year, the interval will be user configurable.
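The practical consequence of the replication interval is the worst-case recovery point. The sketch below shows the generic arithmetic; the interval values are the 30-second, 5-minute and 15-minute options announced for Windows Server 2012 R2, and the function name and transfer-time figure are invented for illustration:

```python
# Sketch of the relationship between an asynchronous replication interval
# and the worst-case data loss (recovery point). The arithmetic is generic,
# not taken from Hyper-V Replica internals.

def worst_case_rpo_seconds(interval_s: int, transfer_s: int = 0) -> int:
    """If a disaster strikes just before a replication cycle completes,
    everything written since the previous completed cycle is lost:
    roughly one interval plus the time to transfer the changes."""
    return interval_s + transfer_s

# 30 s, 5 min and 15 min intervals, assuming changes take ~60 s to transfer.
for interval in (30, 300, 900):
    print(interval, worst_case_rpo_seconds(interval, transfer_s=60))
```

The takeaway: a shorter interval buys a tighter recovery point at the cost of more frequent replication traffic.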

In addition, Microsoft will add extended replication support, enabling replication to a third site. This makes it possible to keep one replica close by and another at a greater distance, providing more DR flexibility.