Once Replica is enabled, the source host starts to maintain an HRL (Hyper-V Replica Log file) for the VHDs. Every write by the VM = one write to the VHD and one write to the HRL. Ideally, and this depends on bandwidth availability, this log file is replayed to the replica VHD on the replica host every 5 minutes. This is not configurable. Some people will see the VSS snapshot timings (more on those later) and get confused, but the HRL replay should happen every 5 minutes, no matter what.
Replays normally take place every 5 minutes, but sometimes the bandwidth won't be there. Hyper-V Replica can tolerate this. If the replay hasn't happened after 5 minutes, you get an alert. The replay then has up to another 25 minutes (30 in total, including the original 5) to complete before replication goes into a failed state where human intervention is required. This means that, with replication working, a business could lose between 1 second and nearly 1 hour of data.
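If you would rather script a quick health check than click through the console, the Hyper-V PowerShell module exposes the replication state. A minimal sketch, assuming the Windows Server 2012 Hyper-V cmdlets and an example VM name of FS01:

```powershell
# Replication configuration and current state for one VM
Get-VMReplication -VMName "FS01"

# Replication statistics and health over the recent monitoring window
Measure-VMReplication -VMName "FS01"

# If replication has dropped into a failed state that needs human intervention,
# resuming with a resynchronisation is the usual manual fix
Resume-VMReplication -VMName "FS01" -Resynchronize
```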
Most organisations would actually be very happy with this. Novices to DR will proclaim that they want 0 data loss. OK; that is achievable with EUR100,000 SANs and dark fibre networks over short distances. Once the budget face smack has been dealt, Hyper-V Replica becomes very, very attractive.
That's the Recovery Point Objective (RPO – amount of time/data lost) dealt with. What about the Recovery Time Objective (RTO – how long it takes to recover)? Hyper-V Replica does not have a heartbeat. There is no automatic failover. There's a good reason for this. Replica is designed for commercially available broadband that is used by SMEs. This is often phone network based, and these networks have brief outages. The last thing an SME needs is for their VMs to automatically come online in the DR site during one of these 10 minute outages. Enterprises avoid this split brain by using witness sites and an independent triangle of WAN connections. Fantastic, but well out of the reach of the SME. Therefore, Replica requires manual failover of VMs in the DR site, either by the SME's employees or by a NOC engineer in the hosting company. You could simplify/orchestrate this using PowerShell or System Center Orchestrator. The RTO will be short but has implementation-specific variables: how long does it take to start up your VMs and for their guest operating systems/applications to start? How long will it take for you to get your VDI/RDS session hosts (for remote access to applications) up, running and accepting user connections? I'd reckon this should be very quick, and much better than the 4-24 hours that many enterprises aim for. I'm chuckling as I type this; the Hyper-V group is giving SMEs a better DR solution than most of the Fortune 1000s can realistically achieve with oodles of money to spend on networks and storage replication, regardless of virtualisation products.
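As a hint of what that PowerShell orchestration could look like, here is a minimal sketch of an unplanned failover run on the replica host. The VM names are invented for the example, and this is a sketch of the flow rather than a production runbook:

```powershell
# Run on the replica host once you have decided the primary site is gone
$vmNames = "DC01", "FS01", "APP01"   # example VM names only

foreach ($name in $vmNames) {
    # Bring the replica copy online from the most recent recovery point
    Start-VMFailover -VMName $name -Confirm:$false
    Start-VM -Name $name
}

# Once you are happy the failed-over VMs are healthy, commit the failover
# (this discards the retained recovery points for each VM)
$vmNames | ForEach-Object { Complete-VMFailover -VMName $_ -Confirm:$false }
```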
To answer a question I expect to be common: there is no Hyper-V integration component for Replica. The mechanism works at the storage level, with Hyper-V intercepting and logging storage activity.
Replica and Hyper-V Clusters
- Standalone host to cluster
- Cluster to cluster
- Cluster to standalone host
The tricky thing is the configuration replication and smooth delegation of replication (even with Live Migration and failover) of HA VMs on a cluster. How can this be done? You can enable an HA role called the Hyper-V Replica Broker on a cluster (once only). This is where you can configure replication, authentication, etc., and the Broker replicates this data out to the cluster nodes. Replica settings for VMs will travel with them, and the Broker ensures smooth replication from that point on.
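For reference, the Broker is just a clustered role, so it can also be created with the FailoverClusters PowerShell module. A rough sketch with example names and an example static IP; the resource type string is my recollection of the built-in type, so verify it (and the exact syntax) on your own cluster before using it:

```powershell
# Create a client access point (name and IP are examples) to host the Broker role
Add-ClusterServerRole -Name "HVR-Broker" -StaticAddress 192.168.1.50

# Add the Hyper-V Replica Broker resource to that group and make it depend
# on the client access point
Add-ClusterResource -Name "Hyper-V Replica Broker" `
    -ResourceType "Virtual Machine Replication Broker" -Group "HVR-Broker"
Add-ClusterResourceDependency "Hyper-V Replica Broker" "HVR-Broker"

# Bring the role online
Start-ClusterGroup "HVR-Broker"
```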
Configuring Hyper-V Replica
Here are the fundamentals:
On the replica host/cluster, you need to enable Hyper-V Replica. Here you can control which hosts (or all hosts) can replicate to this host/cluster. You can do things like have one storage path for all replicas, or create individual policies based on the source FQDN, such as per-source storage paths or enabling/pausing/disabling replication.
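That replica-side configuration can also be scripted. A minimal sketch, assuming the Hyper-V module and Kerberos authentication; the FQDN and paths are placeholders, and the parameters are worth checking with Get-Help before relying on them:

```powershell
# Run on the replica host. Option A: allow any authenticated server in the forest
# and use one storage path for all replicas.
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation "D:\Replicas"

# Option B: only allow named source hosts, each with its own policy.
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $false
New-VMReplicationAuthorizationEntry -AllowedPrimaryServer "host1.contoso.com" `
    -ReplicaStorageLocation "D:\Replicas\Host1" -TrustGroup "DefaultGroup"
```

On the source side, enabling replication on a VM then walks you through per-VM choices like these: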
- Authentication: HTTP (Kerberos) within the AD forest, or HTTPS (destination provided SSL certificate) for inter-forest (or hosted) replication.
- Select VHDs to replicate
- Destination
- Compressing data transfer: with a CPU cost for the source host.
- Enable VSS once per hour: for apps requiring application-consistent recovery points. Not normally required because of the logging nature of Replica, and it does cause additional load on the source host.
- Configure the number of replicas to retain on the destination host/cluster: Hyper-V Replica will automatically retain X historical copies of a VM on the destination site. These are actually Hyper-V snapshots on the destination copy of the VM that are automatically created/merged (remember we have hot-merge of the AVHD in Windows 8), with the obvious cost of storage. There is some question here regarding application support of Hyper-V snapshots and this feature. A scripted example of these per-VM settings follows below.
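To show how those options hang together, here is a hedged sketch of enabling replication for one VM from the source host. The VM name, replica FQDN, port and retention values are examples only, and the parameter names should be confirmed with Get-Help Enable-VMReplication:

```powershell
# Run on the source host: enable replication for one VM (names/values are examples)
Enable-VMReplication -VMName "FS01" `
    -ReplicaServerName "replica1.contoso.com" `
    -ReplicaServerPort 80 `
    -AuthenticationType Kerberos `
    -CompressionEnabled $true `
    -RecoveryHistory 4 `
    -VSSSnapshotFrequencyHour 1
# -ExcludedVhdPath can be added to leave particular VHDs out of replication
```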
Initial Replication Method
- Over-the-wire copy: fine for a LAN, if you have lots of bandwidth to burn, or if you like being screamed at by the boss/customer. You can schedule this to start at a certain time.
- Offline media: You can copy the source VMs to some offline media and import it at the replica site (see the sketch after this list). Please remember to encrypt this media in case it is stolen/lost (BitLocker To Go), and then erase (not format) it afterwards (DBAN). There might be scope for an R2/Windows 9 release to include this as part of a process wizard. I see this being the primary method that will be used. Be careful: there is no timeout for this option. The HRL on the source site will grow and grow until the process is completed (at the destination site by importing the offline copy). You can delete the HRLs without losing data; it is not like a Hyper-V snapshot (checkpoint) AVHD.
- Use a seed VM on the destination site: Be very, very careful with this option. I really see it as being a great one for causing calls to MSFT product support. This is intended for when you can restore a copy of the VM in the DR site, and it will be used in a differencing mechanism where the differences will be merged to create the sync. This is not to be used with a template or similar VMs. It is meant to be used with a restored copy of the same VM, with the same VM ID. You have been warned.
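For the offline media option, the flow can be scripted as well. A rough sketch, assuming the Hyper-V cmdlets and example paths (E: on the source host, F: on the replica host):

```powershell
# On the source host, after enabling replication for the VM:
# write the initial copy to removable media instead of sending it over the wire
Start-VMInitialReplication -VMName "FS01" -DestinationPath "E:\ReplicaSeed"

# Ship (and encrypt) the media, then on the replica host import the seed to
# complete the initial replication; normal HRL replay takes over from there
Import-VMInitialReplication -VMName "FS01" -Path "F:\ReplicaSeed"
```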
And that's it. Check out social media and you'll see people saying how easy Hyper-V Replica is to set up and use. All you need to do now is check the status of Hyper-V Replica in the Hyper-V Management Console, in Event Viewer (Hyper-V Replica log data is in the Microsoft-Windows-Hyper-V-VMMS\Admin log), and maybe even monitor it when there's an updated management pack for System Center Operations Manager.
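If you want to pull that log data from a script rather than Event Viewer, something like the following should do it. The channel name here is my rendering of the log mentioned above for Get-WinEvent, so verify it on your own hosts:

```powershell
# Most recent Hyper-V VMMS admin events, which include the Hyper-V Replica entries
Get-WinEvent -LogName "Microsoft-Windows-Hyper-V-VMMS-Admin" -MaxEvents 50 |
    Select-Object TimeCreated, Id, LevelDisplayName, Message |
    Format-Table -AutoSize -Wrap
```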
Failover
- Planned: You are either testing the invocation process or the original site is running but about to become unavailable. In this case, the VMs start in the DR site, there is guaranteed zero data loss, and the replication policy is reversed so that changes in the DR site are replicated back to the now offline VMs in the primary site.
- Unplanned: The primary site is assumed offline. The VMs start in the DR site and replication is not reversed. In fact, the policy is broken. To get back to the primary site, you will have to reconfigure replication.

Can I Dispense With Backup?

No, and I'm not saying that as the employee of a distributor that sells two competing backup products for this market. Replication is just that: replication. Even with the historical copies (Hyper-V snapshots) that can be retained on the destination site, we do not have a backup with any replication mechanism. You must still do a backup, as I previously blogged, and you should have offsite storage of the backup.

Many will continue to do off-site storage of tapes or USB disks. If your disaster affects the area, e.g. a flood, then how exactly will that tape or USB disk get to your DR site if you need to restore data? I'd suggest you look at backup replication, such as what you can get from DPM.
How Much Bandwidth Will I Need?

A few suggestions for working that out:

- Set up a proof of concept with a temporary Hyper-V host in the client site and monitor the link between the source and replica: there's some cost to this, but it will be very accurate if monitored over a typical week.
- Do some work with incremental backups: Incremental backups, taken over a day, show how much change is done to a VM in a day.
- Maybe use some differencing tool: but this could have negative impacts.
- Asymmetric broadband (ADSL): The customer claims to have an 8 Mbps line, but in reality it is 7 Mbps down and 300 kbps up. It's the uplink that is the bottleneck, because you are sending data up the wire. Most SMEs aren't going to need all that much. My experience with online backup verifies that, especially if compression is turned on (which will consume source host CPU).
- How much bandwidth is actually available: monitor the customer's line to see how much of the bandwidth is already being consumed by existing services. Just because they have a functional 500 kbps upload, it doesn't mean that they aren't already using it. A rough worked example follows below.
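To put rough, purely illustrative numbers on that: a 300 kbps uplink can shift at most about 0.3/8 = 0.0375 MB per second, which is roughly 3.2 GB per day if nothing else touches the line. If incremental backups suggest a VM changes about 2 GB per day, that link could in theory keep up, but at around two-thirds utilisation before you add email, web browsing or online backup, which is exactly why measuring the real change rate and the real free bandwidth matters.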