31 Jan 2013

The best virtual server storage

Implementing a virtual server environment can help small- and medium-sized businesses (SMBs) consolidate IT resources and create a more flexible infrastructure. Choosing the right primary storage for your virtual server environment involves balancing performance, capacity, ease of use and cost.

Whether you plan to use VMware Inc.'s vSphere, Microsoft Corp.'s Hyper-V or Citrix Systems Inc.'s XenServer virtualization platform, you'll need a storage system with enough performance and capacity to support the flexible environment virtual servers allow, yet one that can be managed by a small IT staff and fits an SMB budget. In this tutorial on virtual server storage, you will learn how to choose the best virtual server storage for your organization, the pros and cons of Fibre Channel vs. iSCSI, and about the other choices you have available.

GETTING STARTED WITH VIRTUAL SERVER STORAGE

"When planning out the storage side of any virtual environment, determine what kinds of systems you are going to virtualize because a lot of people don't take into consideration the performance and capacity requirements of those systems before building their VM [virtual machine] environments," said W. Matthew Ushijima, a senior consultant for GlassHouse Technologies Inc. "It is important to consider your performance needs from both hardware and software perspective."

Jeff Boles, a senior analyst and director of validation services for the Taneja Group, said that admins should conduct a thorough assessment of current I/O patterns, and then use analysis tools such as Virtual Instruments NetWisdom to determine physical environment storage and performance needs before moving to a virtual environment. Ushijima has also seen Solarwinds Inc.'s Storage Profiler and Aptare Inc.'s StorageConsole Virtualization Manager used to analyze capacity and performance needs.

"Don't virtualize and think you are going to make do with less storage in the process," Boles said. "When you get into more complex applications and more detailed planning, it's best to consult with your application vendors," Boles said.

Editor's Tip: For more information about managing storage in a virtual environment, check out sister site SearchStorage's podcast on storage management tools for virtual environments.

STORAGE ARCHITECTURE

For most environments, networked storage and a multipath architecture will be base requirements. Boles said networked storage will allow you to take full advantage of the flexibility benefits of virtual environments. "You want something that doesn't get in the way of that flexibility," Boles said. "You can move [virtual servers] back and forth between different pieces of hardware, re-provision them, copy them, all that kind of stuff, a lot more easily than in the physical world. You really need shared storage behind the virtual server infrastructure to take advantage of all its capabilities."

A multipath architecture -- having numerous redundant links between servers and storage so the data flow can be broken up into multiple streams and sent over separate links -- is necessary to keep your storage system available in case of a path failure. "In any shared storage environment, I consider [a multipath architecture] to be mission critical," Ushijima said. "I don't care if you are running four servers or 4,000. The last thing you want to do is have a server go down because it dropped its storage connection. The issue only multiplies exponentially for each server you add into a virtual environment."

FIBRE CHANNEL VS. iSCSI

Fibre Channel (FC) is a high-speed technology (up to 8 Gbps) that can be deployed over large distances, but requires relatively expensive equipment and specific administrator knowledge. iSCSI operates over ubiquitous Ethernet technology, but is subject to Ethernet's speed limitations. While 1 Gigabit Ethernet (GigE) is currently the most commonly deployed network technology, 10 Gigabit Ethernet (10 GigE) is available and offers roughly a tenfold increase in raw bandwidth. However, a 10 GigE system is more expensive to install and may not be cost-effective for SMBs.

Both Boles and Ushijima recommend assessing your performance needs and available resources to determine which transport system to employ. For SMBs, Boles says iSCSI is the overwhelming leader because of its use of Ethernet and the fact that administrators don't need to have specific FC training to deploy and manage it. "Hands down, if you're looking at a new storage system, and you're an SMB customer, you should be considering iSCSI today," Boles said.

Dell Inc. is no stranger to the SMB market and is a leader in iSCSI storage systems, according to Boles. The company offers iSCSI storage area network (SAN) systems from the EqualLogic PS4000E for SMBs and branch offices to the PS6010XVS, which offers 10 GigE connectors and super-fast solid-state drives (SSDs).

NETWORK-ATTACHED STORAGE

If you have a predominantly network-attached storage (NAS) environment, it's likely an entry-level file-based system will have more built-in horsepower than an entry-level block-based storage system, Boles said. So you might be able to consolidate more workloads with a NAS system. However, because NAS systems are more widely shared, it might be harder to get a detailed view into which systems are using network resources and what your performance needs are, Boles explained.

Multiprotocol systems that serve both file- and block-based data can be a good fit for SMBs and are now widely available. For example, NetApp Inc.'s Data Ontap operating system supports network file system (NFS) and Common Internet File System (CIFS) NAS protocols, as well as block-based iSCSI and FC. EMC Corp.'s Celerra Unified Storage Platform supports NFS, CIFS, iSCSI, and FC.

SHARED SERIAL-ATTACHED STORAGE

A newer technology that is moving into the conversation is shared serial-attached SCSI (SAS), an interface that lets multiple servers connect directly to SAS storage disk drives. Shared SAS systems offer simplified management and lower costs, but they are limited in distance, and your servers must have built-in SAS support or SAS adapters.

Dell is also a player in the shared SAS market. The PowerVault MD3200 SAS storage array has single or dual controllers and can scale up to 96 disk drives.

VMWARE VSTORAGE

If you plan on using the VMware vSphere hypervisor, consider storage arrays that have integrated VMware's vStorage technology and application programming interfaces (APIs).

VMware's vStorage Virtual Machine File System (VMFS) is a cluster file system that allows simultaneous read and write access, so multiple hosts can store and run their VMs on the same shared volume. Without VMFS, only one server at a time would be able to access a volume, increasing system latency and impairing productivity. Storage VMotion is another vStorage technology; it lets you migrate live virtual machine disk files (VMDK) between datastores for efficient storage I/O and capacity management.

VMware developed the vStorage APIs to improve the platform's integration with the feature-rich storage arrays and data protection products already on the market. "[The APIs] make use of some array side capabilities and optimize the communication and handling of data between the hypervisor and the array," said Boles.

The vStorage API for Array Integration (VAAI) offloads work from the hypervisor to the array. One example is VAAI Block Zero, which lets the array provision zero-data blocks when populating volumes instead of having the hypervisor create those blocks and send them over the wire. VMware also developed vStorage APIs for data protection -- including Changed Block Tracking, which identifies the changed blocks within VMDK files so only changed data has to be transmitted when VMDK snapshots cross the wire -- as well as for multipathing and storage replication adapters.

FROM THE PHYSICAL TO THE VIRTUAL

When you are ready to select a storage system for your virtual environment, there are tools available to help you prepare a physical-to-virtual (P2V) migration strategy. "There are a lot of products from a variety of vendors that provide tools to scan your existing infrastructure and provide the ability to do virtual assessments," Ushijima said. Neither VMware nor Microsoft offers free P2V planners, although VMware offers P2V Accelerator Services as part of its professional services products.

5nine Software offers a P2V planner for both VMware vSphere and Microsoft Hyper-V platforms. The planner collects data about existing hardware in your data center, its utilization, and application workloads to prepare physical-to-virtual migration plans that take into consideration hardware and business requirements, storage performance and capacity needs, costs, and return-on-investment. 5nine Software offers a free edition with limited reporting and workload and cost optimization capabilities.

"Look for solutions from a networked storage vendor that can give you functionality at the VM level," said Boles. "A lot of the time when it comes to networked storage, you end up just parking a whole bunch of VMs on a LUN or out on a storage system and you have to figure out how to manage your storage without breaking the VMs, and whether you should do things inside the VMs or inside the storage. It's often very hard to connect those two. So look for solutions from your storage vendor that let you do advanced storage operations with your VMs."

Advanced operations include taking individual VM snapshots and recovering from those snapshots without impacting other VMs. Also look for replication capabilities so you have disaster recovery options now or down the road. "You'll get the most bang for your buck that way and extend your storage capabilities," Boles said.

iSCSI storage system

What you will learn in this tip: More small- and medium-sized businesses (SMBs) are discovering iSCSI is a good fit for their organizations. In this iSCSI primer, learn about what you need to get started with your first iSCSI storage system.

Some of iSCSI's popularity in SMBs has to do with server virtualization. And, right now, fault tolerance for virtualization hosts is a big factor that is pushing smaller shops into checking out iSCSI. In a virtual data center, it's imperative to prevent host servers from failing. If a host server were to fail, it would take all of the virtual machines (VMs) residing on it down, too. Since a single host server can contain a dozen or more VMs, a host server failure typically results in a major outage.

So what does this have to do with iSCSI storage systems? Well, to prevent the types of outages that I just described, many organizations cluster their virtualization hosts. That way, if a failure were to occur, the virtual machines can continue to run on an alternate host. Although there are a variety of host clustering technologies, host clustering typically requires the use of a shared storage pool that is accessible to all of the hosts within the cluster. The shared storage pool isn't connected to the cluster nodes through a disk controller. Instead, the cluster nodes communicate with the storage pool over the network through the use of the iSCSI protocol.

Hardware requirements for iSCSI storage systems

The only firm hardware requirement for using iSCSI is that there must be TCP/IP connectivity between the remote storage pool and the computer that needs to connect to it. Beyond that, it is widely considered a best practice to route iSCSI traffic over a dedicated, high-speed network connection so that iSCSI traffic won't be delayed by other network traffic, but this is not an absolute requirement. If you use a dedicated network connection (which I highly recommend), then you should use as high a connection speed as possible. Faster network connections between your server and your storage pool mean lower latency, which is important, especially for I/O-intensive applications.

Software requirements for iSCSI storage systems

To establish iSCSI connectivity, you are going to need a special type of software. The iSCSI storage array is usually a collection of disk resources that is physically attached to a Windows or a Linux server. This server runs iSCSI target software. Just as the shared storage pool requires specialized software, so does the server that connects to it. To establish a connection to an iSCSI target, a server must run an iSCSI initiator.

Establishing iSCSI connectivity


Once you have the iSCSI initiator and the iSCSI target software, the next step is to establish iSCSI connectivity. The exact procedure varies depending on the software that you are using. However, these are the five basic steps that are usually required:
  1. Configure the iSCSI target to make disk resources available as iSCSI storage. On a Windows Storage Server, this means you must create a virtual hard drive and associate it with a specific iSCSI target (you can create multiple targets on a single storage server). When you run the iSCSI initiator, the software will assign the server an iSCSI Qualified Name (IQN).
  2. Document this name and configure the iSCSI target to allow connectivity from that IQN.
  3. Configure any firewalls between the server and the shared storage pool to allow iSCSI traffic to pass. iSCSI traffic usually flows over port 3260.
  4. Provide the iSCSI initiator with the IP address or the Fully Qualified Domain Name (FQDN) of the iSCSI target.
  5. Establish connectivity to the storage pool and map a drive letter.
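
On Windows Server 2012, these steps can also be scripted with the built-in iSCSI and Storage cmdlets. Here's a minimal sketch, assuming a hypothetical portal address of 192.168.1.50 and a target that has already been configured to accept your initiator's IQN:

# Point the initiator at the target portal (step 4)
New-IscsiTargetPortal -TargetPortalAddress "192.168.1.50"

# List the targets the portal exposes and note their IQNs
Get-IscsiTarget

# Connect to a target; -IsPersistent reconnects it after a reboot
Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.microsoft:example-target" -IsPersistent $true

# Bring the new disk online, partition it and assign a drive letter (step 5)
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -Confirm:$false
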
Setting up an iSCSI storage system is nowhere near as complex a task as some people make it out to be.

28 Jan 2013

How To Change User Password With PowerShell

Easy ways to change or reset a user's local or domain account password using PowerShell. You don't need any PowerShell modules; just built-in PowerShell will be used to change the password.

Why do anything with PowerShell when you can already use CMD?

An important concept to grasp is the ability to reuse code and process multiple items.

"I can already use the NET USER command from CMD.  Why use PowerShell?"

We want to use PowerShell for 3 reasons:
  1. It's the new way.  PowerShell creator Jeffrey Snover is now the Lead Architect for Windows Server 8.  His move into a role of such importance is the exclamation point on what PowerShell people have been saying since it came out: "Start using it or start getting left behind!"
  2. Reusability.  Even though our simple task of changing a user's password shouldn't be something that needs to be scripted and then reused, it could be.  And anything you do with PowerShell can be.  It's definitely one of the important reasons we want to use PowerShell when we can.  You never know at the start of a project that you won't want to reuse some part of it on a later project.
  3. Write once, process many.  Once we've got a command to change a password, there's no reason we can't use that command on multiple objects.  It's easy, and that philosophy is at the core of all automation.
Change every local user account on a computer to have the same password

Let's dive into an example to make a great point about reusability and the ability to handle multiple password changes with a single command.

First, we need to get a list of users whose passwords we want to change.  We'll use WMI for this task.

A great thing about using WMI for this task is that it can be used remotely or locally.  If you want to run this on a remote computer (appropriately named "remote-pc"), it's almost as easy -- no extra setup required (though firewall exceptions and working RPC connectivity matter).

How To Get a List of Local User Accounts with PowerShell:

$userlist  = get-wmiobject win32_useraccount

How To Get a List of Local User Accounts from Remote Computer with PowerShell:

$remoteuserlist = get-wmiobject win32_useraccount -ComputerName "remote-pc"
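
One caveat worth knowing: on a domain-joined machine, win32_useraccount can also return domain accounts. If you only want the local ones, a WQL filter narrows it down (a minimal sketch):

# Keep only accounts defined on this computer, not domain accounts
$userlist = get-wmiobject win32_useraccount -Filter "LocalAccount=True"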
 
Change Password using ADSI object

Secondly, we'll look at how to change a password using the ADSI object in .NET.

To get the ADSI object for a user, we need the name of the computer that the local account belongs to, and the user account name.  Conveniently, the list of user accounts we got from WMI holds both of these in a property called "Caption".  Too bad the Caption property uses a backslash ("\") and ADSI uses a slash ("/").  To get around that, I will use the string method Replace().  That part looks like this:

$userlist[0].caption.replace("\","/")

There are several options for telling the ADSI class what kind of account to connect to.  Two of those are WinNT (for local accounts) and LDAP (for domain accounts).  These are case sensitive.  Since we're looking for a local account, it will be in this format:

[adsi]"WinNT://local-PC/accountname"

Retrieving a domain user account is just as easy.  It's in the form:


How to Set a Password For a User Account

There are two options for creating the user account object: use it one time and then it's gone, or save it as a variable.

If you're going to do a password reset, and will not be making any other changes, you can use this simple method of creating your user object instance.  Just wrap the cast in parentheses; this ensures the object can be referenced and its methods run directly.  It looks like this:

([adsi]"WinNT://Remote-PC/AccountName").SetPassword("Shazbot!")

After this runs, the password for the account "AccountName" on the computer "Remote-PC" is set to "Shazbot!".  There is no further reference to the object in the script.

If you know the computer name, and the account name, this one liner will set your password:

([adsi]"WinNT://<Local or Remote Computer Name>/<Username>").SetPassword("<Password>")

On the other hand, if you're going to need to account for anything else, such as viewing or changing any other property on the account, then you should keep it as a variable.  It's very similar to the first method:

[adsi]$userVariable = "WinNT://<Local or Remote Computer Name>/<Username>"

Now you have a variable that represents the user account, and you can change the password using the same SetPassword() method used above.

$userVariable.SetPassword("Gilligan+Skipper=TrueLove4Ever")
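
A side note: SetPassword() is an administrative reset. ADSI user objects also expose ChangePassword(), which validates the current password and honors password policy -- the values below are placeholders:

# Change (rather than reset) a password; requires the current password
$userVariable.ChangePassword("<OldPassword>","<NewPassword>")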

How To Reset All Local User Accounts on a Computer to the Same Password

Here's a quick one-liner that sets all user accounts on a computer to have the same password.

Get-WmiObject win32_useraccount | Foreach-Object {([adsi]("WinNT://"+$_.caption).replace("\","/")).SetPassword("FluxCapacitor!11-5-1955")}

If that seems like gibberish to you, here's the translation:

Get the local user accounts from WMI; since we're not done with those objects, they are passed through the pipeline. They become the input for Foreach-Object, which applies everything in the script block to each individual user account. Inside the script block, the current user account is referenced by "$_". We build a string like "WinNT://computer/user" by switching the "\" in the account's "Caption" property to a "/" with the string method Replace. After the string is put together, [adsi] processes it and creates an ADSI reference to the real user account on the computer. Finally, the SetPassword method is called on that object, which sets the password to "FluxCapacitor!11-5-1955". This happens for each local user account, and then it's done.

25 Jan 2013

Configuring Hyper-V Replica in Windows Server 2012

This month, I want to talk about a new feature in Windows Server 2012 called Hyper-V Replica. This feature provides asynchronous, network-based replication of virtual machines (VMs) for disaster recovery purposes. If a disaster occurs at a primary site, productivity can be quickly restored by bringing up the replicated VM at the "replica" site, as you see in Figure 1.


One of the first points I want to bring up is that Hyper-V Replica is a disaster recovery solution and not a high availability solution. If the primary site goes down, manual intervention is needed to get the offsite VM replicas up. In a highly available solution (using multisite failover clustering), if the primary site goes down, the offsite VMs automatically come up without manual intervention. This is a question that we get quite a bit, so we wanted to clear up any misconceptions that you might have about what Hyper-V Replica offers.

Hyper-V Replica will track write operations on the primary VM and replicate these changes to the replica server every 5 minutes. The network connection between the two servers uses the HTTP or HTTPS protocol and supports both integrated and certificate-based authentication. So, at any point in time, the "replica" should be no more than 5 minutes behind. 

Prerequisites
Hyper-V Replica is affordable and doesn't require any complicated configuration. As long as both sites are running Server 2012 Hyper-V and have network connectivity between them, it's certainly a disaster recovery solution worth considering. Hyper-V Replica is available only in Server 2012 and not in the client (Windows 8) Hyper-V. To take advantage of Hyper-V Replica, you must meet the following prerequisites:

Your hardware must support the Hyper-V role in Server 2012.
Your primary and replica servers must have sufficient storage and physical memory to host the VMs.
You must have network connectivity between the locations hosting the primary and replica servers.
Properly configured firewall rules must permit replication between the primary and replica sites.
You must have an X.509v3 certificate to support mutual authentication with certificates (if desired or needed).
Set Up Hyper-V Replica
To set up Hyper-V Replica, you'll need to go into the Hyper-V settings in Hyper-V Manager. In the right-hand pane of the dialog box that Figure 2 shows, you'll see the Replication Configuration settings. By default, this configuration isn't enabled.


You'll need to enable replication on both servers, and the settings will need to match. The first option to consider is Authentication and ports. You have the option of using Kerberos (HTTP) over port 80 or certificate-based authentication (HTTPS) over port 443. These are the default ports, but you can change them if desired. If you change the ports, you'll also need to change the port numbers in the firewall rule (discussed later).

The other option is Authorization and storage. This setting determines which servers will be participating in the Hyper-V Replica. It also specifies the local folder where the replica files will reside. You can choose to use any authenticated server or specific servers. If you choose any server, you can specify a folder location here. If you choose specific servers, the Add button will let you specify a server and folder.

After you set everything up and click OK, a warning box states Inbound traffic needs to be allowed in the Firewall. There are two inbound firewall rules that are on a replica server: Hyper-V Replica HTTP Listener (TCP-In) and Hyper-V Replica HTTPS Listener (TCP-In). Depending on the Authentication and ports selection you make, the proper rule to enable will appear in the dialog box. These rules aren't automatically enabled. If you don't enable the rule, the primary server won't be able to make the connection to the replica server. If you can't make your connection as a replica, the firewall rule and the replica settings should be what you check first. The proper rule must be enabled, and the port it is using must also be configured. So, if you're using a firewall that comes with another product, you need to ensure that the port is open. The same will need to be set for any routers or gateways between the servers.
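
If you'd rather script this server-side setup, the Hyper-V module in Server 2012 exposes the same settings. A sketch, assuming Kerberos over the default port 80 and a hypothetical D:\Replicas folder:

# Enable the server as a replica target, mirroring the GUI settings above
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -KerberosAuthenticationPort 80 `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation "D:\Replicas"

# The inbound firewall rule still has to be enabled separately
Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"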
 
Configure VMs for Replication

Once you've configured the Replication Configuration settings, you need to configure the VMs for replication. Hyper-V Replica is enabled on a per-VM basis. You can replicate all VMs or a subset of VMs, but each must be configured separately.
  1. On the primary Hyper-V server, right-click the VM and select Enable Replication from the drop-down list to start the Enable Replication wizard for the VM.
  2. On the Specify Replica Server screen, enter either the NetBIOS name or the Fully Qualified Domain Name (FQDN) for the replica server in the Replica server box, and click Next.
  3. On the Specify Connection Parameters screen, input the port to use and the authentication type. As long as Remote WMI is enabled, these settings will be filled out for you. Ensure that you double-check them, because if they're inaccurate, you'll receive an error and the replica won't work.
  4. The Choose Replication VHDs screen will list all of the .VHD files that the virtual machine has. You can select the disk or disks that you want to replicate for the VM, then click Next. Keep in mind that if you need to bring the replica up, any .VHDs that you didn't select previously won't appear. If the disk contains data that is important for the VM, it won't be available.
  5. Replication changes are sent to a replica server every five minutes. On the Configure Recovery History screen, which Figure 3 shows, make selections for the number and types of recovery points to be sent to the replica server. If you choose Only the latest recovery point, only the parent VHD will be sent during initial replication and all changes will be merged into that VHD. If you choose Additional recovery points, you'll need to set the number of additional recovery points (standard replicas) that will be saved on the replica server.
  6. On the Choose Initial Replication Method screen, which Figure 4 shows, several methods are available for performing an initial replication of the VM to the replica server. The default selection is Send initial copy over the network. This option starts replication immediately over the network to the replica server. If you don't want to perform immediate replication, you can schedule it to occur at a specific time on a specific date. If you wish to not have the initial copy sent over the network, you can choose Send initial copy using external media. This option lets you copy the VM data to an external drive, DVD, USB stick, or other media and move it to the replica server.
  7. Click Finish. If the firewall port hasn't been enabled, you'll receive an error message.
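
The same wizard steps can be driven from PowerShell. A sketch using the article's example names (VM Windows8, replica server HyperV2), assuming Kerberos over port 80:

# Enable replication for the VM, keeping 4 additional recovery points
Enable-VMReplication -VMName "Windows8" -ReplicaServerName "HyperV2" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos -RecoveryHistory 4

# Kick off the initial copy over the network (step 6)
Start-VMInitialReplication -VMName "Windows8"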




Once the wizard finishes, in the Hyper-V Manager console, you'll see the name of the VM on both the primary and replica servers. You can change this nomenclature. Hyper-V Replica will track the VM by its Virtual Machine ID. So, on the primary server, you can have the name Windows8 and on the replica server, you can call it Windows8-Replica. On the replica server, you'll see the VM and it will be turned off. The replica machines will be prevented from being turned on. If you attempt to turn on the VM, you'll see the error message that Figure 5 shows.


Another popular question is whether you can set up multiple replicas. The answer is no: You can have only one primary replica server that the VM runs on, and one replica server that holds the copy of the VM. However, you can have multiple Hyper-V servers participating in a replica. For example, suppose I have a Hyper-V server called HyperV1 that is running two VMs (Acct-File-Server and HR-File-Server). I can also have two other Hyper-V servers called HyperV2 and HyperV3.

To set this up, I would need to go into the Hyper-V Settings on all the physical machines. Considering the example in Figure 2 above, if you're configured for Allow replication from any authenticated server, you're good. If you selected Allow replication from the specified servers, you'll need to ensure that both of the other machines are in the list. When going through the Enable Replication for Acct-File-Server wizard, the Specify Replica Server page will show the HyperV2 server. When going through the Enable Replication for HR-File-Server wizard, the Specify Replica Server page would show the HyperV3 server. Figure 6 illustrates this scenario.


Now that everything is configured, what's next? When you right-click a VM, you'll see Replication in the drop-down list, along with several options depending on whether you're on the primary server or the replica server. If you're on the primary server, you'll see the Planned Failover, Pause Replication, View Replication Health, and Remove Replication options. If you're on the replica server, you'll see the Failover, Test Failover, Pause Replication, View Replication Health, and Remove Replication options.

Planned Failover. A planned failover is a controlled action you take when you know that the primary Hyper-V server or site is going to be down. A planned failover will make the replica server the primary server, and vice versa. This action is only available on the primary replica server. There's a series of checks you need to make before a planned failover, as well as some actions that the process will take.

Prerequisite check:
Check that the VM is turned off.
Check configuration for allowing reverse replication.
Actions:
Send data that has not been replicated to replica server.
Fail over to replica server.
Reverse the replication direction.
Start the replica VM.
Failover. Failover is an action that occurs in the event of an unplanned outage. If the primary site goes down, you must select Failover from the secondary replica server on the VM so that it now becomes the primary site and will start. Once the primary site comes back up, you would select Planned Failover to reverse it back. This selection is available only on the replica server. An additional check will ensure that this is the proper action to take. Before it does this, it will try to contact the primary server. If it can't contact the primary server, it will continue. If it can contact the primary server, it will determine whether the VM is on. If it's on, it won't continue. If Failover is chosen accidentally, you can choose Cancel Failover on the secondary replica server from the Replication drop-down list. Cancelling failover will result in the loss of any changes that occurred in the replica VM after the failover operation started.

Test Failover. This is a controlled action to take where you simply want to test that the replica VM on the replica site will come up. When you choose this action, it will create a new VM on the replica server and tag it with a -Test name so that it can be easily identified. This process might take a while because it copies the entire primary VM. Doing so allows the normal replication of changes to occur between the primary server and the original replica server. When you're satisfied that it functions, you can simply power off the -Test VM and delete it from Hyper-V.
Pause Replication. This is a controlled action to take when you know that the replica server will be going down. Once the replica server is up, you can select Resume Replication. You can take this action on either the primary or replica server.
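
All of these actions map to PowerShell verbs as well. Here's a sketch of the planned-failover sequence described above for the example VM Windows8 (cmdlet names from the Server 2012 Hyper-V module):

# On the primary server: shut the VM down, then send unreplicated changes
Stop-VM -Name "Windows8"
Start-VMFailover -VMName "Windows8" -Prepare

# On the replica server: fail over, reverse the direction, start the VM
Start-VMFailover -VMName "Windows8"
Set-VMReplication -VMName "Windows8" -Reverse
Start-VM -Name "Windows8"

# A test failover instead creates the disposable "-Test" copy
Start-VMFailover -VMName "Windows8" -AsTest

# Pause and resume replication
Suspend-VMReplication -VMName "Windows8"
Resume-VMReplication -VMName "Windows8"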

View Replication Health. This is an action you take to ensure that replication is working and that all the changes are getting across. These statistics can be reset, refreshed, or saved as a .CSV file. You can select this option from both the primary and replica server. You'll see the following items in a popup dialog box:
* Replication State
* Replication Type
* Current Primary Server
* Current Replica Server
* Replication Health
* Statistics
* From Time
* To Time
* Average Size
* Maximum Size
* Average Latency
* Error Encountered
* Successful Replication Cycles
* Pending Replication
* Size of Data Yet To Be Replicated
* Last Synchronized At
Remove Replication. This is an action you would take if you no longer want to have a replica of the VM. You can perform this action on either the primary or replica server. When you've removed the replication, you'll need to manually delete the VM from Hyper-V Manager. 
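
Both of these are also scriptable. A minimal sketch, again using the hypothetical VM name Windows8:

# Pull the same statistics the View Replication Health dialog box shows
Measure-VMReplication -VMName "Windows8" | Format-List *

# Tear down the replica relationship when it's no longer wanted
Remove-VMReplication -VMName "Windows8"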

Hyper-V Replica on a Failover Cluster
You can also use Hyper-V Replica on a Server 2012 failover cluster. To include a failover cluster, you must create the Replica Broker role in Failover Cluster Management. Doing so will create a group in the cluster, to which you provide a Client Access Point (CAP) that includes the name you would connect to. This capability gives you the extra benefit of high availability for the replica or the primary site.

When running Hyper-V Replica on a failover cluster, you won't be able to make any changes in the Hyper-V Manager console. When you go into Hyper-V Manager, Hyper-V Settings, and Enable Replication, everything will be grayed out. At the bottom of the window will be the message This server is part of a failover cluster. Use the Failover Cluster Manager to change replication settings. When running on a failover cluster, the Replication Configuration settings need to be made on only one of the cluster nodes. Because it's a clustered resource, all the changes will be replicated to the other nodes.

IP Addressing
The last thing I wanted to bring up deals with the network that the VMs reside on. You need to consider the IP address scheme on the networks of the primary and replica sites/servers. For example, suppose your primary Replica Server has an IP address scheme of 1.x and your secondary replica server has the IP address scheme of 192.168.x. If you use DHCP on the networks and the VM uses DHCP, it will get an address from the DHCP server when it comes up. This is the recommended setting for when Hyper-V Replica servers reside on different networks. If you use static IP addresses, you should set up multiple IP addresses for the VM (one for each network it will be a part of). If you bring up the properties of the VM, there is a new option under Network Adapter called Failover TCP/IP, as Figure 7 shows.


This is necessary for the VM to communicate with the network it will be on. In the example above, if you have a VM with the IP address 1.1.1.1 (running on the primary replica server) and it's going to run on the secondary replica server whose network uses 192.168.x addresses, it won't communicate properly on the network. It will have a different gateway, DNS server, and so on, that it might not be able to reach. This can also prevent clients from being able to communicate with the VM.

These VM Failover TCP/IP settings are the settings you would want to set for the replica server/site. The same Integration Services need to be running on both the primary and replica server in order to have the Failover TCP/IP settings give the proper IP Address to the virtual machine based on the server it is currently running on. When the VM is on the primary site, it will have the normal IP address you've set. If it moves to the replica site and starts, it will use the address information that is set under Failover TCP/IP. 
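
The Failover TCP/IP settings can be staged from PowerShell, too. A sketch matching the 192.168.x replica network in the example (the gateway and DNS addresses are assumptions):

# Run against the replica copy so the VM gets the right address after failover
Set-VMNetworkAdapterFailoverConfiguration -VMName "Windows8" `
    -IPv4Address 192.168.1.1 -IPv4SubnetMask 255.255.255.0 `
    -IPv4DefaultGateway 192.168.1.254 -IPv4PreferredDNSServer 192.168.1.10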

Cost-Effective Option

There are a few key points to remember. Hyper-V Replica is a cost-effective option for disaster recovery of VMs. It isn't an automatic failover to get the replica up; rather, it's all about manual, administrator-controlled actions. You'll need to ensure that you have enough disk space and memory, and you'll need to consider the network scheme on the replica server to accommodate the VM if it needs to start. Hyper-V Replica is available only on Server 2012 and is not available with Windows 8 Hyper-V. You can go from a standalone Hyper-V server to another standalone server. You can go from a failover cluster running Hyper-V to another failover cluster running Hyper-V. Or, you can mix a standalone Hyper-V server with a failover cluster running Hyper-V. Finally, remember that there's only a primary and a replica server. You can't have multiple replicas for an individual VM.

New features in Active Directory Domain Services in Windows Server 2012, Part 20: Dynamic Access Control (DAC)

For years, we've been modeling the business into group memberships and their associated access control lists. For some organizations, this has even meant changing the way they do business to fit their automated processes. For other organizations, it has resulted in token bloat. It's time someone changed that and introduced business logic for file and folder access.

What's New

Microsoft did exactly that by introducing Dynamic Access Control (DAC).

Dynamic Access Control can best be described as a claims-based access control (CBAC) solution, where claims are placed in tokens. Active Directory Federation Services (AD FS) also uses claims, but uses SAML as its protocol for markup and transport; Dynamic Access Control claims are stored in the Kerberos Ticket Granting Ticket (TGT).

Note:
To use Dynamic Access Control you don't need to install or configure Active Directory Federation Services.

Claims within Dynamic Access Control can be based on any attribute of a user account. Claims can also be based on attributes of computer accounts, but this requires Kerberos Armoring (FAST). When user claims and device claims are combined, they form the Compound Identity (Compound ID).

Dynamic Access Control information is stored in Active Directory in CN=Claims Configuration,CN=Services,CN=Configuration,DC=domain,DC=tld and is replicated throughout the Active Directory forest.

To use Dynamic Access Control claims to authorize access to files, two methods can be used:

  1. You can define authorization rules and authorization policies within Active Directory, where you can define the proposed and/or enforced rights on files and folders and the scope of the rules. Authorization rules also extend to file classification infrastructure (FCI) this way, so you can even base access rights on user-picked tags for files and folders on a scoped number of File Servers.
  2. The second method is to incorporate claims within access control entries (ACEs) directly into access control lists (ACLs). This is useful for file storage locations based on the Resilient File System (ReFS), since this new file system does not support authorization policies (yet).

Arguably, claims add complexity when layered on top of an access strategy based on group memberships. Another new feature in Windows Server 2012 helps keep track of denied access. The feature, called Access-Denied Remediation, enables users who hit an Access Denied error to specify why they think they should be allowed access. This fully customizable message, together with the reason access was denied, is then sent to the admin responsible for the file server (as defined in File Server Resource Manager). Access-Denied Remediation is available only with SMB 3.0, so this feature works only when Windows 8 clients and Windows Server 2012 member servers access Windows Server 2012 File Servers.

Configuring DAC

Example case

In this example, we'll use file classification with Dynamic Access Control to authorize the Engineering department to read the files and folders of their department. Their managers can modify these files, but only when they try this from computers designated as Engineering department computers.

Step 1

First, we start by creating File Classifications. We perform this action in the Active Directory Administrative Center (ADAC), since these classifications are stored in Active Directory. In the Active Directory Administrative Center, file classifications can be found in the Dynamic Access Control node on the left pane. The screenshot below shows the Dynamic Access Control (DAC) node in Folder View in the Active Directory Administrative Center:

The Dynamic Access Control (DAC) node in Folder View in the Active Directory Administrative Center (click for larger screenshot)

Define Resource Properties

Our first step is to define Resource Properties. For this, open the Resource Properties node underneath the Dynamic Access Control node. You'll notice Microsoft has equipped us with a lot of pre-defined resource properties, so let's use the Department one. Right-click it and select Enable from the context menu:

Enabling the Department resource property in Active Directory Administrative Center (click for larger screenshot)

Tip!
You can enable multiple pre-defined resource properties and even create your own. A perfect example would be Country, which is not pre-defined.

Add Resource Properties to the Property List

When you've enabled the resource properties you'd want to use, add them to a Resource Property list. In the left pane of the Active Directory Administrative Center right-click Resource Property Lists underneath the Dynamic Access Control node and select New and then Resource Property List from the context menu.

Creating a Resource Property List for Dynamic Access Control (click for larger screenshot)

For our example environment we will name this Resource Property List Engineering and we add the Department resource property to it by using the Add… button. OK saves our settings.

Update the Resource Property Lists on the file servers

On the Windows Server 2012-based File Servers, run the Update-FSRMClassificationPropertyDefinition PowerShell command.

Classify files and folders

Now, in File Explorer on the File Servers, classify folders and files. Use the Classification tab to specify the classification. In our example, we'll classify the Engineering folder as Engineering data. Navigate to the folder you want to classify, right-click it and select Properties from the context menu. Go to the Classification tab:

Classifying files and folders on the Classification tab (original screenshot)

Since the Department Resource Property is the only enabled Resource Property it will be the only Resource Property available to the File Server(s). To use it, click it. Then, in the Value box, select Engineering. Press OK when done.

Step 2

Now that we've put the built-in File Classification Infrastructure (FCI) to good use, it's time to define our authorization decisions based on the classifications.

Define Central Access Rules

Back into the Active Directory Administrative Center (ADAC), we now open the Central Access Rules node underneath the Dynamic Access Control node in the left pane. By default, this node is empty, so we're making our own Central Access Rule.

Right-click Central Access Rules, select New and then select Central Access Rule from the context-menu:

Create a Central Access Rule in Active Directory Administrative Center (click for larger screenshot)

We'll call the Access Rule Engineering Access. As targets we'll target all files classified with the Engineering department through the Edit… button in the Target Resources area. As Permissions we choose to Use following permissions as current permissions. We then add Permissions with the Edit… button in the Permissions area. While in the Advanced Security Settings for Permissions screen, click Add.

Creating a Permission entry for Permissions (click for larger screenshot)

In the Permission Entry for Permissions screen, at the top, we select Authenticated Users as the security principal. Then, we create a condition with the business logic behind the access for engineers: members of the Engineers group get read access to files and folders (resources) classified as Engineering. The above screenshot shows these choices.

Of course, for the members of the Engineering Managers group, we build a second Central Access Rule, where we grant them Modify rights, based on their Engineering Managers group membership and based on the department of their device.

Note:
Since we're basing authorization decisions on computer objects with the Engineering department attribute, make sure the right computers have this attribute configured.

Add Rules to a Central Access Policy

With the rule set in place, we can now create the Central Access Policy (CAP) that will utilize the rule set to make authorization decisions with a defined scope.

In the Active Directory Administrative Center (ADAC), we now open the Central Access Policies node underneath the Dynamic Access Control node in the left pane. By default, this node is empty, so we're making our own Central Access Policy. Right-click the Central Access Policies node, select New and then Central Access Policy from the context menu.

In our example environment, we'll name the Central Access Policy Engineering AuthZ and with the Add… button in the Member Central Access Rules area we add the Engineering Access Central Access Rule.

Deploy the CAP to File Servers using Group Policy

Using Group Policy we will now scope the Central Access Policy. Open the Group Policy Management Console (GPMC) and navigate to the Organizational Unit (OU) containing the File Servers with Engineering data on them. Right-click the Organizational Unit and choose Create a GPO in this domain and Link it here…. In our example environment, we'll name the new Group Policy Engineering Access. Now, right-click the newly created Group Policy and select Edit… from the context menu.

Scoping Central Access Policies with Group Policy (click for larger screenshot)

In the Group Policy Management Editor, in the left pane, navigate to Computer Configuration, Policies, Windows Settings, Security Settings, File System and then right-click Central Access Policy to select Manage Central Access Policies… from the context menu.

Select the Engineering AuthZ Central Access Policy from the list on the left and click the Add > button to make it appear in the list on the right. When done, click OK.

Now you can close the Group Policy Management Editor. To update the Group Policies on the File Server, either wait for the Group Policy Background Refresh Interval, run gpupdate.exe on the console of the File Server, or force a Group Policy Refresh from the Group Policy Management Console.

Tip!
Remote Group Policy Refresh is a new feature in the Group Policy Management Console (GPMC) that is part of Windows Server 2012 and part of the Remote Server Administration Tools (RSAT) for Windows 8.
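
A minimal sketch of that remote refresh, assuming a hypothetical file server named FS01:

# Push a Group Policy refresh to the file server from your admin workstation
Invoke-GPUpdate -Computer "FS01" -Force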

Select the CAP to apply

When the Group Policy has applied, you can apply the Central Access Policy to the Engineering folder. Right-click the Engineering folder and select Properties. Click on the Security tab and then click on Advanced. In the Advanced Security Settings for Engineering screen, navigate to the Central Policy tab. On this tab, select the Engineering AuthZ as the Central Access Policy to apply. Click OK three times.      

Requirements

Windows Server 2012-based Domain Controllers

Dynamic Access Control requires at least one Windows Server 2012-based Domain Controller.

Tip! 
Certain Storage Area Network (SAN) manufacturers are working closely with Microsoft to enable their equipment for claims-based access control and Dynamic Access Control.

Make sure sufficient Windows Server 2012-based Domain Controllers are present to process client requests.

Note:
When Compound ID is used, Windows 8-based clients will only communicate with Windows Server 2012-based Domain Controllers. Compound ID is only available in Windows 8, not in previous versions of Windows.     

Forest Functional Level

The Forest Functional Level needs to be at least Windows Server 2003.

Windows Server 2012-based File Servers

File Servers, where you want to use claims-based access control, also need to be running Windows Server 2012. On these servers the File Server Resource Manager Server Role needs to be installed. The following PowerShell one-liner can be used for this purpose:

Install-WindowsFeature FS-Resource-Manager -IncludeManagementTools

Since Dynamic Access Control uses Group Policies to manage File Servers, it's a good idea to create a separate Organizational Unit (OU) for File Servers as part of your Windows Server 2012 Active Directory design.

Backward compatibility

Dynamic Access Control works with previous versions of Windows as DAC clients. Windows 7 and Windows Server 2008 R2 have been thoroughly tested.

New features in Active Directory Domain Services in Windows Server 2012, Part 19: Offline Domain Join Improvements

With Windows 7 and Windows Server 2008 R2, Microsoft introduced a new Active Directory feature called Offline Domain Join (ODJ). This feature allows clients to be joined to an Active Directory domain without a direct connection to any of the Domain Controllers for the Active Directory domain.

Scenarios

Offline Domain Joins are useful when you want to join a computer to an Active Directory infrastructure without direct communication between the client and a Domain Controller. Scenarios include:

  • Deploying vast numbers of computers without straining Domain Controllers in terms of bandwidth and processing, which might affect existing domain-joined computers. Also, in this scenario, Offline Domain Join saves time.
  • Deploying domain-joined computers to a branch office site where only Read-only Domain Controllers reside. (Read-only Domain Controllers are not suitable for clients joining the domain)
  • Deploying domain-joined computers to users in remote locations (like homes) that from time to time require access to resources in an Active Directory environment and may not have a high enough quality connection to the Domain Controllers.

Although Offline Domain Join can be used to join computers to a domain without a direct connection to an Active Directory infrastructure, at some point a domain-joined computer needs to connect to a Domain Controller on a regular basis to remain part of the Active Directory infrastructure (unless you want to spend the rest of the lifetime of the domain-joined computer feeding it offline domain join information…).

What's New

Offline Domain Join has been extended by allowing the blob to accommodate DirectAccess prerequisites:

  • Root Certificates
  • Certificates
  • Group Policies

This means the number of scenarios increases. One of the main new scenarios is deploying domain-joined computers to DirectAccess users. The clients then have everything they need to successfully connect to the DirectAccess server(s).

Note:
A Graphical User Interface to perform Offline Domain Joins is not part of the improvements to Offline Domain Join in Windows 8 or Windows Server 2012.     

Performing Offline Domain Joins

Let's look at the individual steps of the process:

Step 1

To kick off the Offline Domain Join, an administrator logs on to the Windows Server 2012-based Domain Controller. When logged on with an account with sufficient permissions and quota to create computer accounts, the administrator provisions the client on the Domain Controller itself with the following command:

djoin.exe /PROVISION /DOMAIN DomainName /MACHINE MachineName /SAVEFILE FileLocation

Here's an example:

djoin.exe /PROVISION /DOMAIN domain.local /MACHINE Win8-2 /SAVEFILE C:\ODJBlobs\Win8-2.b64

Now, in situations where you want to include Root Certificates, Certificates or Group Policies in the blob, extend the command above with these extra switches:

  • /RootCACerts to include root certificates
  • /CertTemplate to include a certificate template (includes its root certificates)
  • /PolicyNames to include Group Policies

An example of such a command would be:

djoin.exe /PROVISION /DOMAIN domain.local /MACHINE Win8-3 /POLICYNAMES CompanyLookAndFeel /SAVEFILE C:\ODJBlobs\Win8-3.b64    

Step 2

This command, when successful, creates the blob in the location specified. This file, which can be given any file extension, is a Base64-encoded file containing all the necessary information for a Windows client to join the Active Directory domain.

When you open the file and run it through a Base64 decoder, the contents of the file become clear.

Inside the blob

Let's take a look at the Offline Domain Join blob we created in step 1 for Win8-3 in the domain.local domain:

Sample Offline Domain Join Blob (original screenshot)


This file doesn't really provide any insight into how Offline Domain Join works its magic but, as mentioned earlier, the Offline Domain Join (ODJ) blob is a Base64-encoded file. This means we can run it through a Base64 decoder, or convert it using one of the many online Base64 decoders. This will result in output similar to the text file below:

Sample decoded Offline Domain Join blob (original screenshot)

As you can clearly see, the decoded file contains the information you'd expect a client needs to join the domain: the DNS domain name (domain.local), the workstation NetBIOS name (Win8-02), the computer password, the NetBIOS domain name (DOMAIN), the name of the Domain Controller (DC01.domain.local), its IPv4 address (10.8.255.1) and the Active Directory site (Default-First-Site-Name). It also includes the policy settings in the CompanyLookAndFeel Group Policy.

The nature of the decoded file also warrants the security note placed in the Offline Domain Join (Djoin.exe) Step-by-Step Guide:

The base64-encoded metadata blob that is created by the provisioning command contains very sensitive data. It should be treated just as securely as a plaintext password. The blob contains the machine account password and other information about the domain, including the domain name, the name of a domain controller, the security ID (SID) of the domain, and so on. If the blob is being transported physically or over the network, care must be taken to transport it securely.   

Step 3

After the Offline Domain Join blob gets transferred to the would-be client, a local administrator can join the computer to the domain in an offline fashion by typing the following command in an (elevated) command prompt:

djoin.exe /REQUESTODJ /LOADFILE FileLocation /WINDOWSPATH WindowsPath /LOCALOS

As an example, here's the command for Win8-03:

djoin.exe /REQUESTODJ /LOADFILE C:\Win8-03.b64 /WINDOWSPATH C:\Windows /LOCALOS

Note:
Alternatively, you can include the Offline Domain Join blob in an unattend.xml file.

After the client successfully works through the command, the would-be client reboots as a member of the Active Directory domain. On first contact between the client and the Active Directory domain, the client would reset its Computer Account password.

Requirements

Operating system requirements

Offline Domain Join requires Djoin.exe. This command is available on domain-joinable editions of Windows 7 (Professional, Ultimate, Enterprise), domain-joinable editions of Windows 8 (Professional, Enterprise), Windows Server 2008 R2 and Windows Server 2012.

Note:
When you want to include root certificates, certificate templates and group policies, you will need to run djoin.exe on Windows 8 or Windows Server 2012.

If you want to use djoin.exe with Windows Server 2008-based Domain Controllers, use the /downlevel switch when provisioning.

Credential requirements

The djoin.exe command needs to be run by a user account with sufficient permissions to create computer accounts. By default, members of the Domain Admins group can create computer accounts. The user right to Add workstations to the domain can be set using Group Policy, or can be granularly delegated using Active Directory Delegation of Control.

New features in Active Directory Domain Services in Windows Server 2012, Part 18: DNTs Exposed

Previously in this series we covered the challenges surrounding RID Depletion in Active Directory. This time around, let's talk about DNTs.

About DNTs

Distinguished Name Tags (DNTs) are integer columns, maintained by the Extensible Storage Engine (ESE) within the Active Directory database (ntds.dit). Domain Controllers use DNTs when they create objects, either locally or through replication. Each Domain Controller creates and maintains its own unique DNTs within its database when it creates objects. DNTs don't get re-used. DNTs are not shared or replicated between Domain Controllers. Instead, what gets replicated are the computed values known by Domain Controllers, such as SIDs, GUIDs and DNs.

The DNT Challenge

A Domain Controller is limited to creating a maximum of approximately 2 billion DNTs over its lifespan. To be exact, a maximum of 2³¹ - 255 DNTs can be created, which amounts to 2,147,483,393 DNTs. Since DNTs don't get re-used, re-claimed or re-serialized, a Domain Controller faces its end when it reaches this limit. Since Windows Server 2012, this limit is suddenly in sight, because the maximum number of RIDs can now also grow to this limit.

When a Domain Controller hits the limit of maximum DNTs, the Domain Controller needs to be demoted and re-promoted "over the wire". 

Note:
Domain Controllers that are installed with the Install from Media (IfM) option inherit the DNT values from the Domain Controller that was used to create the IfM backup.

Note:
Under the hood, Domain Controller cloning uses Install from Media (IfM).

To make the DNT Challenge worse, it is hard in Windows Server to see the number of DNTs created. It's doable, but it requires dumping the database or interrogating it programmatically. These options are time-consuming and impact performance and disk space.


What's New

In Windows Server 2012, Active Directory admins can more easily see the number of DNTs created and face the DNT Challenge.

Investigating used DNTs

In Windows Server 2012, you can investigate the number of DNTs created through the built-in Performance Monitor. Follow these steps:

  1. Open Performance Monitor by running perfmon.exe.
      
  2. In the left pane expand Data Collector Sets.
        
  3. Right-click User Defined and select New… and Data Collector Set from the context menu.
      
  4. Give the data collector set a useful name and then select Create manually (Advanced) before clicking Next.
        
  5. Select to Create data logs and also select Performance counter. Click Next.
      
  6. Press the Add… button.
  7. In the left pane select NTDS and then select only the Approximate Highest DNT. Then click Add >> and OK.
       
         Tip!
         By default the <localhost> is selected in Performance Monitor. Of course, you
         could use one Data Collector Set to get the number of DNTs created on all
         Domain Controllers in the environment.
      
  8. Select an interval (for instance: every 24 hours) and click Finish.
      
  9. In the left pane of Performance Monitor, right-click your newly created Data Collector Set and choose Start from the context menu.
       
  10. Now, every time you want to know the amount of DNTs created on the Domain Controller, open Performance Monitor, in the left pane expand Reports, then expand User Defined and click on the name of the Data Collector Set. In the right pane, now select a report to view.
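
If you'd rather skip the clicking, the same counter can be queried from PowerShell. A minimal sketch, assuming the counter path matches the NTDS object and Approximate Highest DNT counter from step 7, with placeholder Domain Controller names:

# Query the approximate highest DNT on one or more Domain Controllers
Get-Counter -Counter '\NTDS\Approximate highest DNT' -ComputerName DC01, DC02 |
    Select-Object -ExpandProperty CounterSamples |
    Select-Object Path, CookedValue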

New features in Active Directory Domain Services in Windows Server 2012, Part 17: LDAP Enhancements

With all the fancy features in Active Directory, I almost tend to forget it was originally an X.500-based directory service, offering LDAP connectivity. Although this is less evident today, LDAP is used intensively under the hood for directory connectivity. Alongside the older RPC-based protocols, it is used for over 90% of the communication to the Active Directory database. (Notable exceptions are password resets.)

Below is an architectural view of Active Directory client/server communication, showing LDAP as the linking pin:

Architectural view of Active Directory client/server communication (click for larger picture)   

What's New

Microsoft has enhanced the LDAP implementation in Active Directory Domain Services for Windows Server 2012.

Enhanced LDAP Logging

Logging is useful for troubleshooting. Active Directory admins can enable logging on Active Directory in the registry through Active Directory Diagnostic Event Logging. This is strong stuff, but it is also exactly what you need in these kinds of situations.

As Microsoft KnowledgeBase article 314980 explains, you can turn on any of the Active Directory Diagnostic Logging categories, available in the registry in the following location:

HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics

Specifically, Microsoft has enhanced the logging emitted at level 5 of the 16 LDAP Interface Events registry DWORD value.

Enabling LDAP logging

To enable LDAP logging, perform the following steps:

  1. Start the Registry Editor by running regedit.exe.
  2. Locate and click the following registry key:
        
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics
        
  3. In the right pane of the Registry Editor, double-click the 16 LDAP Interface Events.
        
    Edit 16 LDAP Interface Events in Registry
        
  4. Type 5 as the logging level in the Value data box, and then click OK.
  5. Close the Registry Editor.

This will record all events, including debug strings and configuration changes, to the Directory Services log of Event Viewer.
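
If you prefer scripting this change, the same registry value can be set with a PowerShell one-liner; a sketch equivalent to steps 1 through 5 above:

# Set the '16 LDAP Interface Events' diagnostic level to 5 (log all events)
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics' -Name '16 LDAP Interface Events' -Value 5 -Type DWord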

Enhancements

Many admins have given Microsoft precise feedback on scenarios where LDAP logging was getting them nowhere, because they were unable to isolate and/or diagnose root causes of many behaviors and failures with the existing logging capabilities.

In Windows Server 2012, Active Directory Diagnostic Event Logging has been enhanced with additional logging that logs entry and exit stats for a given API. The entries themselves now log the operation name, the SID of the caller's context, the client IP, entry tick and client ID.

New LDAP Controls

The LDAP standard was designed with extensibility in mind. The standard allows the behavior of any operation to be modified through the use of controls. In addition to the repertoire of predefined operations, such as "search" and "modify," the LDAP v3 defines an extended operation. The pair of extended operation request/response is called an extension.

Microsoft has introduced seven new controls in LDAP for Active Directory Domain Services in Windows Server 2012:

Batched extended-LDAP operations
(1.2.840.113556.1.4.2212)

This new LDAP Control modifies the default behavior to treat all operations within a given batch as a single transaction. When using this control, all operations in the batch will fail or all will succeed. This new control is particularly useful for programmatic schema extensions since the entire list of updates could be treated as a batch.

Require server-sorted search use index on sort attribute (1.2.840.113556.1.4.2207)

This LDAP control eliminates the need for a temporary table (tempTable) when performing sorted searches, thereby increasing scale possibilities. It is particularly good for large result sets, where in the past sorted searches would have simply failed.
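
To make this concrete, below is a minimal sketch that attaches the control to a server-sorted search using System.DirectoryServices.Protocols from PowerShell. The server name, search base, filter and sort attribute are placeholders, and I'm assuming the control carries no value and can be attached by its OID alone:

Add-Type -AssemblyName System.DirectoryServices.Protocols
$connection = New-Object System.DirectoryServices.Protocols.LdapConnection 'dc01.domain.local'
$scope = [System.DirectoryServices.Protocols.SearchScope]::Subtree
$request = New-Object System.DirectoryServices.Protocols.SearchRequest('DC=domain,DC=local', '(objectClass=user)', $scope, @('cn'))
# Sort the results server-side on cn
[void]$request.Controls.Add((New-Object System.DirectoryServices.Protocols.SortRequestControl('cn', $false)))
# Require the server-sorted search to use an index on the sort attribute
[void]$request.Controls.Add((New-Object System.DirectoryServices.Protocols.DirectoryControl('1.2.840.113556.1.4.2207', $null, $true, $true)))
$response = $connection.SendRequest($request)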

DirSync_EX_Control (1.2.840.113556.1.4.2090)

This LDAP control alters the traditional DirSync behavior and forces the return of specified unchanged attributes.

TreeDelete control with batch size (1.2.840.113556.1.4.2204)

In Windows Server 2008 R2 and earlier, the LDAP batch size is hard-coded with a limit of 16K. This new LDAP control exposes a mechanism to lower this hard-coded default, allowing the delete operation to declare its own batch size. This ensures deletions do not slow convergence beyond system tolerance.

Include ties in server-sorted search results (1.2.840.113556.1.4.2210)

This LDAP Control, also referred to as the "soft size limit", returns additional data on ties when performing sorted searches. Within the context of a sorted search, two objects are considered "tied" if their attribute values for the sorted attribute are the same, i.e. the objects are tied by virtue of the common value in the sort attribute (same place in the index).

Using this LDAP control, admins can page through sorted search results with large result sets. The soft size limit restricts the page size and ensures requests can be distributed across Domain Controllers, instead of being targeted at one single Domain Controller because it is the only Domain Controller that knows where pages begin and end.

Return highest change stamp applied as part of an update (1.2.840.113556.1.4.2205)

This LDAP control is similar to the searchStats control in that, when included in a request, it causes the result to contain additional data: the invocationID and the highest USN allocated during the transaction (in BER-encoded format).

This LDAP control is useful for programmatically determining convergence between any two instances, immediately following an update.

Expected entry count (1.2.840.113556.1.4.2211)

This LDAP control requires a minimum and maximum value. This control is useful for uniqueness, in-use and/or absence checking. Of course, this new LDAP Control is very powerful when combined with the new Batched extended-LDAP operations LDAP Control.

New features in Active Directory Domain Services in Windows Server 2012, Part 16: Active Directory-based Activation

Windows Genuine Advantage and Windows Activation have been haunting admins for years as they try to make their legally purchased volume licenses work seamlessly for them.

About volume activation

While Windows XP volume activation was straightforward, Microsoft felt its volume license product keys were misused in less legal situations; according to the Volume Activation 2.0 FAQ available here, "Volume License keys represent the majority of keys involved in Windows piracy". Back then, Windows XP volume license keys did not require activation by Microsoft-owned servers.

With Windows Vista, Microsoft introduced Volume Activation 2.0. Within this program, Microsoft made every customer either check every product key with Microsoft's hosted activation services (Multiple Activation Keys) or make a KMS host report on product key usage for an entire organization.

About KMS

Key Management Services (KMS) is an on-premises server-client model for volume activation. KMS clients use DNS to find KMS hosts and communicate with them using TCP port 1688 (by default). The choice of KMS host Operating System and the (in)availability of KMS host Windows Updates determines the Windows, Windows Server and/or Office activation possibilities.
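
For example, you can check which KMS host(s) a client would discover by querying the service (SRV) records KMS hosts register in DNS; the domain name below is the placeholder domain used throughout this series:

nslookup -type=srv _vlmcs._tcp.domain.local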

In some environments end users were faced with warnings of unlicensed software usage due to inactivity (a KMS client needs to connect to the KMS every 180 days). In other environments the initial activation count to use KMS was not reached or hosts in perimeter networks (DMZs) and/or isolated networks were not allowed to contact the KMS host(s). In both these environments admins had to resort to using MAKs.

About MAK

Multiple Activation Keys (MAKs) are used for one-time activations with Microsoft. Each Multiple Activation Key an admin can punch in has a predetermined number of allowed activations. This number is based on the volume licensing agreement. Each activation using a MAK with Microsoft's hosted activation service counts towards the activation limit.

There are two ways to activate computers using MAK: MAK Independent and MAK Proxy activation. MAK Independent activation requires that each computer independently connect and activate with Microsoft, either over the Internet or by telephone. MAK Proxy activation enables a centralized activation request on behalf of multiple computers with one connection to Microsoft. MAK Proxies are useful for environments that do not maintain a (transparent) connection to the Internet. MAK Proxy activation is configured using the Volume Activation Management Tool (VAMT).

About VAMT

The Volume Activation Management Tool (VAMT) is a free tool that admins can download and use to centrally alter the volume activation method and product key for clients.      

What's New

Windows Server 2012 introduces the concept of Active Directory-based Activation.

Automatic activation with domain membership

In environments with Active Directory-based Activation configured, when you join a Windows computer to an Active Directory domain, the Windows and/or Office installations on that computer will automatically activate. Activation is valid for 180 days. During this time the client should communicate with a Domain Controller at least once to renew the activation period.

When you remove a computer from the domain, the Windows and/or Office installations immediately get deactivated.

No activation threshold

Where KMS requires 25 (physical) Windows installations or 5 Windows Server installations to begin centrally activating, Active Directory-based Activation does not have a minimum number of clients or servers to activate.

No host maintenance needed anymore

Although KMS is the preferred volume activation method in environments with more than 25 (physical) clients or five servers, one of the downsides of KMS is the necessity for a KMS host. While the KMS host can coexist with any other Server Role, it adds extra management tasks.

With Active Directory-based Activation, no single physical computer is required to act as the activation host, because the activation information is stored in the ms-SPP-Activation object in Active Directory.

Automatic high availability and failover

Since Active Directory-based Activation uses Active Directory Domain Controllers for client-server activation communications, each (R/W) Domain Controller is an available activation host. As an admin you no longer need to manually configure secondary KMS Host DNS records (only the first KMS Host registers the DNS records).

No more dedicated KMS Port

Where KMS used TCP port 1688 (by default) for client-server communication, Active Directory-based Activation uses commonly used Active Directory client-server communication ports.

Works together with KMS

Environments with Active Directory-based Activation are not limited to using only Active Directory-based Activation to activate Windows, Windows Server and/or Office installations. KMS can still be used on the same network. This is useful to activate previous versions of Windows, Windows Server and/or Office that do not support Active Directory-based Activation.

Windows 8 and Windows Server 2012 installations will first try to activate through Active Directory-based Activation (act-type 1). When unsuccessful, these installations will try to activate through Key Management Services (act-type 2) and, when unsuccessful again, will try token-based activation (act-type 3).
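
On Windows 8 and Windows Server 2012 you can inspect, and if needed restrict, this behavior with slmgr.vbs. A hedged sketch (the /act-type switch is new in these versions):

REM Restrict the client to Active Directory-based Activation only (act-type 1)
cscript %windir%\system32\slmgr.vbs /act-type 1

REM Show detailed license information, including the activation method in use
cscript %windir%\system32\slmgr.vbs /dlv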

Enabling AD-based Activation

Installing AD-based Activation

Installing Active Directory-based Activation requires installing the Volume Activation Services Server role. This can be done in two ways:

Using Server Manager

To install the Volume Activation Services role using Server Manager perform the following steps:

  1. In Server Manager click on Manage in the top right.
        
  2. Click Add Roles and Features.
        
    Choose Role-based or Feature-based installation in the Select Installation type screen (click for larger screenshot)
        
  3. Select Role-based or Feature-based installation in the Select installation type screen. Click Next >.
        
  4. Select a server to install the Volume Activation Services on from the server pool in the Select destination server screen. Click Next >.
         
    Select Volume Activation Services in the Select server roles screen (click for larger screenshot)
        
  5. In the Select server roles screen, select Volume Activation Services.
        
    Remote Server Administration Tools - Volume Activation Tools (original screenshot)
        
  6. A pop-up appears with a selection of features that are required to manage the Volume Activation Services role. Click Add Features.
        
  7. In the Select server roles screen click Next >.
        
  8. In the Select features screen click Next >.
        
    Introduction to Volume Activation Services (click for larger screenshot)
        
  9. Read the notes in the Volume Activation Services screen and click Next > when done.
        
  10. On the Confirm installation selections page, confirm that the information is correct, then click Install.

Using PowerShell

To install the Volume Activation Services, run the following PowerShell one-liner:

Install-WindowsFeature VolumeActivation -IncludeManagementTools

Configuring AD-based Activation

To configure Volume Activation Services, log on as an enterprise admin to the server on which you installed the Volume Activation Services Role. Then perform the following steps:

  1. Open Server Manager.
        
  2. Click on Tools in the upper right corner and then click Volume Activation Tools.
        
    Introduction to Volume Activation Tools (click for larger screenshot)
        
  3. In the Introduction to Volume Activation Services screen, click Next >.
        
    Select Volume Activation Method (click for larger screenshot)
        
  4. In the Select Volume Activation Method screen, select Active Directory-based Activation. Click Next >.
        
    Manage Activation Objects (click for larger screenshot)
        
  5. Enter the KMS Host product key in the Manage Activation Objects screen. Optionally, specify a name for the Active Directory activation object. Click Next > when done. 
         
    Activate Product (click for larger screenshot)
        
  6. Choose to activate the key online or by phone in the Activate Product screen, and then click Commit.
        
    This will add an Active Directory-based activation object to the domain forest. Do you want to continue?
        
  7. A pop-up appears, mentioning an Active Directory-based Activation object will be created. Click Yes.
              
    Activation Succeeded (click for larger screenshot)
        
  8. In the Activation Succeeded screen click Close.

This will create an activation object underneath CN=Activation Objects,CN=Microsoft SPP,CN=Services,CN=Configuration,DC=domain,DC=tld as shown below using ldp.exe:

Activation Object Properties viewed with ldp.exe (click for larger screenshot)
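
If you'd rather verify this from PowerShell than with ldp.exe, here's a minimal sketch, assuming the Active Directory module for Windows PowerShell (RSAT) is available:

# List the activation objects in the configuration partition
Import-Module ActiveDirectory
$base = 'CN=Activation Objects,CN=Microsoft SPP,CN=Services,' + (Get-ADRootDSE).configurationNamingContext
Get-ADObject -SearchBase $base -Filter * | Format-List Name, DistinguishedName, ObjectClass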

After replication of the object and its attributes, Windows client installations, Office installations and Windows Server installations, covered by the KMS Host product key and configured with their corresponding setup keys, will activate automatically.

Tip!
Volume License (VL) downloads for Windows and Windows Server are automatically configured with their corresponding setup keys and thus require no changes for automatic activation by KMS or Active Directory-based Activation.   

Reporting on activated licenses

To report on activated licenses and thus present proof of license compliance, Volume Activation Management Tool (VAMT) version 3 can be used.

Note:
VAMT 3.0 can only be installed on Windows 7, Windows 8, Windows Server 2008 R2 and Windows Server 2012 installations, and requires PowerShell 3.0.

Installing VAMT 3

To install Volume Activation Management Tool (VAMT) 3.0, begin by downloading the Windows Assessment and Deployment Kit (ADK) for Windows 8. When this 700KB download finishes, run it:

  1. In the Specify Location screen, click Next.
         
  2. Make the choice to participate or not in the Join the Customer Experience Improvement Program (CEIP) screen and click Next.
        
  3. Click on the Accept button in the License Agreement screen.
        
    Select the features you want to install during Assessment and Deployment Kit installation (click for larger screenshot)
        
  4. Select to install the Volume Activation Management Tool (VAMT) and Microsoft SQL Server 2012 Express in the Select the features you want to install screen.
              
         Tip!
         Existing SQL installations can be used by the Volume Activation Management
         Tool. When your environment already features a SQL Server and you want to
         use this installation to host the VAMT database, you don't have to select the
         Microsoft SQL Server 2012 Express option. (In this case, specify the
         hostname of the SQL Server during step 2 of configuring VAMT 3.)
        
     Click Install when done.
  5. The selected features will now be installed. After installation the Welcome to the Assessment and Deployment Kit! screen will appear. Click Close.

Configuring clients for reporting

By default, the Volume Activation Management Tool (VAMT) will not be able to communicate with activation clients: it uses WMI, and by default this type of traffic is blocked by the Windows Firewall. To enable it centrally, follow these steps (for a one-off test on a single machine, see the netsh sketch after these steps):

  1. Log on to a Domain Controller, or a management workstation with the Remote Server Administration Tools (RSAT) installed, with sufficient permissions to create and/or modify group policies.
      
  2. Start the Group Policy Management Console (GPMC) by either typing its name in the Start Menu or Start Screen, or by running gpmc.msc.
      
  3. Either create a new Group Policy object and target it at the hosts you want to report on, or select an existing Group Policy object that already targets them. Right-click the Group Policy object and select Edit… from the context menu. This will launch the Group Policy Editor.
       
  4. Navigate to Computer Configuration, Policies, Windows Settings, Security Settings and then expand the Windows Firewall with Advanced Security node twice.
        
  5. Right-click Inbound Rules and select New Rule… from the context menu.
        
    Selecting the Windows Management Instrumentation (WMI) predefined rule for Inbound Rules (click for larger screenshot)
        
  6. In the Rule Type screen, select Predefined as the rule type and select Windows Management Instrumentation (WMI) from the pull-down list. Click Next >.
        
  7. In the Predefined Rules screen, select at least the rule ending in (WMI-In) from the three predefined WMI rules and click Next >.
       
  8. In the Action screen, select Allow the connection and click Finish.
       
  9. Now the rule will be present in the Inbound Rules node of Windows Firewall with Advanced Security. Modify the rule when you want it to apply only when the machine with VAMT on it connects, or when you want it to apply only to particular profiles (domain, private and/or public).
      
  10. Close the Group Policy Editor when done.
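
For a one-off test on a single machine (rather than through Group Policy), the same predefined rule group can be enabled locally; a sketch:

netsh advfirewall firewall set rule group="windows management instrumentation (wmi)" new enable=yes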

Wait for the Group Policy Background Refresh Interval to update the Group Policies on each of the targeted domain members (by default this may take up to 120 minutes), or use the central Group Policy Update… command to trigger updating of Group Policy objects by right-clicking the targeted Organizational Units (OUs) and selecting this option from the context menu.
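
The central Group Policy Update… command also has a PowerShell counterpart in Windows Server 2012's GroupPolicy module. A minimal sketch, with a placeholder computer name:

# Trigger an immediate remote Group Policy refresh on one targeted client
Invoke-GPUpdate -Computer 'Win8-02' -Force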

Configuring VAMT 3

With the Volume Activation Management Tool (VAMT) installed, it's time to start it up for the first time and configure its basic settings:

  1. Start the Volume Activation Management Tool by pressing Start and typing VAMT. Then, click the shortcut to the Volume Activation Management Tool in the results pane.
         
    Database Connection Settings for the Volume Activation Management Tool (click for original screenshot)
        
  2. Since this is the first time the Volume Activation Management Tool (VAMT) is started, it displays the Database Connection Settings screen. Specify database settings and then click Connect. If unsure what to specify, simply specify .\ADK as Server:, <Create new database> as Database: and give the new database a meaningful name. (In the example above, I named the database VAMT30.)
       
  3. In the left pane of the Volume Activation Management Tool interface, right-click Products and select Discover products… from the context menu.
        
    Discovery Options in the Volume Activation Management Tool (click for larger screenshot)
        
  4. In the Discover Products screen, Search for computers in Active Directory is selected by default.
       
         Tip!
         The Volume Activation Management Tool does not offer an option to filter on
         Organizational Units. To make the computer name filter in the Volume Activation
         Management Tool work, implement a useful naming convention.
        
    Click Search when done.
        
  5. When discovery completes, the Volume Activation Management Tool will display the number of machines it found. Click OK.
        
  6. Select Products in the left pane of the Volume Activation Management Tool interface. In the list of products in the middle of the screen (multi)select the products you want to see the license status of. Right-click the selection and select Update license status and Current credential from the context menu.
         
    Updating License Status in the Volume Activation Management Tool (click for larger screenshot)
        
  7. Click Close when done.
      
  8. In the Volume Activation Management Tool interface, now click the root node in the left pane. In the middle pane a license summary is shown:
       
    License Summary in the Volume Activation Management Tool (click for larger screenshot)  
    You can drill down in this summary.
        
    Exporting the summary is available in the right action pane. In the Volume Licensing Reports folder in the left pane, more advanced reports are available.
        

Requirements

To use Active Directory-based Activation the following requirements need to be met:

  • Active Directory-based Activation requires the Windows Server 2012 schema extensions. This means adprep.exe needs to have been run.
        
  • Active Directory-based Activation requires a domain-joined Windows 8-based management workstation with the Windows Volume Activation Remote Server Administration Tools (RSAT) installed, or a domain-joined Windows Server 2012-based management server.
        
  • When volume license reporting is needed, the management workstation/server needs to have the Volume Activation Management Tool (VAMT) version 3.0 installed. VAMT should be run with credentials that have administrative permissions to the Active Directory domain, and the KMS host Key should be added to VAMT in the Product Keys node.
        
  • When proxy activation is needed for isolated environments, the management workstation/server needs to have the Volume Activation Management Tool (VAMT) version 3.0 installed. VAMT 3.0 is part of the Windows Assessment and Deployment Kit (ADK) for Windows 8. The management workstation/server will be the only machine that will need an Internet connection. Click here for instructions on setting up Proxy Activation for Active Directory-based activation.
          
           Note:
           VAMT 3.0 can be installed on Windows 7, Windows 8, Windows Server
           2008 R2 and Windows Server 2012, but requires PowerShell 3.0 and a
           connection to a SQL Server (Express) database.
           
  • Active Directory-based Activation will not work for operating systems earlier than Windows Server 2012 or Windows 8. It also will not work with Microsoft Office 2010. Use KMS volume activation to activate Windows clients and applications that do not support Active Directory-based Activation.
       
  • Supported clients will only communicate to (R/W) Domain Controllers. Activation through Read-only Domain Controllers is not possible. Make sure sufficient (R/W) Domain Controllers are available to clients in remote locations.