31 Jan 2013
The best virtual server storage
Whether you plan to use VMware Inc.'s vSphere, Microsoft Corp.'s Hyper-V or Citrix Systems Inc.'s XenServer virtualization platform, you'll need a storage system with sufficient performance and capacity to support the flexible environment virtual servers allow, while remaining manageable by a small IT staff and fitting within an SMB budget. In this tutorial on virtual server storage, you will learn how to choose the best virtual server storage for your organization, the pros and cons of Fibre Channel vs. iSCSI, and the other choices available to you.
GETTING STARTED WITH VIRTUAL SERVER STORAGE
"When planning out the storage side of any virtual environment, determine what kinds of systems you are going to virtualize because a lot of people don't take into consideration the performance and capacity requirements of those systems before building their VM [virtual machine] environments," said W. Matthew Ushijima, a senior consultant for GlassHouse Technologies Inc. "It is important to consider your performance needs from both hardware and software perspective."
Jeff Boles, a senior analyst and director of validation services for the Taneja Group, said that admins should conduct a thorough assessment of current I/O patterns, and then use analysis tools such as Virtual Instruments NetWisdom to determine physical environment storage and performance needs before moving to a virtual environment. Ushijima has also seen Solarwinds Inc.'s Storage Profiler and Aptare Inc.'s StorageConsole Virtualization Manager used to analyze capacity and performance needs.
"Don't virtualize and think you are going to make do with less storage in the process," Boles said. "When you get into more complex applications and more detailed planning, it's best to consult with your application vendors," Boles said.
Editor's Tip: For more information about managing storage in a virtual environment, check out sister site SearchStorage's podcast on storage management tools for virtual environments.
STORAGE ARCHITECTURE
For most environments, networked storage and a multipath architecture will be base requirements. Boles said networked storage will allow you to take full advantage of the flexibility benefits of virtual environments. "You want something that doesn't get in the way of that flexibility," Boles said. "You can move [virtual servers] back and forth between different pieces of hardware, re-provision them, copy them, all that kind of stuff, a lot more easily than in the physical world. You really need shared storage behind the virtual server infrastructure to take advantage of all its capabilities."
A multipath architecture -- having numerous redundant links between servers and storage so the data flow can be broken up into multiple streams and sent over separate links -- is necessary to keep your storage system available in case of a path failure. "In any shared storage environment, I consider [a multipath architecture] to be mission critical," Ushijima said. "I don't care if you are running four servers or 4,000. The last thing you want to do is have a server go down because it dropped its storage connection. The issue only multiplies exponentially for each server you add into a virtual environment."
FIBRE CHANNEL VS. iSCSI
Fibre Channel (FC) is a high-speed technology (up to 8 Gbps) that can be deployed over large distances, but requires relatively expensive equipment and specific administrator knowledge. iSCSI operates over ubiquitous Ethernet technology, but is subject to Ethernet's speed limitations. While 1 Gigabit Ethernet (GigE) is currently the most commonly deployed network technology, 10 Gigabit Ethernet (10 GigE) is available and offers a multiple-factor performance increase. However, a 10 GigE system is more expensive to install and may not be cost-effective for SMBs.
Both Boles and Ushijima recommend assessing your performance needs and available resources to determine which transport system to employ. For SMBs, Boles says iSCSI is the overwhelming leader because of its use of Ethernet and the fact that administrators don't need to have specific FC training to deploy and manage it. "Hands down, if you're looking at a new storage system, and you're an SMB customer, you should be considering iSCSI today," Boles said.
Dell Inc. is no stranger to the SMB market and is a leader in iSCSI storage systems, according to Boles. The company offers iSCSI storage area network (SAN) systems from the EqualLogic PS4000E for SMBs and branch offices to the PS6010XVS, which offers 10 GigE connectors and super-fast solid-state drives (SSDs).
NETWORK-ATTACHED STORAGE
If you have a predominantly network-attached storage (NAS) environment, it's likely an entry-level file-based system will have more built-in horsepower than an entry-level block-based storage system, Boles said. So you might be able to consolidate more workloads with a NAS system. However, because NAS systems are more widely shared, it might be harder to get a detailed view into which systems are using network resources and what your performance needs are, Boles explained.
Multiprotocol systems that serve both file- and block-based data can be a good fit for SMBs and are now widely available. For example, NetApp Inc.'s Data ONTAP operating system supports the Network File System (NFS) and Common Internet File System (CIFS) NAS protocols, as well as block-based iSCSI and FC. EMC Corp.'s Celerra Unified Storage Platform supports NFS, CIFS, iSCSI, and FC.
SHARED SERIAL-ATTACHED STORAGE
A newer technology that is moving into the conversation is shared serial-attached SCSI (SAS), an interface for connecting individual SAS storage disk drives to multiple servers. Shared SAS systems offer simplified management and lower costs, but they are limited in distance, and your servers must support SAS or be fitted with SAS adapters.
Dell is also a player in the shared SAS market. The PowerVault MD3200 SAS storage array has single or dual controllers and can scale up to 96 disk drives.
VMWARE VSTORAGE
If you plan on using the VMware vSphere hypervisor, consider storage arrays that have integrated VMware's vStorage technology and application programming interfaces (APIs) into their functionality.
VMware's vStorage Virtual Machine File System (VMFS) is a cluster file system that allows simultaneous read and write file access so multiple VMs can access and store data in the same file. Without VMFS, only one server would be able to access a file at a time, increasing system latency and impairing productivity. Storage VMotion is another vStorage technology that allows you to migrate live virtual machine disk files (VMDK) between physical hosts for efficient storage I/O and capacity management.
VMware developed the vStorage APIs to improve the platform's integration with the feature-rich storage arrays and data protection products already on the market. "[The APIs] make use of some array side capabilities and optimize the communication and handling of data between the hypervisor and the array," said Boles.
The vStorage API for Array Integration (VAAI) brings advanced array-side features into the hypervisor, including Changed Block Tracking, which allows the storage array to identify changed blocks within VMDK files so it can eliminate previously saved data before transmitting VMDK snapshots over the wire. Another example is VAAI Block Zero, which lets the array provision zero-data blocks for populating volumes, instead of having the hypervisor provision the zero-data blocks and send them over the wire. VMware also developed APIs for data protection, multipathing and storage replication adapters.
FROM THE PHYSICAL TO THE VIRTUAL
When you are ready to select a storage system for your virtual environment, there are tools available to help you prepare a physical-to-virtual (P2V) migration strategy. "There are a lot of products from a variety of vendors that provide tools to scan your existing infrastructure and provide the ability to do virtual assessments," Ushijima said. Neither VMware nor Microsoft offers free P2V planners, although VMware offers P2V Accelerator Services as part of its professional services products.
5nine Software offers a P2V planner for both VMware vSphere and Microsoft Hyper-V platforms. The planner collects data about existing hardware in your data center, its utilization, and application workloads to prepare physical-to-virtual migration plans that take into consideration hardware and business requirements, storage performance and capacity needs, costs, and return-on-investment. 5nine Software offers a free edition with limited reporting and workload and cost optimization capabilities.
"Look for solutions from a networked storage vendor that can give you functionality at the VM level," said Boles. "A lot of the time when it comes to networked storage, you end up just parking a whole bunch of VMs on a LUN or out on a storage system and you have to figure out how to manage your storage without breaking the VMs, and whether you should do things inside the VMs or inside the storage. It's often very hard to connect those two. So look for solutions from your storage vendor that let you do advanced storage operations with your VMs."
Advanced operations include taking individual VM snapshots and recovering from those snapshots without impacting other VMs. Also look for replication capabilities so you have disaster recovery options now or down the road. "You'll get the most bang for your buck that way and extend your storage capabilities," Boles said.
iSCSI storage system
Some of iSCSI's popularity in SMBs has to do with server virtualization. And, right now, fault tolerance for virtualization hosts is a big factor that is pushing smaller shops into checking out iSCSI. In a virtual data center, it's imperative to prevent host servers from failing. If a host server were to fail, it would take all of the virtual machines (VMs) residing on it down, too. Since a single host server can contain a dozen or more VMs, a host server failure typically results in a major outage.
So what does this have to do with iSCSI storage systems? Well, to prevent the types of outages that I just described, many organizations cluster their virtualization hosts. That way, if a failure were to occur, the virtual machines can continue to run on an alternate host. Although there are a variety of host clustering technologies, host clustering typically requires the use of a shared storage pool that is accessible to all of the hosts within the cluster. The shared storage pool isn't connected to the cluster nodes through a disk controller. Instead, the cluster nodes communicate with the storage pool over the network through the use of the iSCSI protocol.
Hardware requirements for iSCSI storage systems
The only firm hardware requirement for using iSCSI is that there must be TCP/IP connectivity between the remote storage pool and the computer that needs to connect to it. Beyond that, it is widely considered a best practice to route iSCSI traffic over a dedicated, high-speed network connection so that iSCSI traffic won't be delayed by other network traffic, but this is not an absolute requirement. If you use a dedicated network connection (which I highly recommend), then you should use as high a connection speed as possible. Faster network connections between your server and your storage pool mean lower latency, which is important, especially for I/O-intensive applications.
Software requirements for iSCSI storage systems
To establish iSCSI connectivity, you are going to need a special type of software. The iSCSI storage array is usually a collection of disk resources that is physically attached to a Windows or a Linux server. This server runs iSCSI target software. Just as the shared storage pool requires specialized software, so does the server that connects to it. To establish a connection to an iSCSI target, a server must run an iSCSI initiator.
Establishing iSCSI connectivity
Once you have the iSCSI initiator and the iSCSI target software, the next step is to establish iSCSI connectivity. The exact procedure varies depending on the software that you are using. However, these are the five basic steps that are usually required (a scripted sketch follows the list):
- Configure the iSCSI target to make disk resources available as iSCSI storage. On a Windows Storage Server, this means you must create a virtual hard drive and associate it with a specific iSCSI target (you can create multiple targets on a single storage server). When you run the iSCSI initiator, the software will assign the server an iSCSI Qualified Name (IQN).
- Document this name and configure the iSCSI target to allow connectivity from that IQN.
- Configure any firewalls between the server and the shared storage pool to allow iSCSI traffic to pass. iSCSI traffic usually flows over port 3260.
- Provide the iSCSI initiator with the IP address or the Fully Qualified Domain Name (FQDN) of the iSCSI target.
- Establish connectivity to the storage pool and map a drive letter.
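Here is a minimal PowerShell sketch of those steps from the initiator side on Windows Server 2012; the portal address, target IQN and drive letter are hypothetical placeholders:

Start-Service msiscsi                                 # make sure the Microsoft iSCSI Initiator service is running
New-IscsiTargetPortal -TargetPortalAddress 10.0.0.50  # point the initiator at the target portal
Get-IscsiTarget                                       # list discovered targets and note the IQN
Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.microsoft:storage1-target1" -IsPersistent $true
# The LUN now shows up as a local disk; bring it online, partition it and map a drive letter:
Get-Disk | Where-Object BusType -eq "iSCSI" |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter E -UseMaximumSize |
    Format-Volume -FileSystem NTFS -Confirm:$false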
28 Jan 2013
How To Change User Password With PowerShell
- It's the new way. PowerShell creator Jeffrey Snover is now the Lead Architect for Windows Server 8. His move into a role of such importance is the exclamation point on what PowerShell people have been saying since it came out: "Start using it or start getting left behind!"
- Reusability. Even though our simple task of changing a user's password should not be something that needs to be scripted, and then reused, it could be. And anything you use with PowerShell can be. It's definitely one of the important reasons we want to use PowerShell when we can. You never know at the start of a project that you won't want to reuse some part of it on a later project.
- Write once, process many. Once we've got a command to change a password, there's no reason we can't use that command on multiple objects. It's easy, and that philosophy is at the core of all automation. The sketch below shows both the single-user and the many-user case.
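A minimal sketch of the command itself, assuming the ActiveDirectory module is available; the account name, password and OU are hypothetical placeholders:

Import-Module ActiveDirectory
# Reset a single user's password:
Set-ADAccountPassword -Identity "jdoe" -Reset -NewPassword (ConvertTo-SecureString "P@ssw0rd!" -AsPlainText -Force)
# Write once, process many: reset every account in an OU with the same command:
Get-ADUser -Filter * -SearchBase "OU=Interns,DC=domain,DC=local" |
    Set-ADAccountPassword -Reset -NewPassword (ConvertTo-SecureString "P@ssw0rd!" -AsPlainText -Force)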
25 Jan 2013
Configuring Hyper-V Replica in Windows Server 2012
- On the primary Hyper-V server, right-click the VM and select Enable Replication from the drop-down list to start the Enable Replication wizard for the VM.
- On the Specify Replica Server screen, enter either the NetBIOS name or the Fully Qualified Domain Name (FQDN) for the replica server in the Replica server box, and click Next.
- On the Specify Connection Parameters screen, input the port to use and the authentication type. As long as Remote WMI is enabled, these settings will be filled out for you. Ensure that you double-check them, because if they're inaccurate, you'll receive an error and the replica won't work.
- The Choose Replication VHDs screen will list all of the .VHD files that the virtual machine has. You can select the disk or disks that you want to replicate for the VM, then click Next. Keep in mind that if you need to bring the replica up, any .VHDs that you didn't select previously won't appear. If the disk contains data that is important for the VM, it won't be available.
- Replication changes are sent to a replica server every five minutes. On the Configure Recovery History screen, which Figure 3 shows, make selections for the number and types of recovery points to be sent to the replica server. If you choose Only the latest recovery point, only the parent VHD will be sent during initial replication and all changes will be merged into that VHD. If you choose Additional recovery points, you'll need to set the number of additional recovery points (standard replicas) that will be saved on the replica server.
- On the Choose Initial Replication Method screen, which Figure 4 shows, several methods are available for performing an initial replication of the VM to the replica server. The default selection is Send initial copy over the network. This option starts replication immediately over the network to the replica server. If you don't want to perform immediate replication, you can schedule it to occur at a specific time on a specific date. If you don't want the initial copy sent over the network at all, you can choose Send initial copy using external media. This option lets you copy the VM data to an external drive, DVD, USB stick, or other media and move it to the replica server.
- Click Finish. If the firewall port hasn't been enabled, you'll receive an error message.
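The same settings can also be configured from PowerShell. Here's a minimal sketch using the Hyper-V module in Windows Server 2012; the VM name and replica server name are hypothetical placeholders:

Enable-VMReplication -VMName "SQL01" -ReplicaServerName "replica.domain.local" -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "SQL01"   # kicks off the initial copy over the network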
To perform a planned failover of a VM to the replica server, the wizard walks through these actions:
- Check that the VM is turned off.
- Check configuration for allowing reverse replication.
- Send data that has not been replicated to the replica server.
- Fail over to the replica server.
- Reverse the replication direction.
- Start the replica VM.
The Replication Health view for a VM shows:
- Replication State
- Replication Type
- Current Primary Server
- Current Replica Server
- Replication Health
- Statistics: From Time, To Time, Average Size, Maximum Size, Average Latency, Error Encountered, Successful Replication Cycles
- Pending Replication: Size of Data Yet To Be Replicated, Last Synchronized At
New features in Active Directory Domain Services in Windows Server 2012, Part 20: Dynamic Access Control (DAC)
What's New
In Windows Server 2012, Microsoft addresses this need by introducing Dynamic Access Control (DAC).
Dynamic Access Control can best be described as a Claims-based Access Control (CBAC) solution, where claims are placed in tokens. Active Directory Federation Services (AD FS) also uses claims, but uses SAML as its protocol for markup and transport; Dynamic Access Control claims are stored in the Kerberos Ticket Granting Ticket (TGT).
Note:
To use Dynamic Access Control you don't need to install or configure Active Directory Federation Services.
Claims within Dynamic Access Control can be based on any attribute of a user account. Claims can also be based on attributes of computer accounts, but this requires Kerberos Armoring (FAST). When user claims and device claims are combined, they form the Compound Identity (Compound ID).
Dynamic Access Control information is stored in Active Directory in CN=Claims Configuration,CN=Services,CN=Configuration,DC=domain,DC=tld and is replicated throughout the Active Directory forest.
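To peek at this container, here's a quick sketch using the ActiveDirectory module; substitute your own forest's distinguished name:

Get-ADObject -Filter * -SearchBase "CN=Claims Configuration,CN=Services,CN=Configuration,DC=domain,DC=tld" |
    Select-Object Name, ObjectClass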
To use Dynamic Access Control claims to authorize access to files, two methods can be used:
- You can define authorization rules and authorization policies within Active Directory, where you can define the proposed and/or enforced rights on files and folders and the scope of the rules. Authorization rules also extend to file classification infrastructure (FCI) this way, so you can even base access rights on user-picked tags for files and folders on a scoped number of File Servers.
- The second method is to incorporate claims within access control entries (ACEs) straight into access control lists (ACLs). This is useful for file storage locations that are based on the Resilient File System (ReFS), since this new file system does not support authorization policies (yet).
Arguably, claims add complexity when layered on top of an access strategy based on group memberships. Another new feature in Windows Server 2012 helps keep track of denied access. The feature, called Access-denied remediation, enables users who are faced with an Access Denied error to specify why they think they should be allowed access. This fully customizable message, together with the reason why access was denied, is then sent to the admin responsible for the file server (as defined in File Server Resource Manager). Access-denied remediation is only available when using SMB 3.0, so this feature only works when Windows 8 clients and Windows Server 2012 member servers access Windows Server 2012 File Servers.
Configuring DAC
Example case
In this example, we'll use File Classification with Dynamic Access Control to authorize the Engineering department to read the files and folders of their department. Their managers can modify these files, but only when they do so from computers designated as Engineering department computers.
Step 1
First, we start by creating File Classifications. We perform this action in the Active Directory Administrative Center (ADAC), since these classifications are stored in Active Directory. In the Active Directory Administrative Center, file classifications can be found in the Dynamic Access Control node on the left pane. The screenshot below shows the Dynamic Access Control (DAC) node in Folder View in the Active Directory Administrative Center:
Define Resource Properties
Our first step is to define Resource Properties. For this, open the Resource Properties node underneath the Dynamic Access Control node. You'll notice Microsoft has equipped us with a lot of pre-defined resource properties, so let's use the Department one. Right-click it and select Enable from the context menu:
Tip!
You can enable multiple pre-defined resource properties and even create your own. A perfect example would be Country, which is not pre-defined.
Add Resource Properties to the Property List
When you've enabled the resource properties you'd want to use, add them to a Resource Property list. In the left pane of the Active Directory Administrative Center right-click Resource Property Lists underneath the Dynamic Access Control node and select New and then Resource Property List from the context menu.
For our example environment, we will name this Resource Property List Engineering and add the Department resource property to it using the Add… button. OK saves our settings. A scripted equivalent of these steps follows below.
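For admins who prefer scripting, here's a sketch of the same steps with the Active Directory module's resource-property cmdlets; the identity Department_MS is my assumption for the pre-defined Department property's name, so verify it first with Get-ADResourceProperty -Filter *:

Set-ADResourceProperty -Identity "Department_MS" -Enabled $true     # enable the pre-defined Department property
New-ADResourcePropertyList -Name "Engineering"                      # create the Resource Property List
Add-ADResourcePropertyListMember -Identity "Engineering" -Members "Department_MS"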
Update the Resource Property Lists on the file servers
On the Windows Server 2012-based File Servers, run the Update-FSRMClassificationPropertyDefinition PowerShell command.
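For example, to refresh multiple file servers in one go, assuming PowerShell remoting is enabled and using hypothetical server names:

Invoke-Command -ComputerName FS01, FS02 -ScriptBlock { Update-FSRMClassificationPropertyDefinition }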
Classify files and folders
Now classify folders and files in File Explorer on the File Servers, using the Classification tab. In our example, we'll classify the Engineering folder as Engineering data. Navigate to the folder you want to classify, right-click it and select Properties from the context menu. Go to the Classification tab:
Since the Department Resource Property is the only enabled Resource Property it will be the only Resource Property available to the File Server(s). To use it, click it. Then, in the Value box, select Engineering. Press OK when done.
Step 2
Now that we've put the built-in File Classification Infrastructure (FCI) to good use, it's time to define our authorization decisions based on the classifications.
Define Central Access Rules
Back in the Active Directory Administrative Center (ADAC), we now open the Central Access Rules node underneath the Dynamic Access Control node in the left pane. By default, this node is empty, so we'll create our own Central Access Rule.
Right-click Central Access Rules, select New and then select Central Access Rule from the context-menu:
We'll call the Access Rule Engineering Access. As targets we'll target all files classified with the Engineering department through the Edit… button in the Target Resources area. As Permissions we choose to Use following permissions as current permissions. We then add Permissions with the Edit… button in the Permissions area. While in the Advanced Security Settings for Permissions screen, click Add.
In the Permission Entry for Permissions screen, at the top, we select Authenticated Users as the security principal. Then, we create a condition with the business logic behind the access for Engineers: members of the Engineers group get read access to files and folders (resources) classified as Engineering. The above screenshot shows these choices.
Of course, for the members of the Engineering Managers group, we build a second Central Access Rule, where we grant them Modify rights, based on their Engineering Managers group membership and based on the department of their device.
Note:
Since we're basing authorization decisions on computer objects with the Engineering department attribute, make sure the right computers have this attribute configured.
Add Rules to a Central Access Policy
With the rule set in place, we can now create the Central Access Policy (CAP) that will utilize the rule set to make authorization decisions with a defined scope.
In the Active Directory Administrative Center (ADAC), we now open the Central Access Policies node underneath the Dynamic Access Control node in the left pane. By default, this node is empty, so we're making our own Central Access Policy. Right-click the Central Access Policies node, select New and then Central Access Policy from the context menu.
In our example environment, we'll name the Central Access Policy Engineering AuthZ and with the Add… button in the Member Central Access Rules area we add the Engineering Access Central Access Rule.
Deploy the CAP to File Servers using Group Policy
Using Group Policy, we will now scope the Central Access Policy. Open the Group Policy Management Console (GPMC) and navigate to the Organizational Unit (OU) containing the File Servers with Engineering data on them. Right-click the Organizational Unit and choose Create a GPO in this domain and Link it here…. In our example environment, we'll name the new Group Policy Engineering Access. Now, right-click the newly created Group Policy and select Edit… from the context menu.
In the Group Policy Management Editor, in the left pane, navigate to Computer Configuration, Policies, Windows Settings, Security Settings, File System and then right-click Central Access Policy to select Manage Central Access Policies… from the context menu.
Select the Engineering AuthZ Central Access Policy from the list on the left and click the Add > button to make it appear in the list on the right. When done, click OK.
Now you can close the Group Policy Management Editor. To update the Group Policies on the File Server, either wait for the Group Policy Background Refresh Interval, run gpupdate.exe on the console of the File Server, or force a Group Policy Refresh from the Group Policy Management Console.
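The remote refresh can be scripted, too. Here's a sketch with the GroupPolicy module's Invoke-GPUpdate cmdlet, using a hypothetical file server name:

Invoke-GPUpdate -Computer "FS01" -Force   # trigger a remote Group Policy refresh on FS01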
Tip!
Remote Group Policy Refresh is a new feature in the Group Policy Management Console (GPMC) that is part of Windows Server 2012 and part of the Remote Server Administration Tools (RSAT) for Windows 8.
Select the CAP to apply
When the Group Policy has applied, you can apply the Central Access Policy to the Engineering folder. Right-click the Engineering folder and select Properties. Click on the Security tab and then click on Advanced. In the Advanced Security Settings for Engineering screen, navigate to the Central Policy tab. On this tab, select the Engineering AuthZ as the Central Access Policy to apply. Click OK three times.
Requirements
Windows Server 2012-based Domain Controllers
Dynamic Access Control requires at least one Windows Server 2012-based Domain Controller.
Tip!
Certain Storage Area Network (SAN) manufacturers are working closely with Microsoft to enable their equipment for claims-based access control and Dynamic Access Control.
Make sure sufficient Windows Server 2012-based Domain Controllers are present to process client requests.
Note:
When Compound ID is used, Windows 8-based clients will only communicate with Windows Server 2012-based Domain Controllers. Compound ID is only available in Windows 8, not in previous versions of Windows.
Forest Functional Level
The Forest Functional Level needs to be at least Windows Server 2003.
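A quick way to verify this, assuming the ActiveDirectory module is available:

(Get-ADForest).ForestMode   # should report Windows2003Forest or higher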
Windows Server 2012-based File Servers
File Servers, where you want to use claims-based access control, also need to be running Windows Server 2012. On these servers the File Server Resource Manager Server Role needs to be installed. The following PowerShell one-liner can be used for this purpose:
Install-WindowsFeature FS-Resource-Manager -IncludeManagementTools
Since Dynamic Access Control uses Group Policies to manage File Servers, it's a good idea to create a separate Organizational Unit (OU) for File Servers as part of your Windows Server 2012 Active Directory design.
Backward compatibility
Dynamic Access Control works with previous versions of Windows as DAC clients. Windows 7 and Windows Server 2008 R2 have been thoroughly tested.
New features in Active Directory Domain Services in Windows Server 2012, Part 19: Offline Domain Join Improvements
With Windows 7 and Windows Server 2008 R2 Microsoft introduced a new Active Directory feature called Offline Domain Join (ODJ). This feature allows for clients to be joined to an Active Directory domain, without the need of having a direct connection to any of the Domain Controllers for the Active Directory domain.
Scenarios
Offline Domain Joins are useful when you want to join a computer to an Active Directory infrastructure without the need for direct communication between the client and a Domain Controller. Scenarios include:
- Deploying vast amounts of computers, without straining Domain Controllers in terms of bandwidth and processing, which might affect existing domain-joined computers. Also, in this scenario Offline Domain Join saves time.
- Deploying domain-joined computers to a branch office site where only Read-only Domain Controllers reside. (Read-only Domain Controllers are not suitable for clients joining the domain)
- Deploying domain-joined computers to users in remote locations (like homes) that from time to time require access to resources in an Active Directory environment and may not have a high enough quality connection to the Domain Controllers.
Although Offline Domain Join can be used to join computers to a domain without a direct connection to an Active Directory infrastructure, at some point a domain-joined computer needs to connect to a Domain Controller on a regular basis to remain a part of the Active Directory infrastructure. (Unless you want to spend the rest of the lifetime of the domain-joined computer feeding it offline domain join information…)
What's New
Offline Domain Join has been extended by allowing the blob to accommodate DirectAccess prerequisites:
- Root Certificates
- Certificates
- Group Policies
This means the number of scenarios increases. One of the main scenarios that now gets included is deploying domain-joined computers to DirectAccess users. The clients now have everything they need to successfully connect to the DirectAccess server(s).
Note:
A Graphical User Interface to perform Offline Domain Joins is not part of the improvements to Offline Domain Join in Windows 8 or Windows Server 2012.
Performing Offline Domain Joins
Let's look at the individual steps of the process:
Step 1
To kick off the Offline Domain Join, an administrator would log on to the Windows Server 2012-based Domain Controller. When logged on with an account with sufficient permissions and quota to create computer accounts, the administrator would provision the client on the Domain Controller itself with the following command:
djoin.exe /PROVISION /DOMAIN DomainName /MACHINE MachineName /SAVEFILE FileLocation
Here's an example:
djoin.exe /PROVISION /DOMAIN domain.local /MACHINE Win8-2 /SAVEFILE C:\ODJBlobs\Win8-2.b64
In situations where you want to include Root Certificates, Certificates or Group Policies in the blob, extend the command above with the following extra switches:
- /RootCACerts: include root certificates
- /CertTemplate: include certificate templates (includes their root certificates)
- /PolicyNames: include Group Policies
An example of such a command would be:
djoin.exe /PROVISION /DOMAIN domain.local /MACHINE Win8-3 /POLICYNAMES CompanyLookAndFeel /SAVEFILE C:\ODJBlobs\Win8-3.b64
Step 2
This command, when successful, creates the blob file in the location specified. This file, which can be given any file extension, is a Base64-encoded file containing all the information a client needs to join the Active Directory domain.
When you open the file and pull it through a base64-decoder the contents of the file become clear.
Inside the blob
Let's take a look at the Offline Domain Join blob we created in step 1 for Win8-3 in the domain.local domain:
This file doesn't really provide any insight into how Offline Domain Join works its magic but, as mentioned earlier, the Offline Domain Join (ODJ) blob is a Base64-encoded file. This means we can run it through one of the many available Base64 decoders, which results in output similar to the text file below.

As you can clearly see, the decoded file contains the information you'd suspect a client needs to join the domain: the DNS domain name (domain.local), the workstation NetBIOS name (Win8-3), the computer password, the NetBIOS domain name (DOMAIN), the name of the Domain Controller (DC01.domain.local), its IPv4 address (10.8.255.1) and the Active Directory site (Default-First-Site-Name). It also includes the policy settings in the CompanyLookAndFeel Group Policy.
The nature of the decoded file also warrants the security note placed in the Offline Domain Join (Djoin.exe) Step-by-Step Guide:
The base64-encoded metadata blob that is created by the provisioning command contains very sensitive data. It should be treated just as securely as a plaintext password. The blob contains the machine account password and other information about the domain, including the domain name, the name of a domain controller, the security ID (SID) of the domain, and so on. If the blob is being transported physically or over the network, care must be taken to transport it securely.
Step 3
After the Offline Domain Join blob gets transferred to the would-be client, a local administrator can join the computer to the domain in an offline fashion by typing the following command in an (elevated) command prompt:
djoin.exe /REQUESTODJ /LOADFILE FileLocation /WINDOWSPATH WindowsPath /LOCALOS
As an example, here's the command for Win8-3:
djoin.exe /REQUESTODJ /LOADFILE C:\Win8-3.b64 /WINDOWSPATH C:\Windows /LOCALOS
Note:
Alternatively, you can include the Offline Domain Join blob in an unattend.xml file.
After the client successfully works through the command, the would-be client reboots as a member of the Active Directory domain. On first contact between the client and the Active Directory domain, the client resets its computer account password.
Requirements
Operating system requirements
Offline Domain Join requires Djoin.exe. This command is available on domain-joinable editions of Windows 7 (Professional, Ultimate, Enterprise), domain-joinable editions of Windows 8 (Professional, Enterprise), Windows Server 2008 R2 and Windows Server 2012.
Note:
When you want to include root certificates, certificate templates and group policies, you will need to run djoin.exe on Windows 8 or Windows Server 2012.
If you want to use djoin.exe with Windows Server 2008-based Domain Controllers, use the /downlevel switch when provisioning.
Credential requirements
The djoin.exe command needs to be run by a user account with sufficient permissions to create computer accounts. By default, members of the Domain Admins group can create computer accounts. The user right to Add workstations to the domain can be set using Group Policy, or can be granularly delegated using Active Directory Delegation of Control.
New features in Active Directory Domain Services in Windows Server 2012, Part 18: DNTs Exposed
About DNTs
Distinguished Name Tags (DNTs) are integer columns, maintained by the Extensible Storage Engine (ESE) within the Active Directory database (ntds.dit). Domain Controllers use DNTs when they create objects, either locally or through replication. Each Domain Controller creates and maintains its own unique DNTs within its database when it creates objects. DNTs don't get re-used. DNTs are not shared or replicated between Domain Controllers. Instead, what gets replicated are the computed values known by Domain Controllers, such as SIDs, GUIDs and DNs.
The DNT Challenge
A Domain Controller is limited to creating a maximum of approximately 2 billion DNTs over its lifespan. To be exact, a maximum of 2^31 - 255 DNTs can be created, which amounts to 2,147,483,393 DNTs. Since DNTs don't get re-used, re-claimed or re-serialized, a Domain Controller faces its end when it reaches this limit. Since Windows Server 2012, this limit is suddenly in sight, because the maximum number of RIDs can now also grow to this limit.
When a Domain Controller hits the limit of maximum DNTs, the Domain Controller needs to be demoted and re-promoted "over the wire".
Note:
Note:
Domain Controllers that are installed with the Install from Media (IfM) option inherit the DNT values from the Domain Controller that was used to create the IfM backup.
Note:
Domain Controller Cloning under the hood uses Install from Media (IfM).
To expand on the DNT Challenge: in Windows Server it is hard to see the number of DNTs created. It's doable, but it requires dumping the database or programmatically interrogating it. These options are time-consuming and impact performance and disk space.
What's New
In Windows Server 2012, Active Directory admins can more easily see the number of DNTs created and face the DNT Challenge.
Investigating used DNTs
In Windows Server 2012, you can investigate the number of DNTs created through the built-in Performance Monitor. Follow these steps:
- Open Performance Monitor by running perfmon.exe.
- In the left pane expand Data Collector Sets.
- Right-click User Defined and select New… and Data Collector Set from the context menu.
- Give the data collector set a useful name and then select Create manually (Advanced) before clicking Next.
- Select to Create data logs and also select Performance counter. Click Next.
- Press the Add… button.
- In the left pane select NTDS and then select only the Approximate Highest DNT. Then click Add >> and OK.
Tip!
By default, <localhost> is selected in Performance Monitor. Of course, you could use one Data Collector Set to get the number of DNTs created on all Domain Controllers in the environment.
- Select an interval (for instance: every 24 hours) and click Finish.
- In the left pane of Performance Monitor, right-click your newly created Data Collector Set and choose Start from the context menu.
- Now, every time you want to know the amount of DNTs created on the Domain Controller, open Performance Monitor, in the left pane expand Reports, then expand User Defined and click on the name of the Data Collector Set. In the right pane, now select a report to view.
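For a quick spot check without building a Data Collector Set, the counter can also be read directly from PowerShell. The counter path below is my assumption based on the NTDS object and counter names shown above, and DC01 is a placeholder:

Get-Counter -Counter '\NTDS\Approximate Highest DNT' -ComputerName DC01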
New features in Active Directory Domain Services in Windows Server 2012, Part 17: LDAP Enhancements
Below is an architectural view of Active Directory client/server communication, showing LDAP as the linking pin:
What's New
Microsoft has enhanced the LDAP implementation in Active Directory Domain Services for Windows Server 2012.
Enhanced LDAP Logging
Logging is useful for troubleshooting. Active Directory admins can enable diagnostic logging for Active Directory in the registry through Active Directory Diagnostic Event Logging. This is strong stuff, but it is also exactly what you're craving in these kinds of situations.
As Microsoft KnowledgeBase article 314980 explains, you can turn on any of the Active Directory Diagnostic Logging categories, available in the registry in the following location:
HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics
Specifically, Microsoft has worked to enhance the level 5 LDAP logging of the 16 LDAP Interface Events registry DWORD value.
Enabling LDAP logging
To enable LDAP logging, perform the following steps:
- Start the Registry Editor by running regedit.exe.
- Locate and click the following registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics
- In the right pane of the Registry Editor, double-click the 16 LDAP Interface Events.
- Type 5 as the logging level in the Value data box, and then click OK.
- Close the Registry Editor.
This will record all events, including debug strings and configuration changes to the Directory Services log of Event Viewer.
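A scripted equivalent of these steps, for admins who would rather not click through the Registry Editor:

Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics' -Name '16 LDAP Interface Events' -Value 5 -Type DWord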
Enhancements
Many admins have given Microsoft precise feedback on scenarios where LDAP logging was getting them nowhere, because they were unable to isolate and/or diagnose the root causes of many behaviors and failures with the existing logging capabilities.
In Windows Server 2012, Active Directory Diagnostic Event Logging has been enhanced with additional logging that logs entry and exit stats for a given API. The entries themselves now log the operation name, the SID of the caller's context, the client IP, entry tick and client ID.
New LDAP Controls
The LDAP standard was designed with extensibility in mind. The standard allows the behavior of any operation to be modified through the use of controls. In addition to the repertoire of predefined operations, such as "search" and "modify," the LDAP v3 defines an extended operation. The pair of extended operation request/response is called an extension.
Microsoft has introduced seven new controls in LDAP for Active Directory Domain Services in Windows Server 2012:
Batched extended-LDAP operations (1.2.840.113556.1.4.2212)
This new LDAP Control modifies the default behavior to treat all operations within a given batch as a single transaction. When using this control, all operations in the batch will fail or all will succeed. This new control is particularly useful for programmatic schema extensions since the entire list of updates could be treated as a batch.
Require server-sorted search use index on sort attribute (1.2.840.113556.1.4.2207)
This LDAP control eliminates the need for a temporary table when performing a sorted search, thereby increasing scale possibilities. It is particularly good for large result sets where, in the past, sorted searches would have simply failed.
DirSync_EX_Control (1.2.840.113556.1.4.2090)
This LDAP control alters the traditional DirSync behavior and forces the return of specified unchanged attributes.
TreeDelete control with batch size (1.2.840.113556.1.4.2204)
In Windows Server 2008 R2 and earlier, the LDAP batch size is hard-coded with a limit of 16K. This new LDAP control exposes a mechanism to lower this hard-coded default, allowing the delete operation to declare its own batch size. This ensures deletions do not slow convergence beyond system tolerance.
Include ties in server-sorted search results (1.2.840.113556.1.4.2210)
This LDAP Control, also referred to as the "soft size limit", returns additional data on ties when performing sorted searches. Within the context of a sorted search, two objects are considered "tied" if their attribute values for the sorted attribute are the same, i.e. the objects are tied by virtue of the common value in the sort attribute (same place in the index).
Using this LDAP control, admins can page between sorted search results with large result sets, with a soft size limit to bound the page size, ensuring requests are distributed across Domain Controllers instead of being targeted at one single Domain Controller because it is the only one that knows where pages begin and end.
Return highest change stamp applied as part of an update (1.2.840.113556.1.4.2205)
This LDAP control is similar to the searchStats control in that, when checked in, it causes the result to contain additional data. This data is the invocationID and the highest USN allocated during the transaction (in BER-encoded format).
This LDAP control is useful for programmatically determining convergence between any two instances, immediately following an update.
Expected entry count (1.2.840.113556.1.4.2211)
This LDAP control requires a minimum and maximum value. This control is useful for uniqueness, in-use and/or absence checking. Of course, this new LDAP Control is very powerful when combined with the new Batched extended-LDAP operations LDAP Control.
New features in Active Directory Domain Services in Windows Server 2012, Part 16: Active Directory-based Activation
About volume activation
While Windows XP volume activation was straightforward, Microsoft felt its volume license product keys were being misused in less-than-legal situations; according to the Volume Activation 2.0 FAQ, "Volume License keys represent the majority of keys involved in Windows piracy". Back then, Windows XP volume license keys did not require activation by Microsoft-owned servers.
With Windows Vista, Microsoft introduced Volume Activation 2.0. Within this program, Microsoft made every customer either check every product key with Microsoft's hosted activation services (Multiple Activation Keys) or make a KMS host report on product key usage for an entire organization.
About KMS
Key Management Services (KMS) is an on-premises server-client model for volume activation. KMS clients use DNS to find KMS hosts and communicate with them using TCP port 1688 (by default). The choice of KMS host Operating System and the (in)availability of KMS host Windows Updates determines the Windows, Windows Server and/or Office activation possibilities.
In some environments end users were faced with warnings of unlicensed software usage due to inactivity (a KMS client needs to connect to the KMS every 180 days). In other environments the initial activation count to use KMS was not reached or hosts in perimeter networks (DMZs) and/or isolated networks were not allowed to contact the KMS host(s). In both these environments admins had to resort to using MAKs.
About MAK
Multiple Activation Keys (MAKs) are used for one-time activations with Microsoft. Each Multiple Activation Key an admin can punch in has a predetermined number of allowed activations. This number is based on the volume licensing agreement. Each activation using a MAK with Microsoft's hosted activation service counts towards the activation limit.
There are two ways to activate computers using MAK: MAK Independent and MAK Proxy activation. MAK Independent activation requires that each computer independently connect and activate with Microsoft, either over the Internet or by telephone. MAK Proxy activation enables a centralized activation request on behalf of multiple computers with one connection to Microsoft. MAK Proxies are useful for environments that do not maintain a (transparent) connection to the Internet. MAK Proxy activation is configured using the Volume Activation Management Tool (VAMT).
About VAMT
The Volume Activation Management Tool (VAMT) is a free tool that admins can download and use to centrally alter the volume activation method and product key for clients.
What's New
Windows Server 2012 introduces the concept of Active Directory-based Activation.
Automatic activation with domain membership
In environments with Active Directory-based Activation configured, when you join a Windows computer to an Active Directory domain, the Windows and/or Office installations on that computer will automatically activate. Activation is valid for 180 days. During this time the client should communicate with a Domain Controller at least once to renew the activation period.
When you remove a computer from the domain, the Windows and/or Office installations immediately get deactivated.
No activation threshold
Where KMS requires 25 (physical) Windows installations or 5 Windows Server installations to begin centrally activating, Active Directory-based Activation does not have a minimum number of clients or servers to activate.
No host maintenance needed anymore
Although KMS is the preferred volume activation method in environments with more than 25 (physical) clients or five servers, one of the downsides of KMS is the necessity for a KMS host. While the KMS host can coexist with any other Server Role, it adds extra management tasks.
With Active Directory-based Activation, no single physical computer is required to act as the activation host, because the activation information is stored in the ms-SPP-Activation object in Active Directory.
Automatic high availability and failover
Since Active Directory-based Activation uses Active Directory Domain Controllers for client-server activation communications, each (R/W) Domain Controller is an available activation host. As an admin you no longer need to manually configure secondary KMS Host DNS records (only the first KMS Host registers the DNS records).
No more dedicated KMS Port
Where KMS used TCP port 1688 (by default) for client-server communication, Active Directory-based Activation uses commonly used Active Directory client-server communication ports.
Works together with KMS
Environments with Active Directory-based Activation are not limited to using only Active Directory-based Activation to activate Windows, Windows Server and/or Office installations. KMS can also still be used on the same network. This is useful to activate previous versions of Windows, Windows Server and/or Office that do not support Active Directory-based Activation.
Windows 8 and Windows Server 2012 installations will first try to activate through Active Directory-based Activation (act-type 1). When unsuccessful, these installations will try to activate through Key Management Services (act-type 2) and, when unsuccessful again, will try token-based activation (act-type 3).
Enabling AD-based Activation
Installing AD-based Activation
Installing Active Directory-based Activation requires installing the Volume Activation Services Server role. This can be done in two ways:
Using Server Manager
To install the Volume Activation Services role using Server Manager perform the following steps:
- In Server Manager click on Manage in the top right.
- Click Add Roles and Features.
- Select Role-based or Feature-based installation in the Select installation type screen. Click Next >.
- Select a server to install the Volume Activation Services on from the server pool in the Select destination server screen. Click Next >.
- In the Select server roles screen, select Volume Activation Services.
- A pop-up appears with a selection of features that are required to manage the Windows Activation Services role. Click Add Features.
- In the Select server roles screen click Next >.
- In the Select features screen click Next >.
- Read the notes in the Volume Activation Services screen and click Next > when done.
- On the Confirm installation selections page, confirm that the information is correct, then click Install.
Using PowerShell
To install the Volume Activation Services, run the following PowerShell one-liner:
Install-WindowsFeature VolumeActivation -IncludeManagementTools
Configuring AD-based Activation
To configure Volume Activation Services, log on as an enterprise admin to the server on which you installed the Volume Activation Services Role. Then perform the following steps:
- Open Server Manager.
- Click on Tools in the upper right corner and then click Volume Activation Tools.
- In the Introduction to Volume Activation Services screen, click Next >.
- In the Select Volume Activation Method screen, select Active Directory-based Activation. Click Next >.
- Enter the KMS Host product key in the Manage Activation Objects screen. Optionally, specify a name for the Active Directory activation object. Click Next > when done.
- Choose to activate the key online or by phone in the Activate Product screen, and then click Commit.
- A pop-up appears, mentioning an Active Directory-based Activation object will be created. Click Yes.
- In the Activation Succeeded screen click Close.
This will create an activation object underneath CN=Activation Objects,CN=Microsoft SPP,CN=Services,CN=Configuration,DC=domain,DC=tld as shown below using ldp.exe:
After replication of the object and its attributes, Windows client installations, Office installations and Windows Server installations, covered by the KMS Host product key and configured with their corresponding setup keys, will activate automatically.
Tip!
Volume License (VL) downloads for Windows and Windows Server are automatically configured with their corresponding setup keys and thus require no changes for automatic activation by KMS or Active Directory-based Activation.
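Installations that were set up with another key type (retail or MAK) can be switched to their KMS client setup key by hand and then activated. Here's a sketch using the built-in slmgr.vbs script from an elevated command prompt; the key below is a placeholder for the product's published setup key:

slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
slmgr.vbs /ato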
Reporting on activated licenses
To report on activated licenses and thus present proof of license compliance, Volume Activation Management Tool (VAMT) version 3 can be used.
Note:
VAMT 3.0 can only be installed on Windows 7, Windows 8, Windows Server 2008 R2 and Windows Server 2012 installations, and it requires PowerShell 3.0.
Installing VAMT 3
To install Volume Activation Management Tool (VAMT) 3.0 begin by downloading the Windows Assessment and Deployment Kit (ADK) for Windows 8. When this 700KB download finishes, run it:
- In the Specify Location screen, click Next.
- Make the choice to participate or not in the Join the Customer Experience Improvement Program (CEIP) screen and click Next.
- Click on the Accept button in the License Agreement screen.
- Select to install the Volume Activation Management Tool (VAMT) and Microsoft SQL Server 2012 Express in the Select the features you want to install screen.
Tip!
Existing SQL installations can be used by the Volume Activation Management Tool. When your environment already features a SQL Server and you want to use this installation to host the VAMT database, you don't have to select the Microsoft SQL Server 2012 Express option. (In this case, specify the hostname of the SQL Server during step 2 of configuring VAMT 3.)
Click Install when done.
- The selected features will now be installed. After installation, the Welcome to the Assessment and Deployment Kit! screen will appear. Click Close.
Configuring clients for reporting
By default, the Volume Activation Management Tool (VAMT) will not be able to communicate with activation clients: it uses WMI, and that type of traffic is blocked by the Windows Firewall by default. To enable it, follow these steps:
- Log on to a Domain Controller, or a management workstation with the Remote Server Administration Tools (RSAT) installed, with sufficient permissions to create and/or modify group policies.
- Start the Group Policy Management Console (GPMC) by either typing its name in the Start Menu or Start Screen, or by running gpmc.msc.
- Either create a new Group Policy object and target it at the hosts you want to report on, or select an existing Group Policy object that already targets them. Right-click the Group Policy object and select Edit… from the context menu. This will launch the Group Policy Editor.
- Navigate to Computer Configuration, Policies, Windows Settings, Security Settings and then expand the Windows Firewall with Advanced Security node twice.
- Right-click Inbound Rules and select New Rule… from the context menu.
- In the Rule Type screen, select Predefined as the rule type and select Windows Management Instrumentation (WMI) from the pull-down list. Click Next >.
- In the Predefined Rules screen, select only the rule ending in (WMI-In) from the three predefined WMI rules, and click Next >.
- In the Action screen, select Allow the connection and click Finish.
- The rule will now be present in the Inbound Rules node of Windows Firewall with Advanced Security. Modify the rule if you want it to apply only when the machine running VAMT connects, or only to particular profiles (domain, private and/or public).
- Close the Group Policy Editor when done.
Wait for the Group Policy Background Refresh Interval to update the Group Policies on each of the targeted domain members (by default this may take up to 120 minutes) or use the central Group Policy Update… command to trigger updating of Group Policy objects by right-clicking the targeted Organizational Units (OUs) and selecting this option from the context menu.
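For a one-off machine, the same predefined rules can also be enabled locally with the NetSecurity module in Windows Server 2012, assuming the English display group name:

Enable-NetFirewallRule -DisplayGroup "Windows Management Instrumentation (WMI)"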
Configuring VAMT 3
With the Volume Activation Management Tool (VAMT) installed, it's time to start it up for the first time and configure its basic settings:
- Start the Volume Activation Management Tool by pressing Start and typing VAMT. Then, click the shortcut to the Volume Activation Management Tool in the results pane.
- Since this is the first time the Volume Activation Management Tool (VAMT) is started it displays the Database Connection Settings screen. Specify database settings and then click Connect. If unsure what to specify as settings, simply specify .\ADK as Server:, <Create new database> as Database: and give the new database a meaningful name. (in the example above I named the database VAMT30.)
- In the left pane of the Volume Activation Management Tool interface, right-click Products and select Discover products… from the context menu.
- In the Discover Products screen, Search for computers in Active Directory is selected by default.
Tip!
The Volume Activation Management Tool does not offer an option to filter on Organizational Units. To make the computer name filter in the Volume Activation Management Tool work, implement a useful naming convention.
Click Search when done.
- When discovery completes, the Volume Activation Management Tool will display the number of machines it found. Click OK.
- Select Products in the left pane of the Volume Activation Management Tool interface. In the list of products in the middle of the screen (multi)select the products you want to see the license status of. Right-click the selection and select Update license status and Current credential from the context menu.
- Click Close when done.
- In the Volume Activation Management Tool interface, now click the root node in the left pane. In the middle pane a license summary is shown:
You can drill down in this summary. Exporting the summary is available in the right action pane. In the Volume Licensing Reports folder in the left pane, more advanced reports are available.
Requirements
To use Active Directory-based Activation the following requirements need to be met:
- Active Directory-based Activation requires the Windows Server 2012 schema extensions. This means adprep.exe needs to have been run.
- Active Directory-based Activation requires a domain-joined Windows 8-based management workstation with the Windows Volume Activation Remote Server Administration Tools (RSAT) installed, or a domain-joined Windows Server 2012-based management server.
- When volume license reporting is needed, the management workstation/server needs to have the Volume Activation Management Tool (VAMT) version 3.0 installed. VAMT should be run with credentials that have administrative permissions to the Active Directory domain, and the KMS Host Key should be added to VAMT in the Product Keys node.
- When proxy activation is needed for isolated environments, the management workstation/server also needs the Volume Activation Management Tool (VAMT) version 3.0 installed. VAMT 3.0 is part of the Windows Assessment and Deployment Kit (ADK) for Windows 8. The management workstation/server will be the only machine that needs an Internet connection.
Note:
VAMT 3.0 can be installed on Windows 7, Windows 8, Windows Server 2008 R2 and Windows Server 2012, but requires PowerShell 3.0 and a connection to a SQL Server (Express) database.
- Active Directory-based Activation will not work for operating systems earlier than Windows Server 2012 or Windows 8. It also will not work with Microsoft Office 2010. Use KMS volume activation to activate Windows clients and applications that do not support Active Directory-based Activation.
- Supported clients will only communicate with (R/W) Domain Controllers. Activation through Read-only Domain Controllers is not possible. Make sure sufficient (R/W) Domain Controllers are available to clients in remote locations.