31 Mar 2013

Windows Server 2012 Remote Access

Windows Server 2012 brings us many new features and capabilities that make it a great remote access solution for businesses of all sizes. Microsoft's remote access has been evolving for a long time. In Windows Server 2008 R2, we saw the introduction of the DirectAccess server role, by which you could have all your domain-joined machines automatically connect to the corporate network without requiring users to establish a VPN connection. However, a problem with DirectAccess in Windows Server 2008 R2 was that it was primarily an enterprise solution because of its multiple requirements and complexity. SMBs were left out of the party. In addition, if you really wanted to deploy a full-featured DirectAccess solution, the Windows Server only version of DirectAccess wasn't very viable. Therefore, you needed to deploy DirectAccess using the Microsoft Unified Access Gateway or UAG – adding more complexity and expense.

With Microsoft's apparent lack of focus on TMG (on which UAG depends) and UAG itself in recent years, it didn't make much sense for them to continue requiring that DirectAccess depend on either of these server applications. The good news is that all the new and improved features that people have been wanting from DirectAccess are included with Windows Server 2012. Many of the features that were available only when you used UAG DirectAccess are now available in the out-of-the-box Windows Server DirectAccess solution. In addition, Windows Server 2012 DirectAccess has been reworked so that all businesses, large and small, can choose deployment options that fit their level of networking sophistication.

But the Windows Server 2012 remote access solution isn't only about DirectAccess. It simply brings DirectAccess into the remote access management fold and makes it an integrated part of the Windows Server 2012 routing and remote access solution. This means that the remote access VPN, site to site VPN and DirectAccess options are now all part of the new Remote Access Server role.

New features

Some exciting new features that you'll find in the Windows Server 2012 Remote Access Server Role include the following:
  • DirectAccess and RRAS management integration. You can now manage DirectAccess and the other VPN based remote access services from the same interface.
  • Simplified DirectAccess management for small and medium organizations. DirectAccess was a complex and unwieldy beast as implemented in Windows Server 2008 R2. Requirements for two consecutive public IPv4 addresses, IPv6 support and other prerequisites made it unrealistic for most small companies to deploy DirectAccess. These requirements have been removed in the Server 2012 version, and now anyone who is connected to the Internet through a machine that can allow inbound connections to TCP port 443 can benefit from DirectAccess.
  • Removal of PKI deployment as a DirectAccess prerequisite. The PKI and certificate configuration requirements for a Windows Server 2008 R2 DirectAccess deployment were complex and confusing – and they often led to long and difficult troubleshooting exercises. With Windows Server 2012 DirectAccess, you don't necessarily need certificates or PKI, thanks to the ability to use Kerberos constrained delegation.
  • Built-in NAT64 and DNS64 support for accessing IPv4-only resources. In the Windows Server 2008 R2 version of DirectAccess, you had to have an IPv6 capable intranet in order to get the most out of DirectAccess. This problem was fixed if you deployed UAG because it had the NAT64/DNS64 services, which removed the requirement that you have an IPv6 capable network on your intranet. With Windows Server 2012, these two key services are made part of the Windows Server 2012 platform, so you don't have to add any other product to get DirectAccess working on your IPv4 only network.
  • Support for DirectAccess server behind a NAT device. A major deployment blocker for both large and mid-sized businesses was the fact that you could not deploy a DirectAccess server behind a NAT device. Large enterprises required that the DirectAccess server be located behind a firewall, and mid-sized enterprises didn't have enough public IP addresses to go around. With the Windows Server 2012 DirectAccess solution, you can now put the DirectAccess server behind a NAT device – thus removing one of the most common deployment blockers for DirectAccess.
  • Simplified network security policy. The Windows Server 2008 R2 DirectAccess solution used a very complex set of security rules with IPsec to create multiple types of IPsec connections that served different purposes. This made DirectAccess difficult to deploy and even more difficult to troubleshoot when something went wrong. The Windows Server 2012 DirectAccess greatly simplifies the network security policies, which makes it easier to deploy and troubleshoot.
  • Load balancing support. High availability is a key requirement for any remote access solution. The reason for this is that often the remote access solution is going to be used heavily when there is some sort of event that prevents users from coming into the office. In that case, if the remote access solution is not highly available, users will not be able to get any work done that day, and of course that is a highly undesirable situation! In the Windows Server 2008 R2 DirectAccess solution, the out of the box support for high availability wasn't very compelling. In order to get genuine high availability with DirectAccess at that time, you had to deploy UAG. Now with the Windows Server 2012 DirectAccess solution, Network Load Balancing support for DirectAccess servers is made available to you right out of the box.
  • Support for multiple domains. While multiple domain support was available in the Windows Server 2008 R2 DirectAccess solution, it was difficult to get set up and was overall considered to be a somewhat "fragile" solution. In the Windows Server 2012 DirectAccess deployment, support for multiple domains is a lot easier to set up and troubleshoot.
  • NAP integration. Network Access Protection (NAP) is a type of network access control technology that requires each DirectAccess client computer to prove its security status before being allowed onto the network. If the DirectAccess clients fail the security check, they are not allowed on the network through DirectAccess. NAP integration with DirectAccess was not available in the Windows-only version of DirectAccess so that once again, you had to bring in UAG to get NAP support. Now with Windows Server 2012 DirectAccess, you get support for NAP right out of the box.
  • Support for OTP (token-based authentication). One-time password (OTP) support enables you to require users to present a one-time password to authenticate to the DirectAccess server. This was not previously available in the Windows Server only version of DirectAccess, so you had to bring in UAG to get OTP support (are you seeing a pattern here?). With Windows Server 2012, you get support for OTP right out of the box.
  • Automated support for force tunneling. Force tunneling is a configuration option in DirectAccess whereby you can force all network connections to go through the DirectAccess connection. If DirectAccess clients need to connect to corporate resources, those connections have to go through the DirectAccess tunnel. If DirectAccess clients want to connect to the Internet, then those requests also have to go through the DirectAccess tunnel. Force tunneling is the opposite of split tunneling, whereby you enable the clients to use the DirectAccess connection to connect to corporate resources and when the DirectAccess clients want to connect to Internet resources, they connect to them directly through whatever Internet connection they might already have. The split tunneling issue was a concern in the late 1990s and early-mid 2000s, but is much less of a concern today. The default configuration for DirectAccess in Windows Server 2008 R2 was to allow for split tunneling, and this led to a minor revolt among remote access admins. Therefore, guidance was created to enable force tunneling on the Windows-only version of DirectAccess, but that guidance was complex and difficult to follow. UAG made the process a lot simpler and that simplicity is another UAG feature that has now been integrated into the Windows Server 2012 DirectAccess implementation.
  • IP-HTTPS interoperability and performance improvements. IP-HTTPS is an IPv6 transition protocol that enables DirectAccess clients on an IPv4 Internet to connect to the DirectAccess server. The problem with the previous version of the IP-HTTPS protocol was that it carried IPsec connections over a TLS transport, which significantly increased the encryption overhead for IP-HTTPS connections. That is the reason some folks referred to IP-HTTPS as the IPv6 transition protocol of last resort. However, the IP-HTTPS protocol was the preferred method for administrators, since it only required that TCP port 443 be open from end to end between DirectAccess client and server. Microsoft realized this and made changes to the IP-HTTPS protocol so that it performs better and is easier to manage; notably, Windows 8 clients can negotiate null encryption for the TLS layer, which removes the double-encryption penalty.
  • Manage-out support. Manage-out refers to the ability to connect to DirectAccess clients from hosts on the corporate network. This makes it possible for Help Desk personnel to connect to DirectAccess clients so that they can make changes that are necessary. In addition, manage-out enables corporate IT to manage all the machines (OS updates, etc.) at all times, whether or not they are located on the corporate network. In fact, many IT organizations find that manage-out is the primary value of DirectAccess and enable only this feature, and do not enable inbound connections to the intranet through DirectAccess. Manage-out has been improved and simplified in the Windows Server 2012 DirectAccess solution.
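The NAT64/DNS64 support mentioned above rests on a simple address mapping: DNS64 synthesizes an IPv6 address for an IPv4-only host by embedding the IPv4 address in a NAT64 prefix, and NAT64 reverses the mapping on the wire. Here is a minimal Python sketch using the RFC 6052 well-known /96 prefix (a real DirectAccess deployment derives its own prefix; the function name is illustrative):

```python
import ipaddress

# RFC 6052 well-known NAT64 prefix; real deployments may use their own /96.
WKP = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize_nat64(ipv4: str, prefix: ipaddress.IPv6Network = WKP) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the low 32 bits of a /96 NAT64 prefix,
    the same mapping DNS64 applies when it fabricates AAAA records."""
    if prefix.prefixlen != 96:
        raise ValueError("this illustration only handles /96 prefixes")
    return ipaddress.IPv6Address(int(prefix.network_address) | int(ipaddress.IPv4Address(ipv4)))

print(synthesize_nat64("192.0.2.33"))  # 64:ff9b::c000:221
```

A DirectAccess client then talks to this synthesized IPv6 address, and the NAT64 service translates the packets back to IPv4 before they reach the intranet server.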
With the recent announcement about the discontinuation of the TMG firewall, the topic of Windows Server 2012 networking has become even more interesting to TMG firewall admins and Windows Network admins of all stripes. For a long time, the TMG firewall (like ISA Server before it) was a cornerstone of Windows Networking and provided the security and the flexibility you needed to make sure your remote access solution worked securely and reliably. Now that we know the TMG firewall will eventually be relegated to the dustbin of history (after extended support ends in 2020), it's time to start thinking more about what the Windows platform itself has to offer you when it comes to remote access.

As you saw in Part 1 of this article series, the focus for remote access in the Windows Server 2012 era is the new unified remote access solution, which includes support for remote access VPN client connections, site-to-site VPN connections and DirectAccess. Let's complete the discussion we started in Part 1, and then talk about the implications of the new features.

Multisite support

If you worked with the UAG DirectAccess solution, you know that one of the most difficult scenarios is the one that involves multiple sites. Standing up a single UAG DirectAccess server or array is relatively easy if you have all the supporting services in place in advance. The problems start when you try to set up two or more arrays at different sites, for a number of reasons: problems with group policy configuration, DNS issues, and IPv6 problems. A couple of years ago, Tom put together a solution that would enable you to stand up a two-site UAG DirectAccess deployment and was in the process of completing a proof of concept setup, but he was moved to another team within Microsoft and was never able to finish that work. However, Microsoft realized that enabling multi-site connectivity for DirectAccess was a key enterprise requirement, so in Windows Server 2012, there is now full support for multi-site DirectAccess deployments.

In Windows Server 2012, you can configure DirectAccess so that Windows 8 clients will be able to discover the closest DirectAccess server or array and connect to that. Or, you can configure the clients to use a preferred DirectAccess server or array, or you can let your users choose which DirectAccess server or array to use. For Windows 7 clients, you don't have this level of automation or flexibility, but users can still benefit from many of the same connectivity advantages if you choose to deploy an external global server load balancing (GSLB) solution. However, this is something you will need to configure outside of Windows, since Windows Server 2012 itself doesn't provide this feature.
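The entry-point selection described above can be thought of roughly like this: probe each site, honor the administrator's preferred site if it is reachable, and otherwise fall back to the lowest-latency site. This is a hypothetical sketch of the decision logic, not the actual Windows 8 client algorithm; the function name and the idea of feeding it pre-measured round-trip times are assumptions for illustration.

```python
def pick_entry_point(probe_rtts, preferred=None):
    """probe_rtts maps entry-point name -> round-trip time in ms,
    with None meaning the probe failed. Returns the preferred site
    if reachable, otherwise the lowest-latency reachable site."""
    reachable = {site: rtt for site, rtt in probe_rtts.items() if rtt is not None}
    if not reachable:
        raise RuntimeError("no DirectAccess entry point reachable")
    if preferred in reachable:
        return preferred
    return min(reachable, key=reachable.get)

# Dublin is down, so the client falls back to the closest surviving site.
print(pick_entry_point({"Dallas": 48, "Dublin": None, "Singapore": 190}))  # Dallas
```

Note how the same logic covers all three configurations the article mentions: automatic (no `preferred`), administrator-pinned (`preferred` set), and failover when the pinned site is unreachable.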

Support for Server Core

If you've worked with Server Core in the Windows Server 2008 R2 era, you probably think that you'd prefer almost any onerous task to working with a Server Core deployment. It was clear that Microsoft had not fully worked out the configuration issues with Server Core in the past, but with Windows Server 2012, a Server Core deployment is now actually manageable from a remote management workstation. The new RSAT tools and the Server Manager application that comes with Windows Server 2012 make it relatively easy to manage machines that are running services on a Server Core installation of Windows Server 2012.

There are a lot of advantages to running your DirectAccess servers on Server Core. The attack surface is much lower and you don't need to update the servers nearly as often. These are both great advantages when you need to provide a highly available service, such as the DirectAccess remote access solution. You can manage DirectAccess using remote PowerShell commands or you can use the new and improved Server Manager. Another nice thing about using a Server Core installation is that you can have more confidence putting the Windows Server 2012 DirectAccess server at the edge of the network, even though, unlike with the UAG DirectAccess solution, you're no longer required to place it there.

Windows PowerShell support

Love it or hate it, PowerShell is here to stay. There are a number of reasons that Microsoft is making such a large investment in PowerShell and so if you want to continue being a Microsoft administrator, you'll probably have to learn at least some PowerShell eventually. Thus, the new DirectAccess solution in Windows Server 2012 includes full support for PowerShell, so that you will be able to do whatever it is that people do with PowerShell :).

User monitoring

All remote access solutions must support robust monitoring of who is connecting to the network and what they're doing while they're there. While the Windows Server 2012 iteration of DirectAccess doesn't provide nearly the level of detail you are able to get with the TMG firewall, it does have a much better level of monitoring than you could get with the UAG DirectAccess solution.

In Windows Server 2012, there is a new monitoring console that provides you with a great deal of useful information about the users who are connecting through the DirectAccess connection. The following is a list of what is exposed in the monitoring console, which is information that is obtained from Performance Monitor counters:
  • Total number of active remote clients connected: this includes both DirectAccess and VPN remote clients. It lets you know how many concurrent connections there are for both DirectAccess and VPN connections, which is helpful in determining what your max is going to be, given your level of hardware.
  • Total number of active DirectAccess clients connected: this shows only the total number of clients connected using DirectAccess. This is good if you want to know only who's connected via DirectAccess.
  • Total number of active VPN clients connected: This shows only the total number of clients connected using a VPN. As with the previous counter, this is good if you want to know only the VPN connections.
  • Total unique users connected: This includes both DirectAccess and VPN users, based on the active connections. I'm not sure how this is different from the total number of active remote clients connected; perhaps a single user with more than one connection is counted only once.
  • Total number of cumulative connections: This shows the total number of connections that have been serviced by the Remote Access Server since the last server restart. This might be useful for reporting purposes or maybe for MCSE cocktail parties, when you want to brag about how many connections you supported before having to restart.
  • Maximum number of remote clients connected: This shows the maximum number of concurrent remote users connected to the Remote Access Server since the last server restart. This is useful information for determining whether you can increase the number by adding more clients to the server or array in an attempt to make it topple over.
  • Total data transferred: This is the total amount of inbound and outbound traffic passing through the Remote Access Server for both DirectAccess and VPN since the last server restart. It's interesting information, especially if you have a metered bandwidth service and need to figure out trends in usage so that you can plan for upgrading your ISP service.
    1. Inbound traffic – total bytes/traffic into the remote access server/gateway
    2. Outbound traffic – total bytes/traffic out of the remote access server/gateway

You can find information on individual users and discover what resources they're trying to connect to on the internal network. You can also filter the list of connected users so that you can easily find the one(s) that you're interested in. In addition, you can filter on one or more of the following fields:

  • Username – The user name or alias of the remote user. Wildcards may be used to specify a group of users.
  • Hostname – The computer account name of the remote user. An IPv4 or IPv6 address can be specified as well.
  • Type – Either DirectAccess or VPN. If DirectAccess is selected, all remote users connecting using DirectAccess are listed; if VPN is selected, all remote users connecting using VPN are listed.
  • ISP address – The IPv4 or IPv6 address of the remote user.
  • IPv4 address – The inner IPv4 address of the tunnel connecting the remote user to the corporate network.
  • IPv6 address – The inner IPv6 address of the tunnel connecting the remote user to the corporate network.
  • Protocol/Tunnel – The IPv6 transition technology used by the remote client (Teredo, 6to4 or IP-HTTPS for DirectAccess users; PPTP, L2TP, SSTP or IKEv2 for VPN users).
  • Resource Accessed – All users accessing a particular corporate resource or endpoint. The value for this field is the hostname/IP address of the server/endpoint.
  • Server – The Remote Access server to which clients are connected. This is relevant only for cluster and multisite deployments.
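To illustrate the kind of wildcard filtering the console performs on these fields, here is a small Python sketch. The record layout and field names mirror the list above, but `filter_connections` and the sample data are purely illustrative, not a real API.

```python
from fnmatch import fnmatch

# Hypothetical sample of connection records, keyed by the console's field names.
connections = [
    {"Username": "CONTOSO\\khill",    "Type": "DirectAccess", "Server": "DA1"},
    {"Username": "CONTOSO\\tshinder", "Type": "VPN",          "Server": "DA2"},
    {"Username": "CONTOSO\\kdalton",  "Type": "DirectAccess", "Server": "DA1"},
]

def filter_connections(records, **criteria):
    """Keep records whose fields match every criterion; values may
    contain * and ? wildcards, mirroring the console's filtering."""
    return [r for r in records
            if all(fnmatch(str(r.get(field, "")), pattern)
                   for field, pattern in criteria.items())]

# Everyone whose alias starts with "k" and who is connected via DirectAccess.
for r in filter_connections(connections, Username="CONTOSO\\k*", Type="DirectAccess"):
    print(r["Username"])
```

Combining several criteria narrows the result set, just as stacking filters in the monitoring console does.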


Server monitoring

If you've worked with DirectAccess in the past, you know that there are a number of services and technologies that comprise the entire working solution. You also know that it's hard to keep track of all of them! In Windows Server 2012, there is a server and service monitor console that helps you figure out whether everything is as it should be, and if it's not, it tries to tell you what is wrong.

In Windows Server 2012, there are status monitors for the following services:

    - 6to4
    - DNS
    - DNS64
    - Domain controller
    - IP-HTTPS
    - IPsec
    - ISATAP
    - Kerberos
    - Management Servers
    - NAT64
    - Network Adapters
    - Network Location Server
    - Network Security (IPsec DoSP)
    - Services
    - Teredo
    - Load Balancing
    - VPN addressing
    - VPN connectivity

Diagnostics

Troubleshooting DirectAccess has always been both an art and a science. The art came from needing the experience with DirectAccess in order to really understand where to start when it came to troubleshooting. Now with Windows Server 2012, we have new capabilities that will make troubleshooting easier. These include:

    - Detailed event logging for DirectAccess
    - Improved tracing and packet capture using a single click
    - Log correlation, which makes it easier to interpret events in the trace file
    - Flexible viewing options for logs

Site-to-site IKEv2 IPsec tunnel mode VPN

Support for IKEv2 on site-to-site VPNs is now available. IKEv2 was available for remote access VPN client connections in the past and has now been extended to site-to-site VPN connectivity.

QoS in Windows Server 2012

As organizations begin to depend more heavily on cloud services, network bandwidth management becomes even more critical. Thankfully, bandwidth can be managed through a Windows component known as Quality of Service (QoS). In this article, I will explain what QoS is, how it works, and what you need to know about using QoS in Windows Server 2012.
An Introduction to QoS

Although the main focus of this article series will be on using QoS with Windows Server 2012, there are two important things that you need to know right off the bat. First of all, QoS is not new to Windows Server 2012. Microsoft first introduced QoS over a decade ago when it debuted in Windows 2000. Of course, Windows support for QoS has been modernized in Windows Server 2012.

The other thing that I want to clear up right away is the notion that QoS is a Microsoft technology. Even though QoS is built into the Windows operating system (and has been for quite some time), it is an industry standard rather than a Microsoft technology. Microsoft has a long history of including industry standards in Windows. For example, IPv4 and IPv6 are also industry-standard networking protocols that are included in the Windows operating system.

So with that out of the way, I want to go ahead and talk about what QoS is. In order to really understand what QoS is and why it is important, you have to consider the nature of networks in general. Without a mechanism such as QoS in place, most networks use what is known as best effort delivery. In other words, when a computer sends a packet to another computer, the two machines and the networking hardware between them will make an earnest attempt to deliver the packet. Even so, delivery is not guaranteed. Even if the packet does make it to its destination, there are no guarantees as to how quickly it will get there.

Oftentimes, delivery speed is based on network speed. For example, if a packet is being sent between two PCs that reside on the same gigabit network segment, then the packet will most likely be delivered very quickly. However, this is anything but a guarantee. If anything, the network speed (1 gigabit in this example) can be thought of as an unbreakable speed limit rather than a guarantee of fast delivery. Of course, having a fast network certainly increases the chances that a packet will be delivered quickly, but there are no guarantees. An application that consumes a lot of bandwidth has the potential to degrade performance for every other device on the network.

This is where QoS comes into play. QoS is essentially a set of standards that are based on the concept of bandwidth reservation. What this all boils down to is that network administrators are able to reserve network bandwidth for mission critical applications so that those applications can send and receive network packets in a reasonable amount of time.

It is important to understand that although QoS is implemented through the Windows operating system, the operating system is not the only component that is involved in the bandwidth reservation process. In order for QoS to function properly, each network device that is involved in communications between two hosts (including the hosts themselves) must be QoS aware. This can include network adapters, switches, routers, and other networking hardware such as bridges and gateways. If the traffic passes through a device that is not QoS aware, then the traffic is dealt with on a first-come, first-served basis just like any other type of traffic would be.

Obviously not every type of networking supports QoS, but Ethernet and Wireless Ethernet do offer QoS support (although not every Ethernet device is QoS aware). One of the best networking types for use with QoS is Asynchronous Transfer Mode (ATM). The reason why ATM works so well with QoS is that it offers connection-oriented connectivity. When QoS is used, ATM can enforce the bandwidth requirements at the hardware level.

Before moving on, I want to clear up what might seem like a contradiction. When I talked about Ethernet, I said that Ethernet supports QoS, but that the underlying hardware must be QoS aware. Even so, Ethernet does not enforce QoS at the hardware level the way that ATM does. So what gives?

The reason why Ethernet does not enforce QoS at the hardware level is that Ethernet is a very old networking technology that has been retrofitted many times over the last couple of decades. The concept of bandwidth reservation did not exist when Ethernet was created, and bandwidth reservation at the hardware level just does not work with the existing Ethernet standard. That being the case, QoS is implemented at a higher level in the OSI model. The hardware does not perform true bandwidth reservation, but rather emulates bandwidth reservation through traffic prioritization based on the instructions provided by QoS.
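In practice, that higher-layer prioritization is driven by DSCP markings in the IP header, which QoS-aware switches and routers use to decide which queue a packet belongs in. As a rough application-level illustration (Windows QoS policies normally apply these markings centrally through policy rather than per socket, so treat this as a sketch of the mechanism, not of Windows' implementation):

```python
import socket

DSCP_EF = 46  # Expedited Forwarding, a standard code point for latency-sensitive traffic

def mark_socket(sock: socket.socket, dscp: int) -> None:
    """Write a DSCP code point into the IP header's traffic-class byte.
    DSCP occupies the top 6 bits, so shift left past the 2 ECN bits."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mark_socket(s, DSCP_EF)
print(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184, i.e. 46 << 2
s.close()
```

Every packet sent on the marked socket carries the code point, and any QoS-aware device along the path can prioritize it; devices that ignore DSCP simply forward it best-effort, which is exactly the behavior the article describes.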

Additional Considerations

Although I have given you an overview of what is required for implementing QoS, there are a few other considerations that should be taken into account. For starters, Windows Server 2012 does not impose any bandwidth requirements that would keep you from using QoS in certain situations. Even so, Microsoft states that QoS works best on 1 gigabit and 10 gigabit network adapters.

Presumably the main reason behind Microsoft's statement is that adapters that operate at speeds below a gigabit simply do not provide enough bandwidth to make bandwidth reservation worthwhile.

I might be reading too much into Microsoft's recommendation, but there is something that I just can't help but notice. Microsoft said that QoS works best on 1 gigabit or 10 gigabit adapters – not connections. Although this might at first seem trivial, I think that Microsoft's wording is deliberate.

One of the new features in Windows Server 2012 is NIC teaming. NIC teaming allows multiple network adapters to work together as one in order to provide higher overall throughput and resilience against NIC failure. I have not seen any official word as to whether or not NIC teaming will work with QoS, but I would be very surprised if Microsoft did not allow the two features to be used together.

One last thing that I want to quickly mention about QoS is that it is designed for traffic management on physical networks. As such, Microsoft recommends that you avoid using QoS from within a virtual server. However, QoS can be used on a physical server that is acting as a virtualization host.

I explained that Quality of Service (QoS) is a networking standard, and that Microsoft has offered QoS support within the Windows operating system since Windows 2000. That being the case, it is easy to dismiss Windows Server 2012's support for QoS as being nothing more than a legacy feature that is still being supported. However, QoS has evolved to meet today's bandwidth reservation related needs.

Legacy Bandwidth Management

In order to truly appreciate how QoS has been improved in Windows Server 2012, you have to understand some of the QoS limitations in previous versions of the Windows Server operating system. In the case of Windows Server 2008 R2, QoS could only be used to enforce maximum bandwidth consumption. This type of bandwidth management is also sometimes referred to as rate limiting.

With careful planning it was often possible to achieve effective bandwidth management even in Windows Server 2008 R2. However, in the case of Hyper-V it was impossible to achieve granular bandwidth management for an individual virtual machine.
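Rate limiting of the kind Windows Server 2008 R2 offered is commonly modeled as a token bucket: tokens accrue at the permitted rate up to a burst ceiling, and traffic is admitted only while tokens remain. The sketch below is an illustrative model of the concept, not how the Windows QoS packet scheduler is actually implemented.

```python
class TokenBucket:
    """Minimal token-bucket rate limiter: `rate` tokens (bytes) accrue per
    second up to `burst`; a packet is admitted only if enough tokens remain."""
    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def allow(self, nbytes: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

bucket = TokenBucket(rate=1000, burst=1500)   # ~1 KB/s with a 1500-byte burst
print(bucket.allow(1500, now=0.0))  # True: the burst credit covers it
print(bucket.allow(1500, now=0.5))  # False: only ~500 tokens have accrued
print(bucket.allow(1500, now=2.0))  # True: the bucket has refilled
```

The key property is the one the article criticizes: once the bucket is empty, a sender is held to the configured rate even when the link has spare capacity.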

Granular Bandwidth Management

The reason why granular bandwidth management is so important within a virtual datacenter is that virtual machines produce at least four different types of traffic. Limiting bandwidth consumption for all four types of network traffic in a consistent way can sometimes be counterproductive.

To show you what I mean, here are the four main types of network traffic that can be produced by virtual machines in a Hyper-V environment:
  • Normal network traffic – This is network traffic that flows between the virtual machine and other servers or workstations on the network. These machines can be both physical and virtual.
  • Storage traffic – This is the traffic that is generated when virtual hard disk files reside on networked storage rather than directly on the host server that is running the virtual machine.
  • Live migration traffic – This is the traffic that is created by the live migration process. It typically involves storage traffic and traffic between two host servers.
  • Cluster traffic – There are several different forms of cluster traffic. Cluster traffic can be the traffic between a cluster node and a cluster shared volume (which is very similar to storage traffic). It can also be inter-node communications such as heartbeat traffic.

The point is that network traffic within a virtual datacenter can be quite diverse. Because of this, the type of bandwidth management provided by QoS in Windows Server 2008 R2 simply does not lend itself well to virtual datacenters.

There are two reasons why the concept of bandwidth rate limiting doesn't work so well for virtual machines. For one thing, limiting a virtual machine to a certain amount of bandwidth might lead to unnecessary performance problems. Suppose, for instance, that a host server had a 10 gigabit connection and you limited a particular virtual machine to consuming 1 gigabit of bandwidth. By doing so, you could prevent the virtual machine from robbing bandwidth from other virtual machines, but you would also prevent the virtual machine from using surplus bandwidth. Imagine, for instance, that at a given point in time there were seven gigabits of available bandwidth; the virtual machine would still be held to one gigabit, even though the extra bandwidth could have been provided at that moment without taking anything away from other virtual machines.

Of course the opposite is also true. Without proper planning, limiting bandwidth can lead to bandwidth deprivation for specific virtual machines. Suppose for example that a host server is running twelve virtual machines and that those virtual machines all share a single, ten gigabit network adapter. Now let's suppose that you were to configure each virtual machine so that it can never consume more than 1 gigabit of network bandwidth.

Given the fact that the host server is running twelve virtual machines, the server's bandwidth has actually been over committed at that point. During a period of high demand, each virtual machine will try to use up to 1 gigabit of network bandwidth. Because the physical hardware cannot provide a full twelve gigabits of bandwidth, some of the virtual machines could end up suffering from poor performance because they are unable to get the bandwidth that they need.
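The arithmetic behind that example is worth making explicit: twelve 1-gigabit caps against a 10-gigabit adapter oversubscribe the link by a factor of 1.2, so under full demand the caps alone cannot guarantee every VM its limit. A one-line check (the function name is just for illustration):

```python
def oversubscription(link_gbps: float, caps_gbps: list) -> float:
    """Ratio of the summed per-VM caps to the physical link capacity;
    a value above 1.0 means the caps cannot all be honored under load."""
    return sum(caps_gbps) / link_gbps

ratio = oversubscription(10, [1.0] * 12)   # twelve VMs, 1 Gbps cap each
print(f"{ratio:.1f}x oversubscribed")      # 1.2x oversubscribed
```

Any ratio above 1.0 means some VMs will be starved during peak demand, which is exactly the failure mode described above.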

QoS in Windows Server 2012

As I previously explained, the Windows Server 2008 R2 implementation of QoS isn't exactly a bandwidth reservation system (even though QoS is technically a bandwidth reservation protocol). Instead, it can be thought of more as a bandwidth throttling solution. In other words, Windows Server 2008 R2's QoS implementation allows an administrator to dictate the maximum amount of bandwidth that a virtual machine can consume. This is similar to the technology that Internet Service Providers (ISPs) use to offer various rate plans. For example, my own ISP offers a 7 megabit, a 10 megabit, and a 15 megabit package. The more you pay, the faster the Internet connection you get.

Even though the concept of bandwidth throttling still exists in Windows Server 2012, Microsoft is also introducing a concept known as minimum bandwidth. Minimum bandwidth is a bandwidth reservation technology that makes it possible to ensure that various types of network traffic always receive the bandwidth that they need. This is really what QoS was designed for in the first place.

Obviously the biggest benefit to using this approach is that the concept of minimum bandwidth makes it possible to reserve bandwidth in a way that ensures that each virtual machine receives enough bandwidth to do its job. However, that is not the only benefit.

A second benefit is that Windows Server 2012 will make it possible to differentiate between the various types of network traffic that are produced by virtual machines. For example, an administrator could theoretically reserve more bandwidth for storage traffic than for regular virtual machine traffic.

Arguably the greatest benefit however, is that minimum bandwidth reservations are different from bandwidth caps. Although it is still possible (and sometimes necessary) to set bandwidth caps, minimum bandwidth settings do not cap bandwidth consumption.

Let's assume for example that you wanted to reserve 30% of your network bandwidth for virtual machine traffic, and the remaining 70% of bandwidth for things like live migration and storage traffic. If you don't have any live migrations happening at the moment then you might not need any bandwidth for live migrations at all. It would be silly to lock up that bandwidth to prevent it from being used for other types of network traffic.

In this type of situation, the virtual machine traffic receives the 30% of the network bandwidth that has been reserved for it. If the virtual machine traffic could benefit from additional bandwidth at the moment and bandwidth is not presently being consumed by the other services that hold a reservation then that bandwidth is made available to virtual machine traffic until it is needed by one of the other traffic types in order to fulfill the minimum bandwidth reservation. Of course I am only using virtual machine traffic as an example. The concept applies to any type of traffic.
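The borrow-and-return behavior described above can be sketched as a simple allocation function. This is a conceptual model of minimum-bandwidth sharing, not the actual algorithm Windows Server 2012 uses; the traffic classes and numbers are made up.

```python
# Conceptual sketch of minimum-bandwidth sharing (not the actual Windows
# QoS algorithm). Each class gets up to its reserved share, then idle
# reservations are redistributed to classes that can still use more.

def allocate(link_gbps, reservations_pct, demands_gbps):
    # Step 1: each class receives its demand, capped at its reservation.
    alloc = {}
    for cls, pct in reservations_pct.items():
        alloc[cls] = min(demands_gbps.get(cls, 0), link_gbps * pct // 100)
    # Step 2: spare capacity goes to classes that still want more.
    leftover = link_gbps - sum(alloc.values())
    for cls in alloc:
        extra = min(demands_gbps.get(cls, 0) - alloc[cls], leftover)
        if extra > 0:
            alloc[cls] += extra
            leftover -= extra
    return alloc

# 10 Gbps link; VM traffic reserved 30%, migration/storage reserved 70%.
# No live migration is running, so VM traffic borrows the idle share.
result = allocate(10, {"vm": 30, "migration": 70}, {"vm": 8, "migration": 0})
print(result["vm"], result["migration"])  # 8 0
```

If migration traffic later demands its reserved 70%, the same function hands the borrowed bandwidth back, which is exactly the behavior described in the paragraph above.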

Policy Based QoS

In Windows Server 2012, QoS is implemented through the use of group policy settings. Microsoft refers to this as Policy Based QoS. You can access the QoS portion of the Group Policy Editor by navigating through the Group Policy Editor's console tree to Computer Configuration | Windows Settings | Policy Based QoS.

It is worth noting that a QoS policy can be created at either the computer level or the user level (or both). It is generally preferable to implement QoS policies at the computer level.

Creating a QoS Policy

You can create a new QoS policy by right clicking on the Policy Based QoS container and selecting the Create New Policy command from the shortcut menu. When you do, Windows will launch the Policy Based QoS Wizard.

The next page of the wizard gives you the opportunity to specify a DSCP value. DSCP stands for Differentiated Services Code Point. In spite of its rather cryptic-sounding name, the DSCP value's job is actually quite simple: the value that you assign here designates the policy's traffic priority.

You might have noticed that the DSCP field has a default value of zero. The DSCP can be set to a value ranging from zero to 63. The higher the value, the higher the traffic priority. Therefore, a default QoS policy has the lowest possible priority.

When you assign a DSCP value to a QoS policy, you are essentially creating a queue for outbound network traffic. By default the traffic passing through the queue is not throttled. QoS only limits the traffic when bandwidth contention becomes an issue. In those types of situations lower priority queues yield to higher priority queues.
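For a sense of what a DSCP marking looks like on the wire: the DSCP value occupies the upper six bits of the IP header's TOS/Traffic Class byte. The Python sketch below tags a UDP socket with DSCP 46 (the conventional Expedited Forwarding marking) on platforms that expose the IP_TOS socket option; it simply illustrates the byte layout and is not a substitute for a Policy Based QoS policy.

```python
import socket

# Sketch only: the DSCP value is the upper six bits of the TOS/Traffic
# Class byte, so the byte on the wire is dscp << 2. DSCP 46 is the
# conventional "Expedited Forwarding" marking. IP_TOS is exposed on most
# platforms, though Windows normally expects QoS policies to do this.

EF_DSCP = 46
assert 0 <= EF_DSCP <= 63            # DSCP values range from 0 to 63

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)
tos = s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
s.close()
print(tos >> 2)  # 46
```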

In some situations it is possible that a high priority queue could choke out a lower priority queue if a large amount of traffic passes through the higher priority queue. To guard against this, the wizard also allows you to specify a throttle rate for the policy. Doing so implements a bandwidth cap that prevents the queue from consuming an excessive amount of bandwidth. The throttle rate can be specified in terms of either kilobits per second or megabits per second.

When you click Next, you are given the opportunity to specify the traffic stream to which the QoS policy should apply. Rather than requiring you to identify traffic by TCP/IP port numbers, the QoS policy is designed to be bound to specific applications.

If you want to bind the QoS policy to a specific application then all you have to do is specify the name of the application's executable. In some cases however, there might be multiple applications on a system that use duplicate executable file names even though the applications themselves are different. In those types of situations you can specify the path to the application. If a path is required then you should use environment variables (such as %ProgramFiles%) whenever possible.

Your other option for binding the new QoS policy to a traffic stream is to use the policy to regulate HTTP traffic. In doing so, you must specify a specific URL or domain. That way the QoS policy will only regulate traffic for that specific site or Web application rather than applying to all HTTP traffic. Of course if your goal is to put a bandwidth cap on Web browsing then you always have the option of binding the policy to Internet Explorer.

Often times QoS policies need to have a granular scope. Imagine for example that your goal was to regulate the traffic produced by applications on one specific server. The problem with doing so is that QoS policies are really nothing more than group policy settings. Therefore, if you were to simply create a QoS policy within the Default Domain Policy then that policy would apply to all of the computers (or users) in the domain.

You could resolve this problem by segmenting your Active Directory into a series of organizational units, but that gets complicated and can be difficult to manage.

Fortunately, the wizard gives you a better option: you can scope the policy by IP address. Specifying a source address effectively limits the policy so that it only applies to the specified source computer. As an alternative, you can specify an IP address range so that the policy will apply to a specific subnet rather than to an individual computer.

The destination option allows you to apply the policy only to traffic that is destined for a specific computer or a specific subnet. If you need even tighter granular control then you have the option of specifying both a source and a destination address so that only the traffic flowing between designated hosts is regulated.
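The scoping rule amounts to simple subnet membership. As a sketch (with made-up addresses, and plain Python rather than anything Windows-specific), a policy bound to a subnet applies only to traffic whose address falls inside that subnet:

```python
import ipaddress

# Sketch of the scoping rule: a policy bound to a subnet applies only
# to traffic whose address falls inside that subnet. The subnet and
# addresses below are made up for illustration.

policy_scope = ipaddress.ip_network("192.168.10.0/24")

def policy_applies(address):
    return ipaddress.ip_address(address) in policy_scope

print(policy_applies("192.168.10.42"))  # True  (inside the subnet)
print(policy_applies("192.168.11.42"))  # False (different subnet)
```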

The wizard's final page gives you tighter control over the types of traffic that are regulated. Here you can specify TCP and UDP port numbers for both the source and destination. You can use this as an alternative to specifying an application to which the policy should be bound.

28 Mar 2013

How to fix DNS servers on a Windows network

DNS server failures are some of the most serious types of failures that can occur on a Windows network. If DNS is not working, then the Active Directory will not work either. Furthermore, users may not be able to access resources on the local network or the Internet. If your clients experience these types of problems, they will most likely call on you for help.

Here are some simple techniques for troubleshooting a DNS server failure.

Is the DNS server really to blame?

I have fixed a number of DNS problems over the years and very few have actually been related to failures on the DNS server. More often than not, the problem existed on the machine that was trying to perform the DNS query, rather than on the DNS server itself. Fortunately, there are some quick tests that you can use to narrow down the problem.

First, confirm the DNS server's IP address and that the DNS server service is running. Once you verify these two things, you can get started with the process of troubleshooting the DNS server failure.

I like to start out by making sure that the client machine is pointed to the correct DNS server. The easiest way to do this is to open a command prompt window and enter the following command:

IPCONFIG /ALL

This command will list the computer's TCP/IP configuration.

You can get the same information through the computer's network configuration screens, but I prefer to use this method because I have run into a couple of instances where the information that Windows showed did not match the configuration that Windows was actually using.

Upon displaying the machine's TCP/IP configuration, verify that the computer is pointed to the correct DNS server. For example, if you look at Figure A, you can see that my computer is pointed to a DNS server with an IP address of 147.100.100.34.

Verify that the machine's TCP/IP is configured to use the correct DNS server.

Assuming that the configuration is correct, the next thing I recommend doing is pinging the DNS server. This will verify that the client's machine is actually able to communicate with the DNS server. Keep in mind, though, that if the DNS server's firewall is configured to block ICMP traffic then the ping will not be successful.

Once you have verified that the client can communicate with the DNS server, it's time to see if the DNS server is able to resolve host names. The easiest way to do this is to use the NSLOOKUP command to resolve a familiar host name whose IP address you already know. For example, I know that my website uses the IP address 24.235.10.4. Therefore, if I run the NSLOOKUP command against www.brienposey.com, my DNS server should resolve www.brienposey.com to 24.235.10.4, as shown in Figure B.

NSLOOKUP verified the IP address of my website.

One more important thing to notice in Figure B is that Windows also verifies the IP address of the DNS server that was used to resolve the domain name. This IP address should match the one that is shown in Figure A.
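If you'd rather script this check than run NSLOOKUP by hand, the same resolution test can be performed with a few lines of Python. The snippet resolves localhost so that it works without network access; substitute the host name you are troubleshooting.

```python
import socket

# Scripted version of the NSLOOKUP check. "localhost" is used so the
# example works without network access; substitute the host name you
# are troubleshooting (e.g. www.brienposey.com).

def resolve(host):
    # Return the distinct IPv4 addresses the resolver hands back.
    infos = socket.getaddrinfo(host, None, family=socket.AF_INET)
    return sorted({info[4][0] for info in infos})

print(resolve("localhost"))  # ['127.0.0.1'] on a typical machine
```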

What happens if the NSLOOKUP command returns an incorrect IP address for the target domain? Well, there are a couple of things that could have happened. One possibility is that the domain's IP address has changed, but the change has not yet been replicated to the DNS server. Another possibility is that malware has modified the contents of the DNS cache. Once Windows has resolved a domain name to an IP address, the name resolution is cached and kept on hand for a while so that Windows does not have to repeat the query each time the domain name needs to be used. If there is an invalid entry in the cache, then Windows will not be able to access the domain correctly.

Fortunately, it is easy to flush the DNS cache. To do so, just enter the IPCONFIG command followed by the /FLUSHDNS switch. If you are running Windows Vista, then this operation will require elevated privileges. You can get these privileges by right-clicking on the Command Prompt menu option and choosing Run As Administrator from the resulting shortcut menu.
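The caching behavior that makes a stale or poisoned entry so troublesome can be sketched in a few lines. This is a toy model of a TTL-based resolver cache, not the actual Windows DNS Client cache:

```python
import time

# Toy model of a TTL-based resolver cache (not the actual Windows DNS
# Client cache): once an answer is cached, later lookups return the
# cached address, right or wrong, until the entry expires or the cache
# is flushed (the equivalent of IPCONFIG /FLUSHDNS).

class DnsCache:
    def __init__(self):
        self._entries = {}  # name -> (address, expiry timestamp)

    def put(self, name, address, ttl_seconds):
        self._entries[name] = (address, time.time() + ttl_seconds)

    def get(self, name):
        entry = self._entries.get(name)
        if entry and entry[1] > time.time():
            return entry[0]  # served from cache without a new query
        return None

    def flush(self):
        self._entries.clear()

cache = DnsCache()
cache.put("www.example.com", "203.0.113.9", ttl_seconds=300)
print(cache.get("www.example.com"))  # 203.0.113.9
cache.flush()
print(cache.get("www.example.com"))  # None
```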

Once you flush the DNS cache, try running NSLOOKUP once again. If the host name is still incorrect, then there are a couple different possibilities. For example, the DNS server may have lost connectivity to a root-level server. Another possibility is that there is an incorrect entry in the HOSTS file or in the Windows registry.

26 Mar 2013

Windows Server 2012 - The Basics #2

Renaming the Server

Although Windows Server 2012 automatically assigns each server a random name, administrators often like to change the name to something more meaningful. This is especially true in virtual server environments where it can become quite confusing if the server's computer name (the name used by Windows to identify the server) doesn't match the virtual machine name (the name displayed within the Hyper-V Manager).

To rename a server, move the mouse pointer to the lower left portion of the screen to reveal the Start tile. Right click on this tile and select the System command from the shortcut menu. Upon doing so, Windows Server 2012 will display the System dialog box, which is nearly identical to the version used in Windows Server 2008 and 2008 R2.

Image
You can use the System dialog box sheet to change the computer name.

Now click on the Change Settings link and Windows will display the System Properties sheet. Make sure that the Computer Name tab is selected and then click the Change button. Enter the new computer name and click OK. You will have to reboot the server in order for the change to take effect.

Assigning an IP Address to the Server

The process of assigning an IP address to a Windows Server 2012 server is very similar to the method used in Windows Server 2008 and 2008 R2. Begin the process by moving the mouse to the lower left corner of the screen to reveal the Start tile. Right click on the Start tile and then choose the Control Panel option from the resulting menu.

When the Control Panel opens, click on the Network and Internet link, shown in Figure B. Next, click on Network and Sharing Center, followed by Change Adapter Settings. 

Image
Click on the Network and Internet link.

At this point, Windows should display a series of network adapters. Right click on the adapter to which you want to assign an IP address and choose the Properties command from the shortcut menu. Scroll through the list of networking components and select the Internet Protocol Version 4 (TCP/IPv4) component and click Properties. You will now be taken to a screen that allows you to enter an IP address for the adapter. After doing so, click OK.

One thing that you need to know about IP address assignments in Windows Server 2012 is that you must be careful to only assign IP addresses to adapters that are not being used for other purposes. For example, if an adapter is being used by Hyper-V then the only component that should be enabled for that adapter is the Hyper-V Extensible Virtual Switch. You cannot assign an IP address directly to the adapter without causing problems. Instead, Hyper-V creates a virtual network adapter on the physical server. This virtual network adapter corresponds to the physical network adapter, and that is where you should make the IP address assignment.

The same basic concept applies to Windows Server 2012 servers that are using a NIC team. When you create a NIC team, you are binding multiple network adapters together into a logical network adapter. The Network Adapters screen displays the physical network adapters alongside the NIC team, as shown in Figure C. The only component that should be enabled on teamed physical adapters is the Microsoft Network Adapter Multiplexor Protocol. IP address assignments must be made only to the teamed NIC, not to individual NICs within the team.

Image
The Network Connections screen shows physical network adapters and NIC teams.

Joining a Server to a Domain

The process of joining a Windows Server 2012 server to an Active Directory domain is very similar to the method that I previously demonstrated for renaming a server. Keep in mind however, that before you can join a server to a domain, the server must be able to communicate with the domain. Specifically this means that the server's IP address configuration must reference the domain's DNS server. Otherwise, Windows will be unable to contact a domain controller during the domain join.

To join the server to a domain, move your mouse pointer to the lower, left corner of the screen and then right click on the Start tile. Select the System command from the Start tile's menu. When the System dialog box appears, click the Change Settings link, which you can see in Figure A. The server should now display the System Properties sheet. Make sure that the Computer Name tab is selected and then click the Change button. When Windows Displays the Computer Name / Domain Changes dialog box shown in Figure D, enter your domain name and click OK. Windows will locate the domain and then prompt you for a set of administrative credentials. When the domain join completes you will be prompted to restart the server.

Image
Enter your domain name and click OK.

Disabling Internet Explorer Enhanced Security Configuration

In Windows Server 2012, Microsoft uses a mechanism called Internet Explorer Enhanced Security Configuration to lock down Internet Explorer, thereby making it more or less unusable. The good news is that you can disable Internet Explorer Enhanced Security Configuration.

Before I show you how to accomplish this, I need to point out that Internet Explorer Enhanced Security Configuration is put in place for your protection. The Internet is not a risk-free place, and it is possible to infect a computer with malware just by accidentally visiting a malicious Web site. Most security professionals agree that you should never use a Web browser directly from a server console.

While I certainly agree with the sentiment of these ideas, I find that I often need access to the Internet when I am setting up a new server. Often times I will need to download patches, drivers, etc. and Internet Explorer Enhanced Security Configuration gets in the way. What I usually do (and this is by no means a recommendation) is to disable Internet Explorer Enhanced Security Configuration, download anything that I need, and then re-enable Internet Explorer Enhanced Security Configuration.

To disable Internet Explorer Enhanced Security Configuration, open the Server Manager and then click on the Local Server tab. When you do, the console will display the local server properties. Click on the Unknown link next to IE Enhanced Security Configuration, as shown in Figure E.

Image
Click on the Unknown link next to IE Enhanced Security Configuration.

You will now see a dialog box that allows you to enable or disable this component. Internet Explorer Enhanced Security Configuration can be enabled or disabled separately for users and for administrators.

25 Mar 2013

Windows Server 2012 - The Basics #1

The Windows Server 2012 user interface tends to be confusing to those who have previously worked with Windows Server 2008 or 2008 R2.

In fact, many administrators initially find themselves having trouble performing even some of the most basic tasks because the interface is so different from what they are used to. That being the case, this article walks you through some of those basic tasks.

Rebooting the Server

The one thing that I personally had the toughest time figuring out when I first got started with Windows Server 2012 was rebooting the server. After all, the Start menu is gone, and so is the shut down option that has always existed on the Start menu.

To power down (or reboot) your server, move your mouse to the upper, right corner of the screen. When you do, Windows will display a series of icons along the right side of the screen. Click the Settings icon and you will be taken to the Settings page, which you can see in Figure A. As you can see in the figure, the bottom row of icons includes a Power button. You can use this icon to shut down or to reboot the server.

Image
Use the Power icon to shut down or reboot the server.

Accessing the Control Panel

Another task that some administrators have struggled with is that of accessing the Control Panel. There are actually several different ways to get to the Control Panel, but I will show you the two most common methods.

The first method is to use the same set of icons that I showed you in the previous step. Move your mouse to the upper, right corner of the screen and then click on Settings. When the Settings page appears, click the Control Panel link, which you can see in Figure A.

Another way to access the Control Panel is to go into Desktop mode and then move your mouse pointer to the lower left corner of the screen. When you do, the Start tile will appear. Right click on this tile and a menu will appear. This menu contains an option to access the Control Panel, as shown in Figure B.

Accessing the Administrative Tools

In Windows Server 2008 and 2008 R2, you could access the administrative tools by clicking the Start button, and then going to All Programs and clicking the Administrative Tools option. Needless to say, since the Start menu no longer exists, you have to access the administrative tools in a new way.

There are a couple of different ways to access the administrative tools in Windows Server 2012. One way involves using the Server Manager. As you can see in Figure C, the Server Manager's Tools menu contains all of the administrative tools that you are probably familiar with from Windows Server 2008.

Image
The administrative tools are accessible from the Server Manager's Tools menu.

Of course it's kind of a pain to have to go into the Server Manager every time that you need to access an administrative tool. It would be a lot easier if the tools were accessible from the Start screen. The good news is that it is easy to make that happen.

To do so, make sure that you are looking at the Windows Start screen. This technique won't work if you are in Desktop mode. Now, move your mouse to the upper right corner of the screen and then click on the Settings icon. When the Settings page appears, click on the Tiles link. As you can see in Figure D, there is a slide bar that you can use to control whether or not the Administrative Tools are shown on the Start screen. You can see the Administrative Tools icon in the lower left corner of the screen capture.

Image
You can use the slide bar to enable the Start screen to display the administrative tools.

Accessing Your Applications

Perhaps one of the most frustrating aspects of the new interface is that applications are no longer bound to a centralized Start menu. This might not be such a big deal if all of your applications happen to have tiles on the Start screen, but what happens if certain tiles are "missing"?

Some administrators have found that after upgrading from a previous version of Windows Server that their Start screen contains only a small subset of the items that previously resided on their server's Start screen. The good news is that these missing items are not lost. You just have to know where to look for them.

To access all of the tiles that the Start screen is hiding, right click on an empty area of the Start screen. When you do, a blue bar will appear at the bottom of the screen, as shown in Figure E. Click on the All Apps icon that appears on this bar. When you do, you will be taken to an Apps screen that's similar to the one shown in Figure F. As you can see in the figure, the apps are categorized in a manner similar to how they might have been on the Start menu.

Image
Right click on an empty area of the Start screen to reveal the blue bar and the All Apps icon.

Image
The Apps screen contains all of the missing tiles.

The Run Prompt and the Command Line

In previous versions of Windows Server, I used the Run prompt and the Command Line extensively. For example, if you needed to access a utility such as the Disk Management Console, the easiest way to get to it was to click on the Run prompt and enter the DISKMGMT.MSC command.

Similarly, I also spent a lot of time in a Command Prompt environment. Sure, PowerShell is the way of the future, but there are some commands that just don't work in a PowerShell environment. Most of the command line utilities will only work from a true command line environment. For example, the ESEUTIL tool that comes with Exchange Server 2010 is designed to be used from a Command Prompt and it doesn't work in PowerShell.

Fortunately, the Run prompt and the Command Prompt are both easily accessible. To reach these items, navigate into Desktop mode. Upon doing so, move your mouse pointer to the lower, left corner of the screen. When the Start tile appears, right click on it and you will see a menu listing options for Run, Command Prompt and Command Prompt (Admin).

Implementing DHCP Server Failover

My book focuses heavily on using PowerShell to manage Windows Server 2012, so this post shows you some of the things you can do as an admin using PowerShell. Note that the target audience of the book is intermediate-level Windows admins who have several years of work experience but who might still be beginners when it comes to using PowerShell, so I'm hoping that readers will find my book useful for learning how they can start using PowerShell to simplify and automate the administration of Windows servers in their environment.

This first excerpt is from Chapter 6 Network Administration and describes the new DHCP Server Failover capability included in the DHCP Server role in Windows Server 2012. I've also included one of the chapter's exercises, which shows how to implement DHCP Server Failover using PowerShell. Note that these book excerpts haven't finished going through the editorial review process yet, so they may change a bit in the published version.

Understanding DHCP failover

DHCP failover is a new approach to ensuring DHCP availability that is included in Windows Server 2012. With this approach, two DHCP servers can be configured to provide leases from the same pool of addresses. The two servers then replicate lease information between them, which enables one server to assume responsibility for providing leases to all clients on the subnet when the other server is unavailable. The result of implementing this approach is to ensure DHCP service availability at all times, which is a key requirement for enterprise networks.

The current implementation of DHCP failover in Windows Server 2012 has the following limitations:

  • It supports a maximum of two DHCP servers.
  • The failover relationship is limited to IPv4 scopes and subnets.

DHCP server failover can be implemented in two different configurations:

  • Load sharing mode: Leases are issued from both servers equally, which ensures availability and provides load balancing for your DHCP services (this is the default DHCP server failover configuration).
  • Hot standby mode: Leases are issued from the primary server until it fails, whereupon the lease data is automatically replicated to the secondary server, which assumes the load.

Load sharing mode

A typical scenario for implementing the load sharing approach is when you want to have two DHCP servers at the same physical site. If the site has only a single subnet, then all you need to do is enable DHCP failover in its default configuration. If there are multiple subnets, deploy both DHCP servers in the same subnet, configure your routers as DHCP relay agents (or deploy additional DHCP relay agents in subnets), and enable DHCP server failover in its default configuration.

Hot standby mode

When implementing the hot standby mode approach, you can configure a DHCP server so that it acts as the primary server for one subnet and secondary server for other subnets. One scenario where this approach might be implemented is for organizations that have a central hub site (typically the data center at the head office) connected via WAN links to multiple remote branch office sites. Figure 6-1 shows an example of an organization that has DHCP servers deployed at each branch office and at the head office. Branch office servers are configured to lease addresses to clients at their branch offices, while the central server leases addresses to clients at the head office. Each branch office server has a failover relationship with the central server, with the branch office assuming the role as primary and the central server as secondary. That way, if a DHCP server fails at a branch office, the central server can take up the slack for the remote site. For example, the DHCP server at Branch Office A is the primary server for the scope 10.10.0.0/16 while the DHCP server at the Head Office is the secondary for that scope.


Implementing DHCP failover in hot standby mode in a hub and spoke site scenario.

Exercise 1: Implementing DHCP failover using Windows PowerShell

In this exercise you will ensure DHCP availability for clients in the corp.contoso.com domain by using Windows PowerShell to install the DHCP Server role on both servers, create a scope on SERVER1, and configure and verify DHCP failover.

  1. Log on to SERVER1, open Server Manager, select the All Servers page and make sure that both servers are displayed in the Servers tile. If SERVER2 is not displayed, add it to the server pool.
  2. Open a Windows PowerShell prompt and run the following command to install the DHCP Server role on both servers:
    Invoke-Command -ComputerName SERVER1,SERVER2 -ScriptBlock {Install-WindowsFeature `
    -Name DHCP -IncludeManagementTools -Restart}

    Note that although you specified the -Restart parameter, the servers did not restart after role installation because a restart was determined as being unnecessary.
  3. Authorize both DHCP servers in Active Directory by executing the following commands:
    Add-DhcpServerInDC -DnsName SERVER1
    Add-DhcpServerInDC -DnsName SERVER2
  4. Use the Get-DhcpServerInDC cmdlet to verify that the servers have been authorized in Active Directory.
  5. Create a new scope on SERVER1 and activate the scope by running the following command:
    Add-DhcpServerv4Scope -ComputerName SERVER1 -StartRange 10.10.0.50 `
    -EndRange 10.10.0.100 -Name "corp clients" -SubnetMask 255.255.0.0 -State Active
  6. Use the Get-DhcpServerv4Scope cmdlet to verify that the new scope has been created on SERVER1 and is active.
  7. Use Get-DhcpServerv4Scope -ComputerName SERVER2 to verify that SERVER2 currently has no scopes on it.
  8. Run the following command to create a DHCP failover relationship in load balance mode between the two servers, with SERVER2 as the partner server and failover implemented for the newly created scope:
    Add-DhcpServerv4Failover -Name "SERVER1 to SERVER2" -ScopeId 10.10.0.0 `
    -PartnerServer SERVER2 -ComputerName SERVER1 -LoadBalancePercent 50 `
    -AutoStateTransition $true
  9. Use the Get-DhcpServerv4Failover cmdlet to view the properties of the new failover relationship.
  10. Use Get-DhcpServerv4Scope -ComputerName SERVER2 to verify that the scope has been replicated from SERVER1 to SERVER2.
  11. Turn on CLIENT1 and log on to the client computer.
  12. Open a command prompt and use the ipconfig command to view the current IP address of the computer. If the client computer is currently using an address in the APIPA range (169.254.x.y) then use ipconfig /renew to acquire an address from a DHCP server on your network. Verify that the address acquired comes from the scope you created earlier.
  13. Verify that the client computer's address is recorded as leased in the DHCP database of SERVER1 by executing the following command:
    Get-DhcpServerv4Lease -ComputerName SERVER1 -ScopeId 10.10.0.0
  14. Verify that the lease has also been replicated to the DHCP database of SERVER2 by executing the following command:
    Get-DhcpServerv4Lease -ComputerName SERVER2 -ScopeId 10.10.0.0
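The address check from step 12 is easy to script as well. The sketch below is plain Python, not a DHCP cmdlet; it classifies an address as APIPA, inside the exercise's 10.10.0.50 to 10.10.0.100 scope, or neither:

```python
import ipaddress

# Scripted version of the address check in step 12. Plain Python, not a
# DHCP cmdlet; the scope bounds match the exercise (10.10.0.50-100).

APIPA = ipaddress.ip_network("169.254.0.0/16")
SCOPE_START = ipaddress.ip_address("10.10.0.50")
SCOPE_END = ipaddress.ip_address("10.10.0.100")

def classify(address):
    ip = ipaddress.ip_address(address)
    if ip in APIPA:
        return "apipa"      # no DHCP server answered the client
    if SCOPE_START <= ip <= SCOPE_END:
        return "in-scope"   # leased from the failover-protected scope
    return "other"

print(classify("169.254.12.7"))  # apipa
print(classify("10.10.0.77"))    # in-scope
```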

Deploying Certificate Services in Windows Server 2012 #1

A PKI is built on a foundation of certification authorities, which can be arranged in a hierarchical fashion. Deployment of a Windows Server 2012-based PKI involves installing the AD CS server role on one or more servers that will act as the CA(s). Digital certificates can be used for the purposes mentioned above, and more – but the use of particular certificate types is a topic better suited for our Windowsecurity site. This article series will focus on the server and networking aspect and how to go about deploying certification authorities for your organization.

What's new for Windows Server 2012

Microsoft has steadily improved Certificate Services over the years, and many of those enhancements have been made in an effort to make things easier for the administrators who need to install, configure and manage the PKI. They've continued along those same lines with the changes in Windows Server 2012, giving you more choices than ever before. (Source: Microsoft TechNet Library, What's New in AD CS?)


Expanded flexibility

One of the nicest changes is that you no longer have to struggle with figuring out which Certificate Services role services could run on which editions of the operating system. If you ever tried to deploy Certificate Services in Windows Server 2008 R2, you know that this could be a planning and deployment nightmare for small and medium sized businesses. Whereas all the roles and features were available on the enterprise and datacenter editions, some of the Certificate Services components wouldn't run on the standard edition (and none of them were available on the Web edition). In addition, although the CA role service would run on the standard edition of the OS, certain features (for example, role separation and certificate manager restrictions) were not available on those standard edition CAs, only on enterprise and datacenter CAs. The complete tables showing which editions supported which role services and features can be found in the Active Directory Certificate Services Overview for Windows Server 2008 R2 in the TechNet library.

Administrators will be relieved to find out that all that has changed in Windows Server 2012. Now it's simple: all AD CS role services will run on all editions of Windows Server 2012. Windows Server 2012 comes in four different editions: Datacenter, Standard, Essentials and Foundation (the last two have user account limits of 25 and 15 users, respectively, whereas the first two are licensed per-processor with Client Access Licenses required). AD CS includes the same six role services as in previous versions: Certification Authority, CA Web Enrollment, Online Responder, Network Device Enrollment Service, Certificate Enrollment Policy Web Service, and Certificate Enrollment Web Service. Any of these can now be installed on any Windows Server 2012 edition.

And that includes the Server Core installation of Windows Server 2012, too. In the past, you couldn't run all of the AD CS role services on a server core installation because some required the graphical interface that is not present on a machine running Server Core. Now, you can use Windows PowerShell cmdlets to deploy and manage the AD CS role services. This is great news for those who prefer to run their servers as Server Core installations for better security, more stability and better performance.
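As a sketch of what that PowerShell-based deployment looks like, the commands below install and configure a CA; the CA type and common name ("Contoso-Root-CA") are hypothetical example values, so adjust them to your environment:

```powershell
# Install the Certification Authority role service (works on Server Core)
Install-WindowsFeature Adcs-Cert-Authority -IncludeManagementTools

# Configure this server as an enterprise root CA
# ("Contoso-Root-CA" is a placeholder common name)
Install-AdcsCertificationAuthority -CAType EnterpriseRootCA `
    -CACommonName "Contoso-Root-CA" -Force
```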

More options for managing Certificate Services

If you don't install in Server Core mode, the new Server Manager interface is front and center in Windows Server 2012; it's readily available on the taskbar and it's one of the two primary means for performing administrative tasks (the other, of course, is PowerShell). The Server Manager has been completely redesigned to provide a more streamlined look that's in keeping with the Windows 8 tile-based style and many new functionalities have been integrated into it. For those who prefer to use the graphical tools, Server Manager is the centralized location from which you can configure and manage Active Directory Certificate Services. Of course, Server Manager allows you to manage multiple remote AD CS servers in addition to the local server.

You can benefit from Server Manager even if you're going to run your servers as Server Core machines. That's because a new installation option in Windows Server 2012 lets you install your server with the graphical interface, which you can use for all the initial setup and configuration of your server roles and role services, and then switch over to Server Core afterward.

Even better, if you run into situations where you aren't entirely comfortable using the command line (or you need to perform troubleshooting tasks that aren't possible from the command line), you can switch back to the full graphical interface and then back to Server Core again when you're finished. And it's easy. All it takes is a couple of simple PowerShell cmdlets (Import-Module ServerManager followed by Uninstall-WindowsFeature Server-Gui-Shell -Restart) and a reboot of the server. You can see screenshots of the steps involved in the process in this TechNet blog post.
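A quick sketch of the round trip between the graphical interface and Server Core (each command reboots the server because of the -Restart parameter):

```powershell
# Remove the graphical shell (also remove Server-Gui-Mgmt-Infra to go to full Server Core)
Import-Module ServerManager
Uninstall-WindowsFeature Server-Gui-Shell -Restart

# Later, restore the full graphical interface
Install-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart
```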

Of course, a third option is to install the Minimal Server Interface, which is Server Core but with GUI management.

Version 4 certificate templates

Certificate templates are used in the enterprise environment to define format and content of certificates, the enrollment process (including which users/computers are allowed to enroll for which certificate types), etc. Each template is configured with specific permissions and the templates are available to all the CAs in a forest. The templates make it easier for users to get certificates for the purposes that fit their needs. The user doesn't have to "reinvent the wheel" by submitting a complicated certificate request for a type of certificate that is frequently requested by members of the organization.

The first templates were introduced with Windows 2000 Server but they were very limited because they couldn't be modified. Version 2 templates were introduced with Windows Server 2003 and they allowed for some customization. Version 3 templates were introduced with Windows Server 2008, to support features such as Cryptography Next Generation (CNG).

Windows Server 2012 brings us version 4 certificate templates. These even more sophisticated templates support both cryptographic service providers and key service providers, they can be set to require that the same key be used for renewal, and they specify the minimum CA and client OS that can use the template – they are only available for use with Windows Server 2012 and Windows 8 client computers.

Note:
Some folks are confused by the fact that there is a Compatibility tab that lets you specify minimum down-level operating systems. This does not mean those OSes will be able to use the version 4 templates; it means they can still enroll for certificates through the CA using earlier version templates.

Better support for globalized organizations

More and more organizations that use Windows Server Certificate Services are located in countries around the world. The languages in some of those countries include characters that can't be represented in ASCII. This can cause problems because the Domain Name System (DNS) was originally created in the 1980s in the United States by computer scientist Paul V. Mockapetris, who likely never envisioned how communications relying on the system would spread around the globe. Thus he based DNS on the ASCII character set, which limited the characters that could be used in domain names.

The concept of Internationalized Domain Names (IDNs) came along in the late 1990s to allow domain names written in non-ASCII scripts to be translated into an ASCII representation.

The problem with this solution is that it only works with applications that are designed to recognize these IDNs. Past iterations of AD CS did not work with IDNs. Windows Server 2012 AD CS brings some (but not full) support for IDNs. What that means is that AD CS can utilize IDNs in some specific scenarios. This is a step in the right direction. The scenarios that are supported include the following:

  • Computers that use IDNs can be enrolled for certificates.
  • You can generate and submit a request for a certificate with an IDN (you need to use the command line utility certreq.exe).
  • Certificate Revocation Lists (CRLs) and Online Certificate Status Protocol (OCSP) responses can be published to servers that use IDNs.
  • IDNs are now supported by the Certificate user interface.
  • IDNs can be used in the Certificate Properties in the Certificate management console.
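For the certreq.exe scenario above, the request is generated from an INF policy file. Here's a minimal sketch; the subject name and file names are hypothetical examples, not values from a real deployment:

```ini
; request.inf -- minimal policy file for certreq.exe
[NewRequest]
Subject = "CN=bücher.example"  ; an IDN hostname can be used here
KeyLength = 2048
MachineKeySet = TRUE
RequestType = PKCS10
```

The request would then be created and submitted with something like `certreq -new request.inf request.req` followed by `certreq -submit request.req`.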

These changes will be useful for those organizations that must use IDNs for compatibility with DNS.

NIC Teaming in Windows Server 2012

Windows Server 2012 has a number of great new features. One of the most welcome new features is the ability to create NIC teams. A NIC team is a collection of network interfaces (NICs) that work together as one. There are many benefits to building a NIC team. The main benefit is bandwidth aggregation. NIC teaming allows the bandwidth of every NIC in the team to be combined, thereby delivering more bandwidth than any single NIC in the team would be able to handle by itself.

Another noteworthy benefit to NIC teaming is redundancy. NIC teaming protects the server against NIC failures. If a NIC within a NIC team fails then the team is able to continue functioning in spite of the failure, but at a reduced capacity.

Technically speaking, NIC teaming isn't an entirely new feature. Previous versions of Windows Server supported NIC teaming, but only with some very significant restrictions. The main restriction was that the NIC team had to be implemented at the hardware level, not the software level. This meant that you had to purchase server hardware and NICs that natively supported NIC teaming. Furthermore, the server and the NICs had to be provided by the same vendor. Needless to say, this approach to NIC teaming was expensive.

These limitations are gone in Windows Server 2012. Now NIC teaming can be implemented at the software level, so there is no need to purchase specialized server hardware. Furthermore, the NIC team does not need to be vendor consistent. You can create a NIC team consisting of NICs from multiple vendors.

Another benefit is that a NIC team can be huge. You can combine up to 32 physical NICs into a single team. Imagine for a moment that you built a team of 32 ten-gigabit NICs. That would be the functional equivalent of a 320 gigabit connection (minus overhead).

NIC Team Uses

Right about now you might be wondering under what circumstances you can use a NIC team. Generally speaking, a NIC team can be used in any situation that a physical NIC would be used in. NIC teams can handle normal server level traffic, but they can also be used by virtual machines. Having said that, there are a few exceptions. NIC teaming does not work with the following:

  • SR-IOV
  • Remote Direct Memory Access (RDMA)
  • TCP Chimney

Microsoft doesn't really explain why TCP Chimney isn't supported (at least not that I have found), but they do indicate that the reason SR-IOV and RDMA aren't supported is that these technologies send traffic directly to the network adapter and completely bypass the networking stack, which means they are unable to detect the NIC team.

Building a NIC Team

Creating a NIC team is an easy process. To do so, open the Server Manager and click on Local Server. Next, locate the NIC Teaming option in the Properties section and then check to see if NIC Teaming is enabled or disabled, as shown in Figure A.


Figure A: Check to see whether NIC Teaming is enabled or disabled.

Assuming that NIC Teaming is disabled, click on the Disabled link and the NIC Teaming window will open, as shown in Figure B.


Figure B: NIC teams are created through the NIC Teaming console.

Now, go to the console's Teams section and click on the Task drop down. Select the New Team option. When you do, you will see the NIC Teaming dialog box, shown in Figure C.


Figure C: Use the NIC Teaming dialog box to create the NIC team.

As you can see in the figure, the dialog box is pretty simple. You can create a NIC team by entering a name for the team and then picking the network adapters that are included in the team. In the figure above I stuck with the default names for the network adapters that were installed in my server, but if you do rename your network adapters then the custom names that you have assigned will show up in this dialog box.

Before you create the NIC team, it is a good idea to define some additional properties. While this certainly isn't a requirement, doing so gives you more control over the team's functionality. If you look at the figure above, you will notice that there is an Additional Properties drop down near the bottom of the figure. If you click this drop down, you will be presented with some additional options, as shown in Figure D.


Figure D: There are some additional properties that you can configure.

Teaming Mode

The first option on the list is the teaming mode. You can choose from three different teaming modes. The default option is Switch Independent. As the name implies, switch independent mode lets you build a NIC team without having to worry about your network switches. The NICs that make up the team can even be connected to multiple network switches.

The next option is called Static Teaming. Static teaming is a switch dependent mode. This mode requires you to configure both the computer and the network switch so as to identify the links that make up the team.

The third teaming mode is also switch dependent. It is called LACP, and is based on the IEEE Link Aggregation Control Protocol. The advantage of this type of NIC teaming is that you can dynamically reconfigure the NIC team by adding or removing NICs as your needs dictate.

Load Balancing Mode

The next option on the list is load balancing mode. Load balancing mode lets you choose between two options – Address Hash or Hyper-V port. The Address Hash option is usually the best choice because it allows traffic to be load balanced across all of the NICs in the team.

The Hyper-V Port option balances traffic on a per virtual machine basis. This type of load balancing assigns each virtual machine's traffic to a specific NIC. The problem with this approach is that virtual machines are unable to take advantage of distributing traffic across multiple NICs.

Standby Adapter

The last option is Standby Adapter. As the name suggests, the Standby Adapter option lets you designate a NIC as a standby spare. That way, if a NIC in the team were to fail then a spare is on hand to take over. It is worth noting however, that you can only designate one NIC as a standby adapter. Windows does not support having multiple spare adapters.
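The GUI steps above can also be scripted with the NetLbfo cmdlets. This is a sketch assuming two adapters named NIC1 and NIC2; the adapter names and team name are placeholders, and TransportPorts is the algorithm that corresponds to the Address Hash option:

```powershell
# Create a switch-independent team with address-hash load balancing
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

# Designate NIC2 as the standby adapter (only one standby is supported)
Set-NetLbfoTeamMember -Name "NIC2" -AdministrativeMode Standby
```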

19 Mar 2013

How to Enable Data Deduplication in Windows 8


I'm running out of disk space in my test environment, and I am already using differencing disks to reduce the amount of space used. That got me thinking about the deduplication feature as a way to further "optimize" my disk space utilization.

But hold on, that feature is only available on the Windows Server 2012 operating system. Thanks to a member of the "My Digital Life" forum, however, I was able to get it to work on my Windows 8 client test computer. Basically, what you need to do is copy the files needed for the data dedup feature from Windows Server 2012 to Windows 8. I've uploaded a copy to my SkyDrive, so you can alternatively get it from there.

Once you've got it down to your local drive, open up PowerShell and change to the directory where you copied the downloaded files. Go ahead and run the two commands below. Yes, a reboot might be needed.

dism /online /add-package ^
  /packagepath:Microsoft-Windows-VdsInterop-Package~31bf3856ad364e35~amd64~~6.2.9200.16384.cab ^
  /packagepath:Microsoft-Windows-VdsInterop-Package~31bf3856ad364e35~amd64~en-US~6.2.9200.16384.cab ^
  /packagepath:Microsoft-Windows-FileServer-Package~31bf3856ad364e35~amd64~~6.2.9200.16384.cab ^
  /packagepath:Microsoft-Windows-FileServer-Package~31bf3856ad364e35~amd64~en-US~6.2.9200.16384.cab ^
  /packagepath:Microsoft-Windows-Dedup-Package~31bf3856ad364e35~amd64~~6.2.9200.16384.cab ^
  /packagepath:Microsoft-Windows-Dedup-Package~31bf3856ad364e35~amd64~en-US~6.2.9200.16384.cab

dism /online /enable-feature /featurename:Dedup-Core /all

If all goes well, you'll see the Data Deduplication feature in Control Panel, which wasn't available before.

image

Now, because you're on Windows 8 and not on Windows Server 2012, you can't configure it using Server Manager. So your next best friend here is PowerShell. No biggy! Here's how you can configure Data Deduplication using PowerShell.

These are the few commands I used to get it working. This is to enable the feature on a particular volume.

PS C:\> Enable-DedupVolume D:

This is to start the optimization manually.

PS C:\> Start-DedupJob -Volume D: -Type Optimization

To look at the progress of the dedup job.

PS C:\> Get-DedupJob

image

By default, jobs have a pre-defined schedule. Use the Get-DedupSchedule cmdlet to look at the schedule.

image

Here's what I had before dedup.

image

And here's what I'm getting 15 minutes into the process. Bear in mind it is still only mid-way through. Not bad, eh? Hope you enjoy it!

image
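If you'd rather check the savings from the command line than eyeball the drive properties, the Get-DedupStatus cmdlet reports per-volume results (a sketch; run it after the optimization job has made some progress):

```powershell
# Show deduplication savings for the D: volume
Get-DedupStatus -Volume D: |
    Format-List Volume, FreeSpace, SavedSpace, OptimizedFilesCount
```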

Configuring iSCSI Storage #4

SRV-A is our application server that consumes iSCSI storage. The iSCSI initiator has been enabled on this server, and the initiator IQN is iqn.1991-05.com.microsoft:SRV-A.adatum.com.

SRV-B is our storage server that provides block-based iSCSI storage to the application server. SRV-B has direct-attached storage (DAS) in the form of four SATA drives attached to a hardware RAID adapter that uses a SAS bus connection. The RAID adapter has been configured to expose the drives as individual physical disks, that is, no striping, mirroring, or parity has been configured. One of these drives hosts the operating system, while the other three drives are our data drives.

So far we have performed the following configuration tasks on SRV-B:

  • The iSCSI Target Server role service has been installed on the server.
  • The three data drives have been brought online and formatted as NTFS volumes with drive letters X:, Y: and Z:.
  • Two iSCSI virtual disks have been created on X: drive. These virtual disks are backed by the VHD files vdisk1.vhd and vdisk2.vhd in the iSCSIVirtualDisks folder on this drive.
  • Two iSCSI targets have been created on the target server. Their target IQNs are iqn.1991-05.com.microsoft:SRV-B-target1-target and iqn.1991-05.com.microsoft:SRV-B-target2-target and the targets are assigned to virtual disks vdisk1.vhd and vdisk2.vhd respectively.

Image
Where we are at in our walkthrough.

We are now going to perform the following tasks:

  1. Configure the iSCSI initiator on SRV-A by discovering available targets in the environment for the initiator to connect to, and establish a connection between the initiator and one of the available targets.
  2. Provision a new iSCSI volume for SRV-A and use it just like you would use a local volume on the server.

We will perform all of the above tasks on our application server SRV-A.

Configuring the iSCSI initiator

We'll begin by opening the iSCSI Initiator Properties dialog by selecting iSCSI Initiator from the Tools menu of Server Manager:

Image
Step 1 of configuring the iSCSI initiator.

The screenshot above shows the Targets tab, and at this point no iSCSI targets are configured for the initiator. To discover targets in our environment that the initiator can connect to, switch to the Discovery tab:

Image
Step 2 of configuring the iSCSI initiator.

To find iSCSI targets we can connect to from our application server, we first need to find iSCSI target servers which are here called target portals. There are two ways we can do this. First, if we know which servers are portals in our environment, we can click Discover Portal and manually configure the IP address or DNS name of the portal as shown here:

Image
Step 3 of configuring the iSCSI initiator.

Alternatively, if we have an iSNS server in our environment, we can click Add Server in the previous screenshot to specify the IP address or DNS name of the iSNS server. Internet Storage Name Service (iSNS) servers act like DNS servers for an iSCSI infrastructure. Initiators can query them to automatically discover all iSCSI targets present in your infrastructure, making manual configuration of portals unnecessary for your initiators. In a future article we may examine how to implement iSNS in a Windows Server 2012-based iSCSI environment, but for this walkthrough we'll simply manually specify SRV-B as the target portal for the initiator on SRV-A. Once we've done this, the Discovery tab now looks like this:
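The manual portal configuration step can also be done from PowerShell (a sketch assuming the name SRV-B resolves in DNS):

```powershell
# Register SRV-B as a target portal for this initiator
New-IscsiTargetPortal -TargetPortalAddress "SRV-B"
```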

Image
Step 4 of configuring the iSCSI initiator.

Switching to the Targets tab shows that two targets have been found on the target server. Both of these targets have Inactive as their status because a connection with them has not yet been established by the initiator:

Image
Step 5 of configuring the iSCSI initiator.

Let's now connect to the target named "target1". To do this, we select the first target in the list and click Connect to open the Connect To Target dialog:

Image
Step 6 of configuring the iSCSI initiator.

Note that by default after we connect to the target it will be added to the list of targets on the Favorite Targets tab of the iSCSI Initiator Properties dialog. If a target is listed on the Favorite Targets tab, the initiator will attempt to automatically restore connection to that target whenever the initiator computer is restarted.

Once a connection has been established between the initiator on SRV-A and target1 on SRV-B, the status of target1 on the Targets tab changes from Inactive to Connected:

Image
Step 7 of configuring the iSCSI initiator.

There are also other configuration tasks you can perform on the iSCSI initiator if these are needed. For example, you can use the RADIUS tab to configure Remote Authentication Dial-In User Service (RADIUS) authentication by specifying a RADIUS server in your environment. Unlike CHAP authentication which is peer-based, RADIUS authentication happens between a RADIUS server and a RADIUS client. You can also use the Configuration tab to do things like change the initiator name, configure a CHAP secret, configure IPsec tunnel mode addresses, and generate a report of all connected targets and devices.

You can also use Windows PowerShell to configure an initiator and establish a connection to a target on a target server. For example, let's use Windows PowerShell to connect the initiator on SRV-A to target2 on SRV-B. First, we'll use the Get-IscsiTarget cmdlet to view a list of available targets on SRV-B like this:

PS C:\> Get-IscsiTarget | fl

IsConnected    : True
NodeAddress    : iqn.1991-05.com.microsoft:srv-b-target1-target
PSComputerName :

IsConnected    : False
NodeAddress    : iqn.1991-05.com.microsoft:srv-b-target2-target
PSComputerName :

Note that if the initiator computer has been restarted, the Get-IscsiTarget cmdlet may not return any results unless you refresh the list of targets discovered by the initiator. You can do this either by clicking the Refresh button on the Targets tab of the iSCSI Initiator Properties dialog or by running the Update-IscsiTarget cmdlet on the initiator computer.

The output of the Get-IscsiTarget command shows that the initiator is not yet connected to target2, so let's now use the Connect-IscsiTarget cmdlet to establish this connection:

PS C:\> Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.microsoft:SRV-B-target2-target"

AuthenticationType      : NONE
InitiatorInstanceName   : ROOT\ISCSIPRT\0000_0
InitiatorNodeAddress    : iqn.1991-05.com.microsoft:srv-a.adatum.com
InitiatorPortalAddress  : 0.0.0.0
InitiatorSideIdentifier : 400001370000
IsConnected             : True
IsDataDigest            : False
IsDiscovered            : False
IsHeaderDigest          : False
IsPersistent            : False
NumberOfConnections     : 1
SessionIdentifier       : fffffa8013fd8430-4000013700000003
TargetNodeAddress       : iqn.1991-05.com.microsoft:srv-b-target2-target
TargetSideIdentifier    : 0200
PSComputerName          :

Note that the value of the IsConnected property in the above command output is True, which means a connection was successfully established between the initiator and target2. We can view all established connections and their properties by using the Get-IscsiConnection cmdlet like this:

PS C:\> Get-IscsiConnection

ConnectionIdentifier : fffffa8013fd8430-0
InitiatorAddress     : 0.0.0.0
InitiatorPortNumber  : 192
TargetAddress        : 172.16.11.220
TargetPortNumber     : 3260
PSComputerName       :

ConnectionIdentifier : fffffa8013fd8430-2
InitiatorAddress     : 0.0.0.0
InitiatorPortNumber  : 3325
TargetAddress        : 172.16.11.220
TargetPortNumber     : 3260
PSComputerName       :

One thing to note at this point is that the output of the previous Connect-IscsiTarget command shows that the IsPersistent property of target2 is False. What this means is that the connection won't persist across a restart of the initiator computer. It also means that target2 is not included in the list on the Favorite Targets tab of the iSCSI Initiator Properties dialog. To make the connection to target2 persist across reboots (and to add it to the list of favorite targets for the initiator), we can use the Register-IscsiSession cmdlet together with the value of the SessionIdentifier property from the output of the Connect-IscsiTarget command above:

PS C:\> Register-IscsiSession -SessionIdentifier "fffffa8013fd8430-4000013700000003"

If you now click Refresh on the Favorite Targets tab of the iSCSI Initiator Properties dialog, the connection to target2 should now be displayed.

Provisioning new iSCSI volumes

We're now ready to create a new iSCSI volume on our application server. To do this, we'll start by opening Server Manager on SRV-A (you can also do this from SRV-B if SRV-A has been added to the server pool on SRV-B) and selecting the Disks page on the File And Storage Services canvas. Note that SRV-A, which in my test lab actually has only one direct-attached disk (the SAS disk shown below), now appears to have three direct-attached disks. The second and third disks, however, are not directly attached to SRV-A but are actually virtual disks on our iSCSI target server SRV-B. Because the iSCSI initiator on SRV-A is connected to the two iSCSI targets we created on SRV-B, and because these targets are assigned to two different iSCSI virtual disks on SRV-B, SRV-A interprets these two virtual disks as if they were directly attached instead of residing on another server on the network. You can see this in the screenshot by the fact that the Bus Type property of disks 1 and 2 is iSCSI:

Image
Step 1 of provisioning an iSCSI volume.

Both of these iSCSI virtual disks are currently marked as Offline, so let's right-click on disk 1 and select Bring Online from the context menu. Once disk 1 is online, right-click on it a second time and select New Volume:

Image
Step 2 of provisioning an iSCSI volume.

In the New Volume Wizard, select the local server (SRV-A) as the server to which we want to provision the new iSCSI volume:

Image
Step 3 of provisioning an iSCSI volume.

Next we'll specify the size of the iSCSI volume we're creating on SRV-A:

Image
Step 4 of provisioning an iSCSI volume.

We'll assign E: as the drive letter for the new iSCSI volume on SRV-A:

Image
Step 5 of provisioning an iSCSI volume.

We'll format the new volume using NTFS and give it a label of ISCSI_VOL_1 like this:

Image
Step 6 of provisioning an iSCSI volume.

Once the wizard is finished, the new iSCSI volume is displayed on the Disks tile as being present on SRV-A:

Image
The new iSCSI volume has been created.

Selecting the Volumes tile displays additional information about the new volume:

Image
 Another view of the iSCSI volume.

If we now open Windows Explorer on SRV-A, it appears as if SRV-A has two local hard drives C: and E: like this:

Image
The iSCSI volume behaves like a local disk.

The new iSCSI volume on SRV-A behaves just like a directly-attached disk even though it is actually located elsewhere on the network. If we copy a file to it, the bits for the file will be transmitted over the network using the iSCSI protocol. And if we read a file from it, the bits will again be transmitted over the network using the iSCSI protocol. But from the perspective of applications running on SRV-A, volume E: is just another local volume on the server.
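The wizard steps above can also be scripted with the Storage module cmdlets. This is a sketch that assumes the iSCSI virtual disk shows up as disk number 1 on SRV-A; check Get-Disk first to confirm the number in your own environment:

```powershell
# Bring the iSCSI disk online, initialize it, and create an NTFS volume
Set-Disk -Number 1 -IsOffline $false
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter E
Format-Volume -DriveLetter E -FileSystem NTFS `
    -NewFileSystemLabel "ISCSI_VOL_1" -Confirm:$false
```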