25 Jun 2013

Deduplication and data lifecycle management

While there is no denying that deduplication can be a tremendously helpful tool for backup administrators, there have been certain areas in which deduplication has historically proven to be inefficient.

For example, many deduplication solutions have ignored the fact that some organizations perform backups that are more sophisticated than a basic backup, which merely creates a redundant copy of production data. Larger organizations may use multiple tiers of backup storage, in which data is retained for varying lengths of time. Additionally, business continuity requirements may mean that some of the backed up or archived data may reside off-premises, either in an alternate data center or in the cloud.

To see how the deduplication process works in such a situation, imagine that an organization creates disk-based backups to an on-premises backup server which stores 30 days' worth of backups. Then, the backup server's contents are replicated to an off-site backup appliance and workflows are in place to move aging backup data onto less expensive storage. Let's say 120 days' worth of backups are stored in an off-site data center and that a full two years' worth of backups are stored in the cloud.

Data is deduplicated as it is written to the on-premises backup server. The replica backup server is a mirror of the primary backup server, so deduplicated data can be sent to the replica without the need for rehydration.

However, when data is sent to the off-site, long-term data repositories, things start to get messy. The off-site storage is not a mirror of the backup server, so the backup server's contents cannot simply be replicated. Instead, the data must be rehydrated before it can be sent to the off-site storage. And, because the data is being sent off site, it will likely need to be deduplicated on the source side before crossing the wire. In essence, deduplicated data is being rehydrated only to be deduplicated once again.

Because of these inefficiencies, backup and deduplication vendors have developed various solutions to make the process of moving data to a long-term repository more efficient. CommVault, for example, provides a technology known as Deduplication Accelerated Streaming Hash (DASH).

DASH makes it possible to move deduplicated data across storage tiers without the need for rehydration. DASH provides the means for creating deduplication-aware secondary copies of your data. The copy process can be either disk-read optimized or network-read optimized.

For a disk-read optimized copy, signatures are read from the source disk's metadata and then sent to the destination media agent, which compares the signatures to the signatures on the destination deduplication database.

If the signature already exists in the destination database, then the data also exists in the destination, so there is no need to transmit the data again. Instead, only the signature references are transmitted. If, on the other hand, no signature is found, then the data is presumed to be new and so the new data is transmitted to the destination, and the destination database is updated.
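
The decision logic is easy to sketch. The following PowerShell fragment is purely illustrative -- it is not CommVault's API, and the block source and destination index are stand-ins -- but it shows the hash-lookup test at the heart of any deduplication-aware copy:

    # Illustrative only: $sourceBlocks stands in for deduplicated blocks read from
    # the source; $destIndex stands in for the destination deduplication database.
    $sourceBlocks = @([Text.Encoding]::UTF8.GetBytes('block A'),
                      [Text.Encoding]::UTF8.GetBytes('block A'),
                      [Text.Encoding]::UTF8.GetBytes('block B'))
    $destIndex = @{}
    $sha = [Security.Cryptography.SHA256]::Create()

    foreach ($block in $sourceBlocks) {
        $sig = [BitConverter]::ToString($sha.ComputeHash($block))
        if ($destIndex.ContainsKey($sig)) {
            "Signature $sig exists at destination: send a reference only"
        } else {
            "Signature $sig is new: send the block and update the database"
            $destIndex[$sig] = $true   # record the block at the destination
        }
    }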

FalconStor takes a somewhat similar approach with its File Interface Deduplication System (FDS) solution. The product makes use of a global deduplication repository, which makes WAN-optimized replication to remote datacenters possible.

Network-read optimized copy operations are ideal for low-bandwidth environments, but they are more I/O intensive than disk-read optimized copy operations. The two processes work similarly, except that where a disk-read optimized copy reads signatures from the primary disk's metadata, a network-read optimized copy unravels the data on the primary disk and generates signatures to send to the destination media agent for comparison.

Conclusion

Historically, deduplication has not worked very well in environments in which workflows move data among storage tiers according to retention requirements. The process of moving data typically requires the data to be rehydrated prior to the move operation. New technologies such as CommVault's DASH make it possible to copy deduplicated data to a secondary storage tier without first rehydrating the data.

Understanding data deduplication for primary storage

What you'll learn: This tip discusses data deduplication for primary storage. It examines the data selection criteria you should use when deciding to apply deduplication technology to your primary storage, whether to use inline or post-process deduplication, and the impact dedupe can have on your data storage environment.

Data deduplication has been a hot topic and a fairly common practice in disk-based backups and archives. Users' initial wariness seems to have given way to adoption, and a deeper focus on the technology has opened up more ways to leverage the benefits of deduplication. The next frontier for deduplication is in the realm of primary storage.

What is primary storage?

Primary storage consists of disk drives (or flash drives) on a centralized storage-area network (SAN) or network-attached storage (NAS) array, where the data used to conduct business on a daily basis is stored. This includes structured data such as databases, as well as unstructured data such as email data, file server data and most file-type application data. It's important to understand this difference because not all data is suitable for primary storage deduplication.

Types of data deduplication

There are two main types of data deduplication: inline and post-process. Inline deduplication identifies duplicate blocks as they're written to disk. Post-process deduplication deduplicates data after it has been written to disk. Inline deduplication is considered more efficient in terms of overall storage requirements because non-unique or duplicate blocks are eliminated before they're written to disk. Because duplicate blocks are eliminated, you don't need to allocate enough storage to write the entire data set for later deduplication. However, inline deduplication requires more processing power because it happens "on the fly"; this can potentially affect storage performance, which is a very important consideration when implementing deduplication on primary storage. On the other hand, post-process deduplication doesn't have an immediate impact on storage performance because deduplication can be scheduled to take place after the data is written. However, unlike inline dedupe, post-process deduplication requires the allocation of sufficient data storage to hold an entire data set before it's reduced via deduplication.

Criteria for selecting data for deduplication on primary storage

How do you determine which primary data is a good fit for deduplication? This is where the difference between structured and unstructured data comes into play. A database can be a significantly large file, subject to frequent and random reads or writes. For that reason, the majority of this data can be considered active. That means any processing overhead associated with deduplication could significantly impact I/O performance. In comparison, if we examine data on a file server, we quickly see that only a small portion of files are written to more than once and usually only for a short period of time after they were created. That means a very large portion of unstructured data is rarely accessed, making it a prime candidate for deduplication. This allows rules to be set to deduplicate data based on a "last access" time stamp. Shared storage for virtual servers or desktop environments also presents good opportunities for deduplication because many operating system files aren't unique.
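
A "last access" rule is simple to express. As a rough sketch (the share path is hypothetical, and note that NTFS may not update last-access times unless that behavior is enabled), the following finds files untouched for 30 days -- the cold, unstructured data that deduplicates well:

    # Files not accessed in 30 days are candidates for post-process deduplication
    Get-ChildItem -Path \\fileserver\share -Recurse -File |
        Where-Object { $_.LastAccessTime -lt (Get-Date).AddDays(-30) }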

Other data selection criteria include format and data retention. Encrypted data, and some imaging or streaming video files, tend to yield poor deduplication results because of their random nature. In addition, data must reside in storage for some time to generate enough duplicate blocks to make deduplication worth the effort. Transient data that's only staged to primary for a short period -- such as message queuing systems or temporary log files -- should be excluded. And while archived data yields the best deduplication ratios, that type of data isn't suitable for our primary storage discussion.

Inline vs. post-processing deduplication

Let's say you've excluded encrypted data, streaming video and transient data, and you've established rules to determine "last access" and retention. You've identified primary data storage that's a good fit for deduplication. This is when you'll have to choose between inline and post-process deduplication. The ability to deduplicate files once they've been inactive, or not accessed for some time, favors post-process deduplication over inline because only selected data is processed, at a later time, based on specific criteria and after it has been written to disk. Remember, this contrasts with inline deduplication, which would process all data as it's written and may impact performance of certain types of data. Although inline deduplication processes all data immediately, that doesn't always make it a poor choice for implementation on primary storage. It just means that storage tiering -- determining where you need the best performance -- is a crucial first step before deciding to apply deduplication technology to primary storage.

Not all data is right for your primary storage

Data that requires frequent access with optimum write performance won't be a good fit for data deduplication. Data that's difficult to deduplicate due to its format can be stored on a no-deduplication, lower performance disk array to keep costs down. The remaining unstructured data that doesn't require frequent or high-performance access (such as application or user file data) can be stored on a deduplication-enabled primary storage array.

How Windows Server 2012 R2 will change storage management

Microsoft Corp. revealed its forthcoming Windows Server 2012 R2 at this month's TechEd conference. Although a preview build won't be available for download for a few weeks, Microsoft has given us a tantalizing glimpse into what we can expect from the new operating system.

As you probably know, the original Windows Server (WS) 2012 release was jam-packed with hundreds of new features, the bulk of which were related to storage and virtualization. Microsoft's Windows Server 2012 R2 continues this trend by focusing on improvements to the storage subsystem and server virtualization.

Native tiered storage feature

With regard to storage, the biggest improvement in WS 2012 R2 is the inclusion of a native tiered storage feature that's an enhancement to Windows Storage Spaces. The idea behind this feature is that if an administrator adds both solid-state drives (SSDs) and hard disk drives (HDDs) to a storage space, the storage space engine will automatically differentiate between the two types of storage. In doing so, Windows will move hot blocks (blocks of storage that are read more frequently than other storage blocks) to SSD storage, while cooler blocks will remain on HDD storage. Remember, there's a very distinct trade-off between SSDs and HDDs. While SSDs are very fast, they're expensive and have comparatively low capacities. On the other hand, HDDs have plenty of capacity and are relatively inexpensive, but don't offer the performance of SSDs.

The storage tiering feature should greatly boost read performance on Windows servers. This will be especially true for servers acting as virtual desktop infrastructure (VDI) hosts, since VDI images are often identical to one another and share many common storage blocks when deduplicated.

The nice thing about storage tiering is that it will be simple to implement. Although this could potentially change by the time Windows Server 2012 R2 is released, the current build provides the option to create a storage tier within the New Virtual Disk Wizard. When an administrator creates a virtual disk, he can select a checkbox to enable storage tiering for the virtual disk. If storage tiering is enabled, the movement of hot and cold blocks between SSD and HDD storage becomes automatic. Furthermore, the option to tier or not to tier storage for individual virtual hard disks (VHDs) means you won't waste valuable SSD resources on VHDs for which performance isn't a high priority. Microsoft even provides an option to specify the size of each tier on a per-virtual disk basis.
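
The same capability is exposed to PowerShell. The sketch below, which assumes an existing storage pool named Pool1 containing both media types (names and sizes are illustrative), creates a tiered virtual disk:

    # Define an SSD tier and an HDD tier on the pool
    $ssd = New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName SSDTier -MediaType SSD
    $hdd = New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName HDDTier -MediaType HDD

    # Create a mirrored virtual disk that spans both tiers
    New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName TieredDisk `
        -StorageTiers $ssd, $hdd -StorageTierSizes 50GB, 450GB `
        -ResiliencySettingName Mirror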

Write-back caching

WS 2012 R2 can also use SSDs for persistent write-back caching. The idea here is that write operations can be cached to an SSD and then later written to HDD storage. This can help to improve overall storage performance.
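
In the current bits this appears to surface as a parameter at virtual disk creation time; a hypothetical example, reusing the Pool1 pool from above:

    # Reserve 1 GB of SSD capacity as a persistent write-back cache
    New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName CachedDisk `
        -Size 200GB -ResiliencySettingName Simple -WriteCacheSize 1GB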

Fault tolerance

Microsoft is offering some new fault-tolerant options. A dual parity option now creates a logical disk structure that is similar to RAID 6. There's also an option in the New Virtual Disk Wizard to create a three-way mirror.
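
Both options also map to parameters on the storage cmdlets; a sketch, again assuming a pool named Pool1:

    # Dual parity: survives two simultaneous disk failures, similar to RAID 6
    New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName DualParity `
        -Size 1TB -ResiliencySettingName Parity -PhysicalDiskRedundancy 2

    # Three-way mirror: keeps three copies of every block
    New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName Mirror3Way `
        -Size 500GB -ResiliencySettingName Mirror -NumberOfDataCopies 3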

Data deduplication

One of the big draws of Windows Server 2012 was native deduplication that could be applied on a per-volume basis without the use of third-party software. However, Microsoft placed a number of restrictions on the types of volumes that could be deduplicated. More importantly, Microsoft stated that Hyper-V hosts and VDI virtual hard disks weren't good candidates for deduplication.

WS 2012 R2 supports running virtual machines (VMs) on deduplicated storage. It's common for VMs to share a common set of storage blocks and, because deduplication tracks the location of storage blocks, VM performance generally improves when the underlying storage is deduplicated. For example, in a demo shown during the opening keynote at TechEd, five VMs were booted from non-deduplicated storage while an identical set of VMs was booted from deduplicated storage. The VMs booting from deduplicated storage booted in less than half the time of their counterparts.
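
Deduplication remains a per-volume setting, configurable in Server Manager or from PowerShell. A minimal sketch, assuming a data volume E: (the HyperV usage type is what R2 adds for VDI workloads):

    Import-Module Deduplication
    Enable-DedupVolume -Volume E: -UsageType HyperV   # R2: tuned for open VHD/VHDX files
    Start-DedupJob -Volume E: -Type Optimization      # kick off an optimization pass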

As you can see, Windows Server 2012 R2 offers some promising enhancements to storage management. Features such as storage tiering and new fault-tolerant mechanisms should provide SAN-like capabilities to organizations that don't have a SAN.

10 Jun 2013

When to use a PowerShell workflow

When should I use PowerShell Workflow?

Workflow is one of the new heavily advertised features in Windows PowerShell 3.0; it's available on Windows 7, Windows Server 2008, Windows Server 2008 R2, Windows 8 and Windows Server 2012. Despite this, there's a lot of confusion about what it is, what it can do and when you should use it.

A PowerShell workflow is similar to an enhanced PowerShell function. You put commands into it and tell PowerShell to run it. It's "enhanced" in that it supports a few features not found elsewhere in PowerShell, such as the ability to run multiple tasks in parallel, right within the script code. It also lacks a few features found elsewhere in PowerShell, such as support for the Switch construct.
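
A minimal example shows the flavor. This workflow runs two inventory commands in parallel; the server names are placeholders:

    workflow Get-ServerInventory {
        # Both activities are handed to Workflow Foundation to run concurrently
        parallel {
            Get-CimInstance -ClassName Win32_OperatingSystem
            Get-CimInstance -ClassName Win32_LogicalDisk
        }
    }

    # Workflows automatically gain common parameters such as -PSComputerName
    Get-ServerInventory -PSComputerName SERVER01, SERVER02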

Those additional features come from the fact that a PowerShell workflow isn't actually run inside PowerShell. Instead, it's converted to Windows Workflow Foundation (WWF), a part of the .NET Framework since version 3.0, which actually runs the converted code.

And although you're using PowerShell syntax, you have to follow WWF rules, so there's a steep learning curve. Options such as using variables, what commands you can use and how data passes from command to command all change a bit.

But the learning curve can be worth it. PowerShell workflows get built-in abilities to target multiple remote machines in parallel, provided those machines have PowerShell installed and the Remoting feature is enabled, something that's only a default on Windows Server 2012.
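
On the older operating systems, enabling Remoting on each target machine is a one-liner:

    Enable-PSRemoting -Force   # already on by default in Windows Server 2012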

Workflows also have several built-in parameters that can do cool things. Workflows can be interrupted and resumed, accommodating power outages, network hiccups and other temporary failures.

Workflows aren't the only way to achieve these things. It takes little extra work to send a script to multiple remote machines in parallel; the Invoke-Command cmdlet can do so quite well. By sticking with a "normal" PowerShell script, you avoid the need to learn all of WWF's rules and regulations.
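
For instance, a plain script block fans out to several machines in parallel with no workflow involved (the computer names are placeholders):

    # Invoke-Command contacts the listed computers in parallel (32 at a time by default)
    Invoke-Command -ComputerName SERVER01, SERVER02, SERVER03 -ScriptBlock {
        Get-Service -Name Spooler
    }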

The only thing truly unique to a PowerShell workflow is its ability to interrupt and resume -- and there are some rules and caveats around that ability. In some cases, the way in which you write a workflow in PowerShell may not even permit any interrupt/resume capability.
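
A sketch of that capability, with hypothetical names: Checkpoint-Workflow persists state, and Suspend-Workflow deliberately pauses the job so it can be resumed later.

    workflow Invoke-Maintenance {
        "Phase 1 complete"
        Checkpoint-Workflow   # persist state so the job can survive an interruption
        Suspend-Workflow      # pause here; the workflow becomes a suspended job
        "Phase 2 complete"
    }

    Invoke-Maintenance -AsJob -JobName Maint
    # later, once the job shows as suspended -- even from a new session:
    Get-Job -Name Maint | Resume-Job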

Microsoft FIM (Microsoft Forefront Identity Manager)

Microsoft Forefront Identity Manager (FIM) is a self-service identity management software suite for managing identities, credentials, and role-based access control policies across heterogeneous computing environments. 

FIM embeds self-help tools in Microsoft Outlook so end users can manage routine aspects of identity and access, such as resetting their own passwords, without requiring help desk assistance. FIM also allows end users to create their own security and email distribution lists and decide who to include in those lists.

IT administrators can use FIM to manage digital certificates and smart cards. FIM also provides administrative and automation tools.

The most recent version of Microsoft FIM, FIM 2010 R2, included improvements to diagnostics, reporting and performance. Service Pack 1 for FIM 2010 R2 adds support for Windows 8, Outlook 2013 and Windows Server 2012.

iSCSI target (Internet Small Computer System Interface target)

1. An iSCSI target is a storage resource located on an Internet Small Computer System Interface (iSCSI) server. iSCSI is a protocol used to link data storage devices over an IP network infrastructure.  (Also see: IP storage.)

2. Microsoft iSCSI Target is a role in Windows Server 2012 that can turn a computer running Windows Server into a storage device capable of providing shared block storage in the form of virtual hard disks (VHDs) to clients across a TCP/IP network. The device being accessed is called the target, and the server or client accessing the target is called the initiator.

Microsoft iSCSI target can be used to perform a variety of storage-related tasks, including providing shared storage for Microsoft Hyper-V, consolidating storage for multiple application servers, providing shared storage for applications hosted on a Windows failover cluster and enabling diskless computers to boot remotely from a single operating system (OS) image.
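
Once the role is installed, the setup is scriptable. A sketch with illustrative names, paths and initiator IQNs:

    # Create a target, a VHD-backed LUN, and map the two together
    New-IscsiServerTarget -TargetName "SqlTarget" `
        -InitiatorIds "IQN:iqn.1991-05.com.microsoft:sql01.contoso.com"
    New-IscsiVirtualDisk -Path "C:\iSCSIVirtualDisks\Lun1.vhd" -SizeBytes 40GB
    Add-IscsiVirtualDiskTargetMapping -TargetName "SqlTarget" `
        -Path "C:\iSCSIVirtualDisks\Lun1.vhd"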

Windows Server 2012 R2, System Center 2012 R2 support hybrid clouds

New versions of Windows Server 2012 and System Center 2012 include hybrid cloud capabilities that give IT administrators ways to move workloads between private and public clouds.

Microsoft also added support for multi-device management and the company changed part of its Azure billing.

Windows Server 2012 R2, entering preview this month, includes features for enterprise hybrid clouds through software-defined networking (SDN), storage enhancements and virtual machine portability. The new products were demoed here at TechEd 2013.

This allows customers to create a hybrid cloud to run some infrastructure in their own environment, some of it in the public cloud or some of it with a service provider, said Eron Kelly, a Microsoft spokesperson.

Microsoft also introduced Hyper-V Network Virtualization, a new technology that uses SDN to define how a network can reach across private and service provider data centers.

"[A] network no longer just has to live behind my firewall on-premises and in my data center that network can now be defined to span both on-premises environment and the Azure public cloud," Kelly said.

The SDN functionality opens up the possibility for enterprises to develop apps that work with both public and private cloud scenarios, Kelly said. For example, some customers' apps can "burst into the cloud" when running high-volume tasks.

Beyond virtual networking improvements, there are storage enhancements. Windows Server 2012 R2 will enable admins to manage the storage infrastructure -- SSDs or traditional hard drives -- used for different data sets.

For example, when certain application scenarios demand higher performance, Windows Server 2012 R2 can designate that workload to an SSD, Kelly said.

The company also made virtual machine performance and portability improvements. Windows Server 2012 R2 will make it easier to set up a virtual machine test and development environment in Azure and pull it down to run on-premises.

In addition, the process for moving VMs from on-premises to other third-party cloud service providers has been simplified.

"I think there will be a demand for that, actually," said Mike Drips, an information architect at CSC in Houston, Texas. "In the past it has been painful to do with Azure. I think this is something corporations have wanted for some time but it had taken Microsoft quite a while to deliver it."

Windows Server 2012 R2 signals a shift, hinted at earlier this year, to a faster release strategy, codenamed "Blue," alongside client counterpart Windows 8.1.

Cross-platform device management with System Center 2012 R2

As more enterprises allow employees to use their own devices, Microsoft will deliver device management to the latest System Center version. In short: System Center 2012 R2 will play better with iOS and Android.

Now IT can manage devices using a single management interface and a common reporting infrastructure, and administrators can better control which corporate apps to publish, to which users and to which devices, based on where they are, Kelly said.

Single sign-on will also be supported for "key" enterprise apps, Kelly said.

Microsoft details Azure BizTalk, pricing changes

Microsoft revamped its pricing structure for VMs and worker roles, switching from per-hour to per-minute billing.

This, Kelly said, is a cost benefit for Azure users.

"Rather than rounding up to the nearest hour, we are rounding up to the nearest minute -- which we think is new in the industry and it gives customers better value," Kelly said.

TechEd attendees also saw a preview of Azure BizTalk, which will allow them to perform EDI processing in the cloud.

3 Jun 2013

Monitoring and Troubleshooting the DHCP Server

You can use the Event Viewer tool, located in the Administrative Tools folder, to monitor DHCP activity. Event Viewer stores events that are logged in the system log, application log, and security log. The system log contains events that are associated with the operating system. The application log stores events that pertain to applications running on the computer. Events that are associated with auditing activities are logged in the security log. All events that are DHCP-specific are logged in the System log. The DHCP system event log contains events that are associated with activities of the DHCP service and DHCP server, such as when the DHCP server started and stopped, when DHCP leases are close to being depleted, and when the DHCP database is corrupt.
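
On a DHCP server with PowerShell installed, you can also pull these events from the command line; a sketch, assuming the service logs under the DhcpServer event source:

    # Show the 50 most recent DHCP service events from the System log
    Get-EventLog -LogName System -Source DhcpServer -Newest 50 |
        Select-Object TimeGenerated, EventID, EntryType, Message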

A few DHCP system event log IDs are listed below:

    Event ID 1037 (Information): Indicates that the DHCP server has begun to clean up the DHCP database.
    Event ID 1038 (Information): Indicates that the DHCP server cleaned up the DHCP database for unicast addresses:

        0 IP address leases were recovered.
        0 records were deleted.

    Event ID 1039 (Information): Indicates that the DHCP server cleaned up the DHCP database for multicast addresses:

        0 IP address leases were recovered.
        0 records were deleted.

    Event ID 1044 (Information): Indicates that the DHCP server has concluded that it is authorized to start, and is currently servicing DHCP client requests for IP addresses.
    Event ID 1042 (Warning): Indicates that the DHCP service running on the server has detected other DHCP servers on the network.
    Event ID 1056 (Warning): Indicates that the DHCP service has determined that it is running on a domain controller, and no credentials are configured for DDNS registrations.
    Event ID 1046 (Error): Indicates that the DHCP service running on the server has determined that it is not authorized to start to service DHCP clients.

Using System Monitor to Monitor DHCP Activity

The System Monitor utility is the main tool for monitoring system performance. System Monitor can track various processes on the Windows system in real time, using a graphical display that you can use to view current or logged data. You can specify the elements or components that should be tracked on the local computer and on remote computers, and you can determine resource usage by monitoring trends. System Monitor data can be displayed in a graph, histogram, or report format. System Monitor uses objects, counters, and instances to monitor the system.

System Monitor is a valuable tool when you need to monitor and troubleshoot DHCP traffic being passed between the DHCP server and DHCP clients. Through System Monitor, you can set counters to monitor:

    The DHCP lease process
    The DHCP queue length
    Duplicate IP address discards
    DHCP server-side conflict attempts

To start System Monitor,

    Click Start, Administrative Tools, and then click Performance.

    When the Performance console opens, open System Monitor.

The DHCP performance counters that you can monitor to track DHCP traffic are:

    Acks/sec indicates the rate at which DHCPACK messages are sent by the DHCP server.
    Active Queue Length indicates how many packets are in the DHCP queue for processing by the DHCP server.
    Conflict Check Queue Length indicates how many packets are in the DHCP queue that are waiting for conflict detection.
    Declines/sec indicates the rate at which the DHCP server receives DHCPDECLINE messages.
    Discovers/sec indicates the rate at which the DHCP server receives DHCPDISCOVER messages.
    Duplicates Dropped/sec indicates the rate at which duplicate packets are received by the DHCP server.
    Informs/sec indicates the rate at which the DHCP server receives DHCPINFORM messages.
    Milliseconds per packet (Avg.) indicates the average time the DHCP server takes to send a response.
    Nacks/sec indicates the rate at which DHCPNACK messages are sent by the DHCP server.
    Packets Expired/sec indicates the rate at which packets are expired while waiting in the DHCP server queue.
    Packets Received/sec indicates the rate that the DHCP server is receiving packets.
    Releases/sec indicates the rate at which DHCPRELEASE messages are received by the DHCP server.
    Requests/sec indicates the rate at which DHCPREQUEST messages are received by the DHCP server.
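
On servers with PowerShell available, the same counters can be sampled from the command line; a sketch, assuming the counter set is exposed under the English name DHCP Server:

    # Sample two DHCP counters every 5 seconds for one minute
    Get-Counter -Counter '\DHCP Server\Requests/sec', '\DHCP Server\Acks/sec' `
        -SampleInterval 5 -MaxSamples 12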

Using Network Monitor to Monitor DHCP Lease Traffic

You can use Network Monitor to monitor network traffic, and to troubleshoot network issues or problems. The Network Monitor version shipped with Windows Server 2003 allows you to monitor network activity and use the gathered information to manage and optimize traffic, identify unnecessary protocols, and detect problems with network applications and services. In order to capture frames, you have to install the Network Monitor application and the Network Monitor driver on the server where you are going to run Network Monitor. The Network Monitor driver makes it possible for Network Monitor to receive frames from the network adapter.

The two versions of Network Monitor are:

    The Network Monitor version included with Windows Server 2003: With this version of Network Monitor, you can monitor network activity only on the local computer running Network Monitor.
    The Network Monitor version (full) included with Microsoft Systems Management Server (SMS): With this version, you can monitor network activity on all devices on a network segment. You can capture frames from a remote computer, resolve device names to MAC addresses, and determine the user and protocol that is consuming the most bandwidth.

Because of these features, you can use Network Monitor to monitor and troubleshoot DHCP lease traffic. You can use the Network Monitor version included in Windows Server 2003 to capture and analyze the traffic being received by the DHCP server. Before you can use Network Monitor to monitor DHCP lease traffic, you first have to install it. The Network Monitor driver is automatically installed when you install Network Monitor.

How to install Network Monitor

    Click Start, and then click Control Panel.
    Click Add Or Remove Programs to open the Add Or Remove Programs dialog box.
    Click Add/Remove Windows Components.
    Select Management and Monitoring Tools and click the Details button.
    On the Management and Monitoring Tools dialog box, select the Network Monitor Tools checkbox and click OK.
    Click Next when you are returned to the Windows Components Wizard.
    If prompted during the installation process for additional files, place the Windows Server 2003 CD-ROM into the CD-ROM drive.
    Click Finish on the Completing the Windows Components Wizard page.

Capture filters disregard frames that you do not want to capture before they are stored in the capture buffer. When you create a capture filter, you define settings that can be used to detect the frames that you do want to capture. You can design capture filters in the Capture Window to only capture specific DHCP traffic, by selecting Filter from the Capture menu. You can also create a display filter after you have captured data. A display filter enables you to decide what is displayed.

How to start a capture of DHCP lease traffic in Network Monitor

    Open Network Monitor.
    On the Capture menu, click Start.
    If you want to examine captured data during the capture, select Stop And View from the Capture menu.

Understanding DHCP Server Log Files

DHCP server log files are comma-delimited text files. Each log entry represents one line of text. Through DHCP logging, you can log many different events. A few of these events are listed below:

    DHCP server events
    DHCP client events
    DHCP leasing
    DHCP rogue server detection events
    Active Directory authorization

The DHCP server log file format is depicted below. Each log file entry has the fields listed below, and in this particular order as well:

    ID: This is the DHCP server event ID code. Event codes are used to describe information on the activity which is being logged.
    Date: The date when the particular log file entry was logged on your DHCP server.
    Time: The time when the particular log file entry was logged on your DHCP server.
    Description: This is a description of the particular DHCP server event.
    IP Address: This is the IP address of the DHCP client.
    Host Name: This is the host name of the DHCP client.
    MAC Address: This is the MAC address used by the DHCP client's network adapter.

DHCP server log files use reserved event ID codes. These event ID codes describe information on the activities being logged. The actual log file only describes event ID codes which are lower than 50.

A few common DHCP server log event ID codes are listed below:

    00 indicates the log was started.
    01 indicates the log was stopped.
    02 indicates the log was temporarily paused due to low disk space.
    10 indicates a new IP address was leased to a client.
    11 indicates a lease was renewed by a client.
    12 indicates a lease was released by a client.
    13 indicates an IP address was detected to be in use on the network.
    14 indicates a lease request could not be satisfied due to the scope's address pool being exhausted.
    15 indicates a lease was denied.
    16 indicates a lease was deleted.
    17 indicates a lease was expired.
    20 indicates a BootP address was leased to a client.
    21 indicates a dynamic BOOTP address was leased to a client.
    22 indicates a BOOTP request could not be satisfied due to the address pool of the scope for BOOTP being exhausted.
    23 indicates a BOOTP IP address was deleted after confirming it was not being used.
    24 indicates an IP address cleanup operation has started.
    25 indicates IP address cleanup statistics.
    30 indicates a DNS update request.
    31 indicates DNS update failed.
    32 indicates DNS update successful.
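
Because the log is comma-delimited with a fixed field order, it can be filtered with a few lines of PowerShell. A sketch, assuming the default log location and day-of-week file naming; the log's explanatory preamble lines simply fail the ID test and drop out:

    # Show new-lease entries (event ID 10) from Monday's audit log
    $fields = 'ID','Date','Time','Description','IPAddress','HostName','MACAddress'
    Import-Csv "$env:SystemRoot\System32\Dhcp\DhcpSrvLog-Mon.log" -Header $fields |
        Where-Object { $_.ID -eq '10' } |
        Select-Object Time, IPAddress, HostName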

The following DHCP server log event ID codes are not described in the DHCP log file. These DHCP server log event ID codes relate to the DHCP server's Active Directory authorization status:

    50 – Unreachable domain: The DHCP server could not locate the applicable domain for its Active Directory installation.
    51 – Authorization succeeded: The DHCP server was authorized to start on the network.
    52 – Upgraded to a Windows Server 2003 operating system: The DHCP server was recently upgraded to a Windows Server 2003 OS, therefore, the unauthorized DHCP server detection feature (used to determine whether the server has been authorized in Active Directory) was disabled.
    53 – Cached authorization: The DHCP server was authorized to start using previously cached information. Active Directory was not visible at the time the server was started on the network.
    54 – Authorization failed: The DHCP server was not authorized to start on the network. When this event occurs, it is likely followed by the server being stopped.
    55 – Authorization (servicing): The DHCP server was successfully authorized to start on the network.
    56 – Authorization failure: The DHCP server was not authorized to start on the network and was shut down by Windows Server 2003 OS. You must first authorize the server in the directory before starting it again.
    57 – Server found in domain: Another DHCP server exists and is authorized for service in the same Active Directory domain.
    58 – Server could not find domain: The DHCP server could not locate the specified Active Directory domain.
    59 – Network failure: A network-related failure prevented the server from determining if it is authorized.
    60 – No DC is DS enabled: No Active Directory DC was located. For detecting whether the server is authorized, a domain controller that is enabled for Active Directory is needed.
    61 – Server found that belongs to DS domain: Another DHCP server that belongs to the Active Directory domain was found on the network.
    62 – Another server found: Another DHCP server was found on the network.
    63 – Restarting rogue detection: The DHCP server is trying once more to determine whether it is authorized to start and provide service on the network.
    64 – No DHCP enabled interfaces: The DHCP server has its service bindings or network connections configured so that it is not enabled to provide service.

How to change DHCP log files location

    Open the DHCP console.
    Right-click the DHCP server node and select Properties from the shortcut menu.
    The DHCP Server Properties dialog box opens.
    Click the Advanced tab.
    Change the audit log file location in the Audit Log File Path text box.
    Click OK.

How to disable DHCP logging

    Open the DHCP console.
    Right-click the DHCP server node and select Properties from the shortcut menu.
    The DHCP Server Properties dialog box opens.
    On the General tab, clear the Enable DHCP Audit Logging checkbox to disable DHCP server logging.
    Click OK.

Troubleshooting the DHCP Client Configuration

A DHCP failure usually exists when the following events occur:

    A DHCP client cannot contact the DHCP server.
    A DHCP client loses connectivity.

When these events occur, one of the first tasks you need to perform is to determine whether the connectivity issues occurred because of the actual DHCP client configuration, or whether it occurred because of some other network issue. You do this by determining the address type of the IP address of the DHCP client.

To determine the address type,

    Use the Ipconfig /all command to determine whether the client received an IP address lease from the DHCP server. The client received an address from the DHCP server if the output displays:

        DHCP Enabled set to Yes.
        The address listed as IP Address, not as Autoconfiguration IP Address.

    You can also use the status dialog box for the network connection to determine the IP address type for the client. To view this information, double-click the appropriate network connection in the Network Connections dialog box, and then click the Support tab. The IP address type should be displayed as Assigned By DHCP.
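
From a command prompt, the quickest tell is the address itself; a 169.254.x.x autoconfiguration address means the client never reached a DHCP server:

    # Filter the ipconfig output down to the DHCP and address lines
    ipconfig /all | findstr /i "DHCP Autoconfiguration Address"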

If, after the above checks, you conclude that the IP address was assigned to the client by the DHCP server, then some other network issue is the cause of the connectivity problems being experienced; the issue is not due to an IP addressing problem on the client.

When a client has an incorrect IP address, it is probably because the computer could not contact the DHCP server. When this occurs, the computer assigns its own IP address through Automatic Private IP Addressing (APIPA).

Computers could be unable to contact the DHCP server for a number of reasons:

    A problem might exist with the hardware or software of the DHCP server.
    A data-link protocol issue could be preventing the computer from communicating with the network.

    The DHCP server and the client are on different LANs and there is no DHCP Relay Agent. A DHCP Relay Agent enables a DHCP server to handle IP address requests of clients that are located on a different LAN.

When a DHCP client is assigned an IP address that is currently being used by another client, then an address conflict has occurred.

The process that occurs to detect duplicate IP addresses is illustrated below:

    When the computer starts, the system checks for any duplicate IP addresses.
    The TCP/IP protocol stack is disabled on the computer when the system detects duplicate IP addresses.
    An error message is shown that indicates the hardware address of the other system that this computer is in conflict with.
    The computer that initially owned the duplicate IP address experiences no interruptions and operates as normal.
    You have to reconfigure the conflicting computer with a unique IP address so that the TCP/IP protocol stack can be enabled on that particular computer again.

When address conflicts exist, a warning is displayed:

    A warning is displayed in the system tray.
    A warning message is written to the System log, which you can view in Event Viewer.

Address conflicts usually occur under the following circumstances:

    You have competing DHCP servers in your environment: You can use the Dhcploc.exe utility to locate any rogue DHCP servers. The Dhcploc.exe utility is included with the Windows Support Tools. To solve the competing DHCP server issue, you have to locate the rogue DHCP servers, remove the necessary rogue DHCP servers, and then check that no two DHCP servers can allocate IP address leases from the same IP address range.

    A scope redeployment has occurred: You can recover from a scope redeployment through the following strategy:

        Increase the conflict attempts on the DHCP server.
        Renew your DHCP client leases.

    One of the following methods can be used to renew your DHCP client leases:

        Use the Ipconfig /renew command.
        Use the Repair button on the Support tab of the connection's status dialog box.

    When you click the Repair button to renew the DHCP client lease, the following process occurs:

        A DHCPREQUEST message is broadcast on the network to renew the client's IP address lease.
        The ARP cache is flushed.
        The NetBIOS cache is flushed.
        The DNS cache is flushed.
        The NetBIOS name and IP address of the client are registered again with the WINS server.
        The computer name and IP address of the client are registered again with the DNS server.
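
Equivalently, you can run the same sequence by hand from a command prompt:

    ipconfig /renew        # broadcast a DHCPREQUEST to renew the lease
    arp -d *               # flush the ARP cache
    nbtstat -R             # purge and reload the NetBIOS name cache
    nbtstat -RR            # re-register NetBIOS names with WINS
    ipconfig /flushdns     # flush the DNS resolver cache
    ipconfig /registerdns  # re-register the client's name with DNS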

You can enable server-side conflict detection through the following process:

    Open the DHCP console.
    Right-click the DHCP server in the console tree, and select Properties from the shortcut menu.
    When the Server Properties dialog box opens, click the Advanced tab.
    Set the number of times that the DHCP server should run conflict detection prior to it leasing an IP address to a client.
    Click OK.
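
The same setting is scriptable; a sketch using netsh as it appears in Windows Server 2003 (verify the syntax on your build):

    netsh dhcp server set detectconflictretry 2   # try each address twice before leasing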

A few troubleshooting strategies which you can use when a DHCP client cannot obtain an IP address from the DHCP server are summarized below:

    Use the Ipconfig /renew command or the Repair button of the status dialog box (Support tab) of the connection to refresh the IP configuration of the client.
    Following the above, verify that the DHCP server is enabled, and that a configured DHCP Relay Agent exists in the broadcast range.
    If the client still cannot obtain an IP address from the DHCP server, check that the actual physical connection to the DHCP server, or DHCP Relay Agent is operating correctly and is not broken.
    Verify the status of the DHCP server and DHCP Relay Agent.
    If the issue still persists after all the above checks have been performed, you might have an issue at the DHCP server or a scope issue might exist.
    When troubleshooting the DHCP server:

        Check that the DHCP server is installed and enabled.
        Check that the DHCP server is correctly configured.
        Verify that the DHCP server is authorized.

    When troubleshooting the scope configured for the DHCP server:

        Check that the scope is enabled.
        Check whether all the available IP leases have already been assigned to clients.

A few troubleshooting strategies which you can use when a DHCP client obtains an IP address from the incorrect scope are summarized below:

    First determine whether competing DHCP servers exist on your network. Use the Dhcploc.exe utility, included with the Windows Support Tools, to locate rogue DHCP servers that are allocating IP addresses to clients.
    If no rogue DHCP servers are located through the Dhcploc.exe utility, your next step is to verify that each DHCP server is allocating IP address leases from unique scopes. There should be no overlapping of the address space.
    If you have multiple scopes on your DHCP server, and the DHCP server is assigning IP addresses to clients on remote subnets, verify that the DHCP Relay Agent used to enable communication with the DHCP server has the correct address.

Troubleshooting the DHCP Server Configuration

If you have clients that cannot obtain IP addresses from the DHCP server, even though they can contact the DHCP server, verify the following:

    Verify that the DHCP Server service is running on the particular server.
    Check the actual TCP/IP configuration settings on the DHCP server.
    If you are using the Active Directory directory service, verify that the DHCP server is authorized.
    The DHCP server could be configured with the incorrect scope. Check that the scope is correct on the DHCP server, and verify that it is active.

When you need to verify the configuration of the DHCP server, use the following process:

    First check that the DHCP server is configured with the correct IP address. The network ID of the address being used must be the same as that of the subnet for which the DHCP server is expected to assign IP addresses to clients.
    Verify the network bindings of the DHCP server. The DHCP server must be bound to the particular subnet. To check this,

        Open the DHCP console.
        Right-click the DHCP server in the console tree, and select Properties from the shortcut menu.
        When the Server Properties dialog box opens, click the Advanced tab.
        Click the Bindings button.
    Check that the DHCP server is authorized in Active Directory. You have to authorize the DHCP server in Active Directory so that it can provide IP addresses to your DHCP clients. To authorize the DHCP server:
        Open the DHCP console.
        In the console tree, expand the DHCP server node.
        Click the DHCP server that you want to authorize.
        Click the Action menu, and then select Authorize.
    Verify the scope configuration associated with the DHCP server:

        Check that the scope is activated. To activate a scope, open the DHCP console, right-click the scope in the console tree, and then select Activate from the shortcut menu.
    Verify that the scope is configured with the correct IP address range.
    Verify that there are available IP address leases which can be assigned to your DHCP clients.
    Verify the exclusions which are specified in the address pool. Confirm that all exclusions are valid and necessary. You need to verify that no IP addresses are being unnecessarily excluded.
    Verify the reservations which are specified. If you have a client that cannot obtain a reserved IP address, check whether the same address is also defined as an exclusion in the address pool. All reserved IP addresses must fall within the address range of the scope. Check too that the MAC addresses were successfully registered for all IP addresses that are reserved.
    If you have DHCP servers that contain multiple scopes, check that each of these scopes is configured correctly.

Troubleshooting DHCP Database Issues

The DHCP service uses a number of database files to maintain DHCP-specific data or information on IP address leases, scopes, superscopes, and DHCP options. The DHCP database files, located in the %systemroot%\System32\Dhcp folder, are listed below. These files remain open while the DHCP service is running on the server. You should therefore not change any of these files while the DHCP service is running.

    Dhcp.mdb: This is considered the main DHCP database file because it contains all scope information.
    Dhcp.tmp: This file contains a backup copy of the database file which was created during re-indexing of the DHCP database.
    J50.log: This log file records changes before they are written to the DHCP database file.
    J50.chk: This checkpoint file tells DHCP which log files still have to be recovered.

If you need to change the role of the DHCP server, and move its functions to another server, it is recommended that you migrate the DHCP database to the new DHCP server. This strategy prevents errors that occur when you manually attempt to recreate information in the DHCP database of the destination DHCP server.

To migrate an existing DHCP database to a new DHCP server,

    Open the DHCP console.
    Right-click the DHCP server whose database you want to move to a different server, and select Backup from the shortcut menu.
    When the Browse For Folder dialog box opens, select the folder to which the DHCP database should be backed up. Click OK.
    To prevent the DHCP server from allocating new IP addresses to clients once the DHCP server database is backed up, you have to stop the DHCP server. Open the Services console, double-click the DHCP Server service, and, when the DHCP Server Properties dialog box opens, select Disabled from the Startup Type drop-down list.
    Copy the folder that contains the backup to the new DHCP server. You now have to restore the DHCP backup at the destination DHCP server.
    Open the DHCP console.
    Right-click the destination DHCP server for which you want to restore the DHCP database, and select Restore from the shortcut menu.
    When the Browse For Folder dialog box opens, select the folder that contains the backup of the database that you want to restore. Click OK.
    Click Yes when prompted to restore the database, and to stop and restart the DHCP service.
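
On Windows Server 2003 the move can also be scripted; a sketch using the netsh dhcp server export and import commands (the file path is illustrative):

    # On the source server:
    netsh dhcp server export C:\dhcpdb.txt all
    # Copy the file to the destination server, then:
    netsh dhcp server import C:\dhcpdb.txt all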

If your lease information in the DHCP database does not correspond to the actual IP addresses leased to clients on the network, you can delete your existing database files, and commence with a clean (new) database. To do this,

    Stop the DHCP service.
    Remove all the DHCP database files from the %systemroot%\System32\Dhcp folder.
    Restart the DHCP service.
    You can rebuild the contents of the database by reconciling the DHCP scopes. The DHCP console is used for this.

When DHCP database information is inconsistent with what is on the network, corrupt, or missing, you can reconcile DHCP data for the scopes to recover the database. The DHCP service stores IP address lease data as follows:

    Detailed IP address lease information is stored in the DHCP database.
    Summary IP address lease information is stored in the Registry.

These sets of information are compared when scopes are reconciled. Before you can reconcile the DHCP server's scopes, you first have to stop the DHCP service running on the server. You can repair any inconsistencies which are detected by the comparison between the contents of the DHCP database, and the contents of the Registry.

How to reconcile the DHCP database

    Open the DHCP console.
    Right-click the DHCP server for which you want to reconcile the DHCP database, and then select Reconcile All Scopes from the shortcut menu. The Reconcile All Scopes command also appears as an Action menu item.
    When the Reconcile All Scopes dialog box opens, click Verify to start the DHCP database reconciliation process.
    When no inconsistencies are reported, click OK.
    When inconsistencies are detected, select the addresses which need to be reconciled, and then click Reconcile. The inconsistencies are repaired.

How to reconcile a single scope

    Open the DHCP console.
    In the console tree, expand the DHCP server node that contains the scope which you want to reconcile.
    Right-click the scope and then select Reconcile from the shortcut menu.
    When the Reconcile dialog box opens, click Verify to start the scope reconciliation process.
    When no inconsistencies are detected, click OK.
    When inconsistencies are detected, select the addresses which need to be reconciled, and then click Reconcile. The inconsistencies are repaired.