Maximising Active Directory performance and replication troubleshooting

This content is 18 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Whenever I see that John Craddock and Sally Storey (from Kimberry Associates) are presenting a new seminar on behalf of Microsoft I try to attend. Not just because the content is well presented but because of the relevance of the material, some of which would involve wading through reams of white papers to find, but most importantly because, unlike most Microsoft presentations, there’s hardly any marketing material in there.

Last week, I saw John and Sally present on maximising Active Directory (AD) performance and in-depth replication troubleshooting. With a packed day of presentations there was too much good information (and detail) to capture in a single blog post, but what follows should be of interest to anyone looking to improve the performance of their AD infrastructure.

At the heart of Active Directory is the local security authority (LSA – lsass.exe), responsible for running AD as well as:

  • Netlogon service.
  • Security Accounts Manager service.
  • LSA Server service.
  • Secure sockets layer (SSL).
  • Kerberos v5 authentication.
  • NTLM authentication.

From examining this list of vital services, it becomes clear that tuning the LSA is key to maximising AD performance.

Sizing domain controllers (DCs) can be tricky. On the one hand, a DC can be very lightly loaded but at peak periods, or on a busy infrastructure, DC responsiveness will be key to the overall perception of system performance. The Windows Server 2003 deployment guide provides guidance on sizing DCs; however it will also be necessary to monitor the system to evaluate performance, predict future requirements and plan for upgrades.

Performance monitor is a perfectly adequate tool but running it can affect performance significantly, so event tracing for Windows (ETW – as described by Matt Pietrek) was developed for use on production systems. ETW uses a system of providers to pass events to event tracing sessions in memory. These event tracing sessions are controlled by one or more controllers, logging events to files, which can be played back to consumers (or alternatively the consumers can operate real-time traces). Microsoft Server Performance Advisor is a free download which makes use of ETW, providing a set of predefined collectors.

Some of the basic items to look at when considering DC performance are:

  • Memory – cache the database for optimised performance.
  • Disk I/O – the database and log files should reside on separate physical hard disks (with the logs on the fastest disks).
  • Data storage – do applications really need to store data in Active Directory or would ADAM represent a better solution?
  • Code optimisation – how do the directory-enabled applications use and store their data; and how do they search?

In order to understand search optimisation, there are some technologies that need to be examined further:

  • Ambiguous name resolution (ANR) is a search algorithm used to match an input string to any of the attributes defined in the ANR set, which by default includes givenName, sn, displayName, sAMAccountName and other attributes.
  • Medial (*string*) and final string (*string) searches are slow. Windows Server 2003 (but not Windows 2000 Server) supports tuple indexing, which improves performance when searching for medial and final substrings of at least three characters in length. Unfortunately, tuple indexing should be used sparingly because it does degrade performance when the indexes are updated.
  • Optimising searches involves defining a correct scope for the search, ensuring that regularly-searched attributes are indexed, basing searches on object category rather than class, adjusting the ANR as required and indexing for container and medial searches where required.
  • Logging can be used to identify expensive (more than a defined number of entries visited) and inefficient (searching a defined number of objects returns less than 10% of the entries visited) searches using the 15 Field Engineering value in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics (as described in Microsoft knowledge base article 314980), combined with HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters\Expensive Search Results Threshold (default is 10000) and HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters\Inefficient Search Results Threshold (default is 1000).
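To make the expensive/inefficient distinction concrete, here is a minimal sketch (in Python, with invented names) of how a search would be classified against those two thresholds – a search is expensive when it visits more entries than the expensive threshold, and inefficient when it visits at least the inefficient threshold of entries but returns fewer than 10% of them:

```python
# Illustrative classifier only; threshold values are the defaults quoted
# above, and the function/variable names are invented for this sketch.
EXPENSIVE_THRESHOLD = 10000   # Expensive Search Results Threshold (default)
INEFFICIENT_THRESHOLD = 1000  # Inefficient Search Results Threshold (default)

def classify_search(entries_visited, entries_returned):
    """Return the labels this search would attract in the event log."""
    labels = []
    if entries_visited > EXPENSIVE_THRESHOLD:
        labels.append("expensive")
    # Inefficient: enough entries visited, but under 10% actually returned.
    if (entries_visited >= INEFFICIENT_THRESHOLD
            and entries_returned < 0.1 * entries_visited):
        labels.append("inefficient")
    return labels
```

A search that visits 20,000 entries and returns only 100 of them would be logged as both expensive and inefficient; one that visits 50 entries attracts neither label, however few results it returns.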

However performant the directory is at returning results, the accuracy of those results is dependent upon replication (the process of making sure that the directory data is available throughout the enterprise).

The AD replication model is described as multimaster (i.e. changes can be made on any DC), loosely consistent (i.e. there is latency between changes being made and their availability throughout the enterprise, so it is impossible to tell if the directory is completely up-to-date at any one time – although urgent changes will be replicated immediately) and convergent (i.e. eventually all changes will propagate to all DCs, using a conflict resolution mechanism if required).

When considering replication, a topology and architecture need to be chosen that match performance with available bandwidth, minimise latency, minimise replication traffic and respond appropriately to loosely connected systems. In order to match performance with available bandwidth and provide efficient replication, it is important to describe the network topology to AD:

  • Sites are islands of good connectivity (once described as LAN-speed connections, but more realistically areas of the network where replication traffic has no negative impact).
  • Subnet objects are created to associate network segments with a particular site, so that a Windows client can locate the nearest DC for logons, directory searching and DFS paths – known as site (or client) affinity.
  • Site links characterise available bandwidth and cost (in whatever unit is chosen – money, latency, or something else).
  • Bridgehead servers are nominated (for each NC) to select the DC used for replication in and out of a site when replicating all NCs between DCs and replicating the system volume between DCs.

A communications transport is also required. Whilst intrasite communications use RPC, intersite communications (i.e. over site links) can use IP (RPC) for synchronous inbound messaging or SMTP for asynchronous messaging; however SMTP cannot be used for the domain NC, making SMTP useful when there is no RPC connection available (e.g. firewall restrictions) but also meaning that RPC is required to build a forest. Intersite communications are compressed by default but Windows 2000 Server and Windows Server 2003 use different compression methods – the Windows Server 2003 version is much faster, but does not compress as effectively. In reality this is not a problem as long as the link speed is 64Kbps or greater, but there are also options to revert to Windows 2000 Server compression or to disable compression altogether.

Site link bridges can be used to allow transitive replication (where a DC needs to replicate with its partner via another site). The default is for all links to be bridged; however there must be a valid schedule on both portions of the link in order for replication to take place.

The knowledge consistency checker (KCC), which runs every 15 minutes by default, is responsible for building the replication topology, based on the information provided in the configuration container. The KCC needs to know about:

  • Sites.
  • Servers and site affinity.
  • Global catalog (GC) servers.
  • Which directory partitions are hosted on each server.
  • Site links and bridges.

For intrasite replication, the KCC runs on each DC within a site and each DC calculates its own inbound replication partners, constructing a ring topology with dual replication paths (for fault tolerance). The order of the ring is based on the numerical value of the DSA GUID and the maximum hop count between servers is three, so additional optimising connectors are created where required. Replication of a naming context (NC) can only take place via servers that hold a copy of that NC. In addition, one or more partial NCs will need to be replicated to a GC.
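The ring construction can be sketched as follows (this is a simplification, not the real KCC algorithm: it orders the DCs by the numerical value of their DSA GUIDs and gives each DC its two ring neighbours as inbound partners; the creation of additional optimising connectors once hop counts exceed three is noted but omitted):

```python
import uuid

def build_ring(dsa_guids):
    """Order DCs by DSA GUID value and pair each with its ring neighbours.

    Returns (ring_order, partners) where partners maps each DC to the two
    neighbours providing its dual replication paths. With more than about
    seven DCs the ring hop count would exceed three, at which point the
    real KCC adds extra optimising connectors (not modelled here).
    """
    ring = sorted(dsa_guids, key=lambda g: g.int)
    partners = {}
    n = len(ring)
    for i, dc in enumerate(ring):
        partners[dc] = {ring[(i - 1) % n], ring[(i + 1) % n]}
    return ring, partners
```

The point of ordering by GUID is simply that every DC, computing independently, arrives at the same ring.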

Because each connection object created by the KCC defines an inbound connection from a specific source DC, the destination server needs to create a partnership with the source in order to replicate changes. This process works as follows:

  1. The KCC creates the required connections.
  2. The repsFrom attribute is populated by the KCC for all common NCs (i.e. the DC learns about inbound connections).
  3. The destination server requests updates, allowing the repsTo attribute to be populated at the source.

The repsTo attribute is used to send notifications for intrasite replication; however all DCs periodically poll their partners in case any changes are missed, based on a schedule defined in the NTDS Settings object for the site and a system of notification delays to avoid all servers communicating changes at the same time.

Windows 2000 Server stores the initial notification delay (default 5 minutes) and subsequent notification delay (default 30 seconds) in the registry; whereas Windows Server 2003 stores the initial notification delay (default 15 seconds) and subsequent notification delay (default 3 seconds) within AD (although individual DCs can be controlled via registry settings). This is further complicated by the fact that Windows 2000 Server DCs upgraded to Windows Server 2003 and still running at Windows 2000 forest functionality will use the Windows 2000 timings until the forest functional level is raised to Windows Server 2003. This means that, with a three-hop maximum between DCs, the maximum time taken to replicate changes from one DC within a site to another is 45 seconds for Windows Server 2003 forests and 15 minutes for Windows 2000 Server forests (based on three times the initial notification delay).
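The worst-case arithmetic above is simply three hops, each adding one initial notification delay:

```python
# Worst-case intrasite convergence, from the default delays quoted above.
HOPS = 3                       # maximum hop count between DCs in a site
ws2003_initial_delay = 15      # seconds (Windows Server 2003 forest)
w2k_initial_delay = 5 * 60     # seconds (Windows 2000 forest)

ws2003_worst_case = HOPS * ws2003_initial_delay  # 45 seconds
w2k_worst_case = HOPS * w2k_initial_delay        # 900 seconds = 15 minutes
```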

For certain events, a process known as urgent replication (with no initial notification delay) is invoked. Events that trigger urgent replication are:

  • Account lockout.
  • Changes to the account lockout policy or domain password policy.
  • Changes to the computer account.
  • Changes to domain trust passwords.

Some changes are immediately sent to the PDC emulator via RPC (a process known as immediate replication) and most of these changes also trigger urgent replication:

  • User password changes.
  • Account lockout.
  • Changes to the RID master role.
  • Changes to LSA secrets.

For intersite replication, one DC in each site is designated as the inter-site topology generator (ISTG). The KCC runs on the ISTG to calculate the inbound site connections and the ISTG automatically selects bridgehead servers. By default, notifications are not used for intersite replication (which relies on a schedule instead); however it is also possible to create affinity between sites with a high bandwidth backbone connection by setting the least significant bit of the site link’s option attribute to 1.
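Since the notification switch is just the least significant bit of the site link's options attribute, the manipulation is a one-liner; this sketch shows the bitwise logic only (reading and writing the attribute through a real LDAP client is omitted, and the function names are my own):

```python
USE_NOTIFY = 0x1  # least significant bit of the site link options attribute

def enable_notifications(options):
    """Set the notification bit, preserving any other option bits."""
    return options | USE_NOTIFY

def notifications_enabled(options):
    return bool(options & USE_NOTIFY)
```

Using OR (rather than assigning 1 outright) matters because other bits of the options attribute may already be in use.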

Be aware that schedules are displayed in local time, so when configuring schedules across time zones the time in one site will not match the time in the other. Also be aware that deleting a connector may orphan a DC (e.g. if replication has not completed fully and it has insufficient knowledge of the topology to establish a new connection).

Once the replication topology is established, a server needs to know what information it needs to replicate to its partners. This needs to include:

  • Just the data that has been changed.
  • All outstanding changes (even if a partner has been offline for an extended period).
  • Alternate replication paths (without data duplication).
  • Conflict resolution.

Each change to directory data is recorded with an update sequence number (USN), written to the metadata for each individual attribute or link value. The USN is used as a high watermark vector for each inbound replication partner (and each NC), identified by their DSA GUID. The source server will send all changes that have a higher USN. Because replication works on a ring topology, a process is required to stop unnecessary replication. This is known as propagation dampening and relies on another value called the up-to-dateness vector (one for each DC where the information originated). This is used to ensure that the source server does not send changes that have already been received. The highest committed USN attribute holds the highest USN used on a particular server.
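A much-simplified sketch of the dampening logic: each change carries its originating DC and originating USN, and a destination only needs changes above its up-to-dateness vector entry for that originating DC – so a change that has already arrived over one ring path is not sent again over the other (all data structures here are invented for the illustration):

```python
def changes_to_send(source_changes, dest_utd_vector):
    """Filter a partner's changes by the destination's up-to-dateness vector.

    source_changes: list of (originating_dc, originating_usn, data) tuples.
    dest_utd_vector: dict mapping originating DC -> highest USN already seen.
    """
    return [change for change in source_changes
            if change[1] > dest_utd_vector.get(change[0], 0)]
```

For example, if the destination's vector already records USN 5 from DC1, a pending DC1 change at USN 5 is suppressed while a DC2 change at USN 9 (vector entry 7) still flows.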

It is possible for the same attribute to be simultaneously updated at multiple locations, so each DC checks that a replicated change is “newer” than the information it holds before accepting the change. It determines which change is more up-to-date based on the replica version number, then the originating time stamp, and finally on the originating invocation ID (as a tie break).
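That precedence order maps neatly onto tuple comparison, which compares element by element from the left – a small sketch (field names are my own):

```python
def newer_wins(a, b):
    """Pick the winning write in a replication conflict.

    a, b: (version, originating_timestamp, originating_invocation_id).
    Python's tuple comparison checks version first, then timestamp, then
    invocation ID - exactly the precedence order described above.
    """
    return max(a, b)
```

So a change with a higher replica version number beats one with a later timestamp, and the invocation ID only decides genuinely simultaneous writes.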

Other replication issues include:

  • If an object is added to, or moved to, a container on one DC while that container is deleted on another DC, then the object will be placed in the LostAndFound container.
  • Adding or moving objects on different DCs can result in two objects with the same distinguished name (DN). In this case the newer object is retained and the other object name is appended with the object GUID.

It’s worth noting that running in Windows Server 2003 forest functional mode can significantly reduce replication traffic, as changes to multi-valued attributes (e.g. group membership) are replicated at the value level rather than the attribute level. Not only does this reduce replication traffic, but it also allows groups to be created with more than 5000 members and avoids data loss when a group membership is edited on multiple DCs within the replication latency period.
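A back-of-envelope comparison makes the saving obvious – adding one member to a 5000-member group replicates a single value under value-level (linked-value) replication, but the entire attribute under the Windows 2000-style behaviour (figures below are just for illustration):

```python
# Group membership before and after adding a single member.
members_before = {f"user{i}" for i in range(5000)}
members_after = members_before | {"newuser"}

# Attribute-level replication ships every value of the changed attribute;
# value-level replication ships only the values that actually changed.
attribute_level_payload = len(members_after)               # 5001 values
value_level_payload = len(members_after - members_before)  # 1 value
```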

If this has whetted your appetite for tuning AD (or if you’re having problems troubleshooting AD) then I recommend that you check out John and Sally’s Active Directory Forestry book (but beware – the book scores “extreme” on the authors’ own “geekometer scale”).
