Microsoft infrastructure architecture considerations: part 2 (remote offices)

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Continuing from my earlier post, which set the scene for this series on the architectural considerations for designing a predominantly Microsoft IT infrastructure, in this post I’ll look at remote offices.

Geographically dispersed organisations face a number of challenges in supporting remote offices, including: WAN performance and reliability; provisioning new services, applications and servers; management; remote user support; user experience; data security; space; and cost.

One approach that can help with some (though not all) of these concerns is placing a domain controller (DC) in each remote location; but, until recently, this has been problematic because it increases the overall number of servers (it’s not advisable to co-locate other services on a domain controller, since administration can’t be delegated to a local administrator and the number of Domain Admins should be kept to a minimum) and because it’s a security risk (physical access to the domain controller makes a potential attacker’s job so much simpler). For these reasons, Microsoft introduced read-only domain controllers (RODCs) in Windows Server 2008.
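As an illustration of what a branch RODC deployment looks like in practice, an RODC can be promoted from the command line with dcpromo and an unattended answer file (handy when the branch server is a server core installation). The sketch below is illustrative only – the domain, site and file names are hypothetical, and options such as the Directory Services Restore Mode password, the password replication policy and the delegated branch administrator have been omitted:

    ; C:\rodc-answer.txt – minimal unattended promotion of a read-only replica (values are hypothetical)
    [DCInstall]
    ReplicaOrNewDomain=ReadOnlyReplica
    ReplicaDomainDNSName=corp.example.com
    SiteName=Branch01
    InstallDNS=Yes
    ConfirmGc=Yes
    UserDomain=corp.example.com
    UserName=administrator
    Password=*
    RebootOnCompletion=Yes

    rem promote the branch server using the answer file above
    dcpromo /unattend:C:\rodc-answer.txt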

There are still some considerations as to whether this is the appropriate solution though. Benefits include:

  • Administrative role separation.
  • Faster logon times (improved access to data).
  • Isolated corruption area.
  • Improved security.

whilst other considerations and potential impacts include:

  • The need for a schema update (a sketch of the adprep commands follows this list).
  • Careful RODC placement.
  • Impact on directory-enabled applications.
  • Possibility of site topology design changes.
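
On the schema update point: before the first RODC is introduced, the forest and domains need to be prepared with adprep from the Windows Server 2008 media (the \sources\adprep folder). Roughly, and subject to checking against the environment in question:

    rem run once on the schema master to extend the schema for Windows Server 2008
    adprep /forestprep

    rem run on the infrastructure master in each domain
    adprep /domainprep /gpprep

    rem run once in the forest to prepare for read-only domain controllers
    adprep /rodcprep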

Regardless of whether a remote office DC (either using the RODC capabilities or as a full DC) is deployed, server sprawl (through the introduction of branch office servers for a variety of purposes) can be combatted with the concept of a branch “appliance” – not an appliance in the true sense of dedicated hardware running an operating system and application that is heavily customised to meet the needs of a specific service – but a server built on appliance principles, running multiple workloads in a manner that allows for self-management and healing.

The first step is to virtualise the workloads. Hyper-V is built into Windows Server 2008 and the licensing model supports virtualisation at no additional cost. Using the server core installation option reduces the appliance (physical host) management burden, with a smaller attack surface and less patching (a sketch of enabling the Hyper-V role on a server core installation follows the list below). Multiple workloads may be consolidated onto a single physical host (increasing utilisation and removing end-of-life hardware) but there are some downsides too:

  • There’s an additional server to manage (the parent/host partition) and child/guest partitions will still require management but tools like System Center Virtual Machine Manager (SCVMM) can assist (particularly when combined with other System Center products).
  • A good business continuity plan is required – the branch office “appliance” becomes a single point of failure and it’s important to minimise the impact of this.
  • IT staff skills need to be updated to manage server core and virtualisation technologies.
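
To give an idea of how thin the host can be, enabling the Hyper-V role on a Windows Server 2008 server core installation (as mentioned above) comes down to a single command and a reboot – assuming the Hyper-V RTM update has already been applied:

    rem enable the Hyper-V role on a server core installation
    start /w ocsetup Microsoft-Hyper-V

    rem restart to complete the role installation
    shutdown /r /t 0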

So, what about the workloads on the branch office “appliance”? First up is the domain controller role (RODC or full DC) and this can be run as a virtual machine or as an additional role on the host. Which is “best” is entirely down to preference – running the DC alongside Hyper-V on the physical hardware means there is one less virtual machine to manage and operate (multiplied by the number of remote sites), whilst running it in a VM allows the DC to be “sandboxed”. One important consideration is licensing: if Windows Server 2008 standard edition is in use (which includes one virtual operating system environment, rather than enterprise edition’s four or datacenter edition’s unlimited virtualisation rights) then running the DC on the host saves a licence. There is still some administrative role separation, as the DC and virtualisation host will probably be managed centrally, with a local administrator taking some responsibility for the other workloads (such as file services).

That leads on to a common workload – file services. A local file server offers a good user experience but is often difficult to back up and manage. One solution is to implement DFS-R in a hub and spoke arrangement and to keep the backup responsibility in the data centre. If the remote file server fails, then replication can be used to restore from a central server. Of course, DFS-R is not always ideal for replicating large volumes of data; however the DFS arrangement allows users to view local and remote data as though it were physically stored in a single location, and there have been a number of improvements in Windows Server 2008 DFS-R (cf. Windows Server 2003 R2). In addition, SMB 2.0 is less “chatty” than previous implementations, allowing for performance benefits when using a Windows Vista client with a Windows Server 2008 server.
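
For illustration, the hub and spoke replication group for a branch file server might be created from the command line with dfsradmin, along the lines of the sketch below. Treat this as a rough outline only – the group, folder and server names are hypothetical and the exact parameters should be verified with dfsradmin <object> <action> /? (local paths and the primary member still need to be set before replication will start):

    rem create a replication group and a replicated folder (names are hypothetical)
    dfsradmin rg new /rgname:"Branch01 Files"
    dfsradmin rf new /rgname:"Branch01 Files" /rfname:"UserData"

    rem add the hub and branch file servers as members
    dfsradmin mem new /rgname:"Branch01 Files" /memname:HUBFS01
    dfsradmin mem new /rgname:"Branch01 Files" /memname:BRANCHFS01

    rem connect hub and spoke (one connection in each direction)
    dfsradmin conn new /rgname:"Branch01 Files" /sendmem:HUBFS01 /recvmem:BRANCHFS01
    dfsradmin conn new /rgname:"Branch01 Files" /sendmem:BRANCHFS01 /recvmem:HUBFS01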

Using these methods, it should be possible to avoid remote file server backups and remote DCs should not need to be backed up either (Active Directory is a multi-master replicated database so it has an inherent disaster recovery capability). All that’s required is some method of rebuilding a failed physical server – and the options there will depend on the available bandwidth. My personal preference is to use BITS to ensure that the remote server always holds a copy of the latest build image on a separate disk drive and then to use this to rebuild a failed server with the minimum of administrator intervention or WAN traffic.
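
As an example of the kind of transfer I have in mind, bitsadmin can be used to trickle the latest image down to the branch server using only spare bandwidth (the job name, URL and local path below are hypothetical, and in practice this would sit in a scheduled task that checks for a new image and polls the job until it completes):

    rem download the latest build image in the background at low priority
    bitsadmin /transfer BranchBuildImage /download /priority low ^
        http://deploy.headoffice.example.com/images/branch-appliance.wim D:\Images\branch-appliance.wim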

In the next post in this series, I’ll take a look at some of the considerations for using network access protection to manage devices that are not compliant with the organisation’s security policies.
