Microsoft infrastructure architecture considerations: part 7 (data centre consolidation)

Over the last few days, I’ve written a series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, based on the MCS Talks: Enterprise Infrastructure series. Just to summarise, the posts so far have been:

  1. Introduction.
  2. Remote offices.
  3. Controlling network access.
  4. Virtualisation.
  5. Security.
  6. High availability.

In this final infrastructure architecture post, I’ll outline the steps involved in building an infrastructure for data centre consolidation. In this example, the infrastructure is a large cluster of servers running virtualised workloads; however, many of the considerations would be the same for a non-clustered or physical solution:

  1. The first step is to build a balanced system – whilst it’s inevitable that there will be a bottleneck somewhere in the architecture, designing a balanced system means that workloads can be mixed to even out the overall demand – at least, that’s the intention. Using commodity hardware, it should be possible to strike a balance between cost and performance with the following configuration for the physical cluster nodes (a rough capacity model is sketched after this list):
    • 4-way quad-core (16 cores in total) Intel Xeon or AMD Opteron-based server (with Intel VT/AMD-V and NX/XD processor support).
    • 2GB RAM per processor core minimum (4GB per core recommended).
    • 4Gbps Fibre Channel storage solution.
    • Gigabit Ethernet NIC (onboard) for virtual machine management and migration/cluster heartbeat.
    • Quad-port gigabit Ethernet PCI Express NIC for virtual machine access to the network.
    • Windows Server 2008 x64 Enterprise or Datacenter edition (Server Core installation).
  2. Ensure that Active Directory is available (at least one physical DC is required in order to get the virtualised infrastructure up and running) – a quick availability check is sketched after this list.
  3. Build the physical servers that will provide the virtualisation farm (16 servers).
  4. Configure the SAN storage.
  5. Provision the physical servers using System Center Configuration Manager (again, a physical server will be required until the cluster is operational) – the servers should be configured as a 14 active/2 passive node failover cluster (see the cluster creation sketch after this list).
  6. Configure System Center Virtual Machine Manager for virtual machine provisioning, including the necessary PowerShell scripts and the virtual machine repository (a provisioning sketch follows this list).
  7. Configure System Center Operations Manager (for health monitoring – both physical and virtual).
  8. Configure System Center Data Protection Manager for virtual machine snapshots (i.e. use snapshots for backup).
  9. Replicate snapshots to another site within the SAN infrastructure (i.e. provide location redundancy).
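
To put some numbers behind step 1, here’s a rough capacity model for the cluster described above. It’s a sketch only: the cores-per-host and 4GB-per-core figures come from the specification above, while the average VM size and parent partition reserve are illustrative assumptions.

```powershell
# Rough capacity model for the balanced system in step 1.
$coresPerHost = 16   # 4-way quad-core, per the specification
$ramPerCoreGB = 4    # recommended figure from the specification
$activeHosts  = 14   # 14 active/2 passive cluster

$ramPerHostGB = $coresPerHost * $ramPerCoreGB   # 64GB per node
$clusterRamGB = $activeHosts * $ramPerHostGB    # 896GB across active nodes

# Assume an average VM of 2GB RAM and ~4GB reserved per host for the
# parent partition (both hypothetical figures)
$vmRamGB       = 2
$hostReserveGB = 4
$vmCapacity    = $activeHosts * [math]::Floor(($ramPerHostGB - $hostReserveGB) / $vmRamGB)

"Cluster RAM: {0}GB; approximate VM capacity: {1} VMs" -f $clusterRamGB, $vmCapacity
```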
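
For step 2, one quick way to confirm that a domain controller can be located before any virtualised DCs exist is via the .NET directory services classes (this assumes the server running the check is already joined to the domain):

```powershell
# Locate a DC for the current domain; this throws if no DC is
# reachable, which would block the rest of the build
$domain = [System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain()
"Located domain controller: {0}" -f $domain.FindDomainController().Name
```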
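
The cluster creation in step 5 might look something like the following, using the FailoverClusters PowerShell module (which shipped with Windows Server 2008 R2 – on Server 2008 RTM the equivalent steps would use cluster.exe). The node names and cluster IP address are hypothetical:

```powershell
# Generate the 16 node names (HV-NODE01..HV-NODE16 - hypothetical)
Import-Module FailoverClusters
$nodes = 1..16 | ForEach-Object { "HV-NODE{0:D2}" -f $_ }

# Validate the hardware/software configuration before clustering
Test-Cluster -Node $nodes

# Create the 16-node cluster; "active" vs. "passive" is a matter of
# where workloads are placed, not a property of the cluster itself
New-Cluster -Name "HV-CLUSTER" -Node $nodes -StaticAddress "192.168.1.50"
```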
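
And for step 6, a sketch of what scripted provisioning might look like with the Virtual Machine Manager PowerShell snap-in. This follows the VMM 2008 cmdlet conventions as I recall them, and the server, template, host, and VM names are all hypothetical:

```powershell
# Load the VMM 2008 snap-in and connect to the VMM server (hypothetical name)
Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
Get-VMMServer -ComputerName "VMM01"

# Pick a template from the library and a target cluster node (hypothetical names)
$template = Get-Template | Where-Object { $_.Name -eq "W2K8-Standard" }
$vmHost   = Get-VMHost -ComputerName "HV-NODE01"

# Create the new VM from the template
New-VM -Template $template -Name "APP-VM01" -VMHost $vmHost -Path "C:\VMs"
```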

This is a pretty high-level view, but it shows the basic steps required to create a large failover cluster with the potential to run many virtualised workloads. The basic principles are there, and the solution can be scaled down or out as required to meet the needs of a particular organisation.

The MCS Talks series is still running (and there are additional resources to complement the first session on infrastructure architecture). I also have some notes from the second session on core infrastructure that are ready to share so, if you’re finding this information useful, make sure you have subscribed to the RSS feed!
