Over the last few days, I’ve written a series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, based on the MCS Talks: Enterprise Infrastructure series. Just to summarise, the posts so far have been:
- Remote offices.
- Controlling network access.
- High availability.
In this final infrastructure architecture post, I’ll outline the steps involved in building an infrastructure for data centre consolidation. In this example the infrastructure is a large failover cluster of servers running a virtualisation platform; however, many of the considerations would be the same for a non-clustered or a physical solution:
- The first step is to build a balanced system – whilst it’s inevitable that there will be a bottleneck somewhere in the architecture, a balanced design allows workloads to be mixed to even out the overall demand – at least, that’s the intention. Using commodity hardware, it should be possible to strike a balance between cost and performance with the following configuration for the physical cluster nodes:
- 4-way quad core (16 core in total) Intel Xeon or AMD Opteron-based server (with Intel VT/AMD-V and NX/XD processor support).
- 2GB RAM per processor core minimum (4GB per core recommended).
- 4Gbps Fibre Channel storage solution.
- Gigabit Ethernet NIC (onboard) for virtual machine management and migration/cluster heartbeat.
- Quad-port gigabit Ethernet PCI Express NIC for virtual machine access to the network.
- Windows Server 2008 x64 Enterprise or Datacenter edition (Server Core installation).
- Ensure that Active Directory is available (at least one physical DC is required in order to get the virtualised infrastructure up and running).
- Build the physical servers that will provide the virtualisation farm (16 servers).
- Configure the SAN storage.
- Provision the physical servers using System Center Configuration Manager (again, a physical server will be required until the cluster is operational) – the servers should be configured as a 14 active/2 passive node failover cluster.
- Configure System Center Virtual Machine Manager for virtual machine provisioning, including the necessary PowerShell scripts and the virtual machine repository.
- Configure System Center Operations Manager (for health monitoring – both physical and virtual).
- Configure System Center Data Protection Manager for virtual machine snapshots (i.e. use snapshots for backup).
- Replicate snapshots to another site within the SAN infrastructure (i.e. provide location redundancy).
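To put some rough numbers behind the 14 active/2 passive configuration above, here’s a quick back-of-the-envelope sketch of the usable capacity of the cluster. All of the figures are taken from the example configuration in this post (16 nodes, 4-way quad-core servers, 4GB of RAM per core recommended); treat it as illustrative arithmetic, not a sizing tool:

```python
# Rough capacity sketch for the example cluster described above.
# Figures are assumptions taken from the post's example configuration.

NODES_TOTAL = 16        # physical cluster nodes
NODES_PASSIVE = 2       # 14 active / 2 passive failover configuration
CORES_PER_NODE = 4 * 4  # 4-way quad-core servers = 16 cores per node
GB_RAM_PER_CORE = 4     # recommended figure (2GB per core is the minimum)

def cluster_capacity(nodes_total, nodes_passive, cores_per_node, gb_per_core):
    """Return (usable cores, usable RAM in GB), holding the passive
    nodes in reserve so the cluster can absorb node failures."""
    active_nodes = nodes_total - nodes_passive
    usable_cores = active_nodes * cores_per_node
    usable_ram_gb = usable_cores * gb_per_core
    return usable_cores, usable_ram_gb

cores, ram_gb = cluster_capacity(
    NODES_TOTAL, NODES_PASSIVE, CORES_PER_NODE, GB_RAM_PER_CORE
)
print(f"Active nodes: {NODES_TOTAL - NODES_PASSIVE}")  # 14
print(f"Usable cores: {cores}")                        # 14 x 16 = 224
print(f"Usable RAM:   {ram_gb} GB")                    # 224 x 4 = 896
```

The point of keeping two nodes passive is that the virtual machine workload only ever consumes what the 14 active nodes can supply, so any two nodes can fail (or be taken down for patching) without overcommitting the survivors.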
This is a pretty high-level view, but it shows the basic steps required to create a large failover cluster with the potential to run many virtualised workloads. The basic principles are there and the solution can be scaled down (or out) as required to meet the needs of a particular organisation.
The MCS Talks series is still running (and there are additional resources to complement the first session on infrastructure architecture). I also have some notes from the second session on core infrastructure that are ready to share so, if you’re finding this information useful, make sure you have subscribed to the RSS feed!