Over the last few years, I’ve heard a lot about blade servers – mostly anecdotal comments that they produce so much heat that, for all the rack space saved through their use, a lot of empty space needs to be left in datacentres that weren’t designed for today’s high-density servers. In recent weeks I’ve attended a number of events where HP was showcasing their new c-class blade system and it does look as if HP have addressed some of the issues with earlier systems. I’m actually pretty impressed – to the point where I’d seriously consider their use.
HP’s c-class blade system was introduced in June 2006 and new models are gradually coming on stream as HP replaces the earlier p-class blades, sold since 2002 (and expected to be retired in 2007).
The new c-class enclosure requires 10U of rack space, which can be configured with up to 16 half-height server blades, 8 full-height server blades or a combination of the two. Next week, HP will launch direct-attached storage blades (as well as new server blades) and next year, they expect to launch shared storage blades (similar to the StorageWorks Modular Storage Array products). With the ability to connect multiple blade enclosures to increase capacity, an extremely flexible (and efficient) computing resource pool can be created.
At the back of the enclosure are up to 10 cooling fans, 6 power connections (three-phase connections are also available), 1 or 2 management modules, and up to 8 interconnect modules (e.g. Ethernet or fibre-channel pass-through modules or switches from HP, Cisco, Brocade and others).
There are a number of fundamental changes between the p-class and c-class blade systems. Immediately apparent is that each blade is physically smaller. This has been facilitated by moving all cooling fans off the blade itself and into the enclosure, as well as by the move from Ultra320 SCSI hard disks to small form-factor serial-attached SCSI (SAS) hard disks. Although the SAS disks currently have reduced storage capacity (compared with Ultra320 SCSI), a 146GB 10,000RPM SAS disk will be launched next week and 15,000RPM disks will be released in the coming months. Serial ATA (SATA) disks are also available, but not recommended for 24×7 operation. The new disks use a 2.5″ form factor and weigh significantly less than 3.5″ disks; consequently they require about half as much power to provide equivalent performance.
HP are keen to point out that the new cooling arrangements are highly efficient, with three separate airflows through the enclosure for cooling blades, power supplies and communications devices. Using a parallel, redundant and scalable enclosure cooling (PARSEC) architecture, the airflows include back-flow preventers and shut-off doors so that if a component is not installed, that part of the enclosure is not cooled. If the Thermal Logic control algorithm detects that management information is unavailable (e.g. if the onboard management module is removed), the variable-speed Active Cool fans fail open and automatically switch to full power – it really is impressive to see just how much air is pulled through the system by the fans, which are not dissimilar to tiny jet engines!
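To illustrate the fail-open behaviour, here’s a minimal Python sketch of the kind of control logic described – the function names, thresholds and linear ramp are my own illustrative assumptions, not HP’s Thermal Logic implementation:

```python
from typing import Optional

FULL_POWER = 100  # percent duty cycle

def fan_speed(management_data_available: bool,
              zone_temp_c: Optional[float]) -> int:
    """Return a fan duty cycle (0-100%) for one cooling zone."""
    if not management_data_available or zone_temp_c is None:
        # No telemetry (e.g. management module removed):
        # fail open and run the fans flat out rather than risk overheating.
        return FULL_POWER
    # Otherwise scale speed with the measured zone temperature
    # (a crude linear ramp between 25C and 45C, purely for illustration).
    return max(20, min(FULL_POWER, int((zone_temp_c - 25) * 4 + 20)))

print(fan_speed(False, None))  # 100 -> fail open at full power
print(fan_speed(True, 30.0))   # 40  -> proportional control
```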
Power is another area where improvements have been made: instead of using a separate power supply module, hot-swap power supply units are now integrated into the front of the enclosure. The Thermal Logic system dynamically adjusts power and cooling to meet energy budgets, so that instead of running multiple supplies at reduced power, some supplies run close to full capacity (hence more efficiently) whilst others are not used. If one power supply fails, the others take up the load, with switching taking around 1ms.
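The efficiency gain comes from loading a few supplies heavily rather than all of them lightly. As a rough sketch of that idea – the 2250W capacity figure and the allocation logic are my assumptions, not HP’s algorithm:

```python
import math

def supplies_to_enable(demand_w: float, psu_capacity_w: float = 2250.0,
                       redundancy: int = 1) -> int:
    """Return how many supplies to run for a given power demand.

    Supplies run most efficiently near full load, so use the minimum
    number that covers demand, plus 'redundancy' spares kept ready so
    a ~1ms switchover can absorb a failure.
    """
    active = max(1, math.ceil(demand_w / psu_capacity_w))
    return active + redundancy

print(supplies_to_enable(4000))  # 2 active + 1 redundant spare = 3
```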
Each blade server is a fully functional HP ProLiant industry-standard server – in fact, the BL model numbering mirrors the ML and DL ranges, adding 100 to the DL number, so a BL480c blade is equivalent to a DL380 rack-mount server (which itself adds 10 to the ML number – in this case, an ML370).
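In other words, the numbering is simple arithmetic – a trivial sketch, purely for illustration:

```python
# The +10 and +100 offsets are the rule described above, nothing official.
def ml_to_dl(ml_number: int) -> int:
    return ml_number + 10    # e.g. ML370 -> DL380

def dl_to_bl(dl_number: int) -> int:
    return dl_number + 100   # e.g. DL380 -> BL480c

ml = 370
print(f"ML{ml} -> DL{ml_to_dl(ml)} -> BL{dl_to_bl(ml_to_dl(ml))}c")
# ML370 -> DL380 -> BL480c
```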
Looking inside a blade, it becomes apparent how much space in a traditional server is taken up by power and cooling: apart from the disks at the front, most of the unit consists of the main board with CPUs and memory. A mezzanine card arrangement provides network or fibre-channel ports, which are connected via HP’s Virtual Connect architecture to the interconnect modules at the rear of the enclosure. This is the main restriction with a blade server – if PCI devices need to be employed, traditional servers will still be required; however, each half-height blade can accommodate two mezzanine cards (each with up to 2 fibre-channel or 4 Gigabit Ethernet ports) and a full-height blade can accommodate three. Half-height blades also include 2 network connections as standard and full-height blades have 4 – more than enough connectivity for most purposes. Each blade has between 2 and 4 hard disks, and the direct-attached storage blade will provide an additional 6 drives (SAS or SATA) in a half-height blade.
One of the advantages of using servers from tier 1 OEMs has always been the built-in management functionality (for years I argued that Compaq ProLiant servers cost more to buy but had a lower overall cost of ownership than other manufacturers’ servers) and HP are positioning the new blades in a similar way – as reducing the total cost of ownership, even if the initial purchase price is slightly higher. Management features included within the blade system include the onboard administrator console, with an HP Insight display at the front of the enclosure and up to two management modules at the rear in an active-standby configuration. The Insight display is based on technology from HP printers and includes a chat function, e.g. for a remote administrator to send instructions to an engineer (predefined responses can be set, or the engineer can respond in free text, but with just up, down and enter buttons that would take a considerable time – worse than sending a text message on a mobile phone!).
Each server blade has an integrated lights-out (iLO2) module, which is channelled via the onboard administrator console to allow remote management of the entire blade enclosure or the components within it – including real-time power and cooling control, device health and configuration (e.g. port mapping from blades to interconnect modules), and access to the iLO2 modules (console access via iLO2 seems much more responsive than in previous generations, largely due to the removal of much of the Java technology). As with ML and DL ProLiant servers, each blade server includes the ProLiant Essentials foundation pack – part of which is the HP Systems Insight Manager toolset – with further packs building on this to provide additional functionality such as rapid deployment, virtual machine management or server migration.
The Virtual Connect architecture between the blades and the interconnect modules removes much of the cabling associated with traditional servers. Offering a massive 5Tbps of bandwidth, the backplane would need to suffer four catastrophic failures before a port became unavailable. It also allows hot-spare blades to be provisioned: if a blade fails, its network connections (along with MAC addresses, worldwide port names and fibre-channel boot parameters) are automatically re-routed to a spare that can be brought online – a technique known as server personality migration.
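Conceptually, the migration looks something like the following Python sketch – the data structures and helper function are entirely hypothetical (Virtual Connect does this inside the enclosure, not in user code), but it shows what actually moves when a personality migrates:

```python
from dataclasses import dataclass

@dataclass
class Personality:
    mac_addresses: list      # Ethernet MACs presented to the fabric
    wwpns: list              # fibre-channel worldwide port names
    fc_boot_params: dict     # boot-from-SAN settings

def power_on(bay: int) -> None:
    # Hypothetical helper: bring the spare blade online.
    print(f"Powering on blade in bay {bay} with migrated personality")

def migrate(failed_bay: int, spare_bay: int, profiles: dict) -> None:
    """Reassign the failed bay's identity to the spare, then boot it."""
    profiles[spare_bay] = profiles.pop(failed_bay)
    power_on(spare_bay)

profiles = {3: Personality(["00:17:A4:77:00:01"],
                           ["50:01:43:80:01:23:45:67"],
                           {"boot_lun": 0})}
migrate(failed_bay=3, spare_bay=16, profiles=profiles)
```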
In terms of the break-even point for cost comparisons between blades and traditional servers, HP claim that it lies between 3 and 8 blades, depending on the connectivity options (i.e. less than half an enclosure). They also point out that because the blade enclosure includes connectivity, it’s not just server costs that need to be compared – the enclosure also replaces other parts of the IT infrastructure.
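A back-of-envelope example shows the shape of that comparison – all the prices here are hypothetical placeholders I’ve invented for illustration, not HP list prices:

```python
# Hypothetical figures: the enclosure is a fixed cost (chassis, power,
# switches), but each blade is cheaper than an equivalent rack server
# because it shares that infrastructure.
enclosure_cost = 5000     # chassis + power + interconnect modules
blade_cost = 4000         # per-blade price
rack_server_cost = 5500   # standalone server + its own NICs/HBAs/cabling

for n in range(1, 17):
    blades_total = enclosure_cost + n * blade_cost
    racks_total = n * rack_server_cost
    if blades_total <= racks_total:
        print(f"Break-even at {n} servers")  # 4, with these numbers
        break
```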
Of course, all of this relates to HP’s c-class blades and it’s still possible to purchase the old p-class HP blades, which use a totally different architecture. Other OEMs (e.g. Dell and IBM) also produce blade systems and I’d really like to see a universal enclosure that works with any OEM’s blade – in the same way that I can install any standard rack-format equipment in (almost) any rack today. Unfortunately, I can’t see that happening any time soon…