Processor area networking

Yesterday, I was at a very interesting presentation from Fujitsu-Siemens Computers. It doesn’t really matter who the OEM was – it was the concept that grabbed me, and I’m sure IBM and HP will also be looking at this and that Dell will jump on board once it hits the mass market. That concept was processor area networking.

We’ve all got used to storage area networks (SANs) in recent years – the concept being to separate storage from servers so that a pool of storage can be provided as and when required.

Consider an e-mail server with 1500 users and 100MB mailbox limits. When designing such a system, it is necessary to separate the operating system, database, database transaction logs and message transfer queues for recoverability and performance. The database might also be split for fast recovery of VIPs' mailboxes, but my basic need is to provide up to 150GB of storage for the database (1500 users x 100MB). Then another 110% of that capacity is required for database maintenance, and all of a sudden the required disk space for the database jumps to 315GB – and that doesn’t include the operating system, database transaction logs or message transfer queues!
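
To make the arithmetic concrete, here is a minimal Python sketch of the same capacity calculation; the figures are the ones quoted in this example, and the 110% maintenance headroom is the rule-of-thumb allowance for offline database maintenance rather than a value from any particular product:

```python
# Capacity arithmetic for the example e-mail server database.
users = 1500
mailbox_limit_mb = 100                         # 100MB mailbox limit per user

database_gb = users * mailbox_limit_mb / 1000  # 150GB (decimal GB, as used in the text)
maintenance_headroom = 1.10                    # an extra 110% for offline maintenance

total_gb = database_gb * (1 + maintenance_headroom)
print(f"Database: {database_gb:.0f}GB; with maintenance headroom: {total_gb:.0f}GB")
# Database: 150GB; with maintenance headroom: 315GB
```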

Single instance storage might reduce this number, as would the fact that most users won’t have a full mailbox, but most designers will provide the maximum theoretical capacity “just in case” because to provision it later would involve: gaining management support for the upgrade; procuring the additional hardware; and scheduling downtime to provide the additional storage (assuming the hardware is able to physically accommodate the extra disks).

Multiply this out across an organisation and that is a lot of storage sitting around “just in case”, increasing hardware purchase and storage management costs in the process. Then consider the fact that storage hardware prices are continually dropping and it becomes apparent that the additional storage could probably have been purchased at a lower price when it was actually needed.

Using a SAN, coupled with an effective management strategy, storage can be dynamically provisioned (or even deprovisioned) on a “just in time” basis, rather than specifying every server with extra storage to cope with anticipated future requirements. No longer is 110% extra storage capacity required on the e-mail server in case the administrator needs to perform offline defragmentation – they simply ask the SAN administrator to provision that storage as required from the pool of free space (which is still needed, but is smaller than the sum of all the free space on all of the separate servers across the enterprise).
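
As a toy illustration of why the shared pool can be smaller than the sum of per-server headroom, here is a hedged Python sketch; the server names, sizes and the assumption that only one maintenance job runs at a time are mine, purely for illustration:

```python
# Toy comparison of per-server "just in case" headroom versus a shared SAN pool.
# Server names and figures are illustrative assumptions, not from any real estate.
servers = {
    "mail-1": {"used_gb": 150, "headroom_gb": 165},  # 110% maintenance headroom
    "mail-2": {"used_gb": 120, "headroom_gb": 132},
    "file-1": {"used_gb": 300, "headroom_gb": 100},
}

# Direct-attached model: every server carries its own worst-case headroom.
das_headroom = sum(s["headroom_gb"] for s in servers.values())

# SAN model: one shared pool sized for the largest single demand, assuming
# (for the sake of the example) that only one maintenance job runs at a time.
san_pool = max(s["headroom_gb"] for s in servers.values())

print(f"Idle 'just in case' capacity with direct-attached storage: {das_headroom}GB")
print(f"Shared pool needed with a SAN: {san_pool}GB")
```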

Other advantages include the co-location of all mission-critical data (instead of it being spread around a number of diverse server systems) and the ability to manage that data effectively for disaster recovery and business continuity service provision. Experienced SAN administrators are required to manage the storage, but there are associated manpower savings elsewhere (e.g. no longer having to manage backups for a diverse set of servers, each with its own mission-critical data).

A SAN is only part of what Fujitsu-Siemens Computers are calling the dynamic data centre, moving away from the traditional silos of resource capability.

Processor area networking (PAN) takes the SAN storage concept and applies it to the processing capacity provided for data centre systems.

So, taking the e-mail server example further, it is unlikely that all of an organisation’s e-mail would be placed on a single server, and as the company grows (organically or by acquisition), additional capacity will be required. Traditionally, each server would be specified with spare capacity (within the finite constraints of the number of concurrent connections that can be supported) and, over time, new servers would be added to handle the growth. In an ideal world, mailboxes would be spread across a farm of inexpensive servers, with new capacity brought online rapidly and mailboxes moved between servers to marry demand with supply.

Many administrators will acknowledge that servers typically average only 20% utilisation. By removing all input/output (I/O) capabilities from the server, diskless processing units (effectively blade servers) can be provided; these are connected to control blades, which manage the processor area network and divert I/O to the SAN or the network as appropriate.
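
To show what that 20% figure implies for consolidation, here is a back-of-the-envelope Python sketch; the server count and the 70% target utilisation are illustrative assumptions of mine, not figures from the presentation:

```python
import math

# Back-of-the-envelope consolidation estimate based on ~20% average utilisation.
dedicated_servers = 50      # assumed size of a traditional one-role-per-server estate
average_utilisation = 0.20  # the typical average quoted above
target_utilisation = 0.70   # assumed target for the pooled blades, leaving room for peaks

# Aggregate demand expressed in "fully busy server" equivalents.
demand = dedicated_servers * average_utilisation
blades_needed = math.ceil(demand / target_utilisation)

print(f"{dedicated_servers} lightly loaded servers could be served by ~{blades_needed} pooled blades")
```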

Using such an infrastructure in a data centre, along with middleware (to provide virtualisation, automation and integration technologies), it is possible to move away from silos of resource and be completely flexible about how services are allocated to servers, responding to peaks in demand (while acknowledging that there will always be requirements for separation by business criticality or security).
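
For a flavour of the kind of decision that middleware automates, here is a deliberately simplified Python sketch of assigning diskless blades from a shared pool to a service as demand peaks and releasing them afterwards; the class, blade names and method names are hypothetical illustrations, not Egenera’s or Fujitsu-Siemens Computers’ API:

```python
from dataclasses import dataclass, field

@dataclass
class BladePool:
    """A naive model of a pool of identical, diskless processing blades."""
    idle: list = field(default_factory=lambda: ["blade-1", "blade-2", "blade-3"])
    assignments: dict = field(default_factory=dict)

    def scale_up(self, service: str) -> str:
        blade = self.idle.pop()                  # take any idle blade from the pool
        self.assignments.setdefault(service, []).append(blade)
        return blade                             # in reality, its image would boot from the SAN

    def scale_down(self, service: str) -> None:
        blade = self.assignments[service].pop()  # release one blade back to the pool
        self.idle.append(blade)

pool = BladePool()
pool.scale_up("e-mail")    # respond to a peak in mailbox demand
pool.scale_down("e-mail")  # hand the blade back once the peak has passed
```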

Egenera’s BladeFrame technology is one implementation of processor area networking and, last week, Fujitsu-Siemens Computers and Egenera announced an EMEA-wide deal to integrate Egenera BladeFrame technology with Fujitsu-Siemens servers.

I get the feeling that processor area networking will be an interesting technology area to watch. With virtualisation rapidly becoming accepted as an approach for flexible server provision (and not just for test and development environments), the PAN approach is a logical extension to this and it’s only a matter of time before PANs become as common as SANs are in today’s data centres.
