Developing IT architecture skills

In a recent post, I looked at the topic of IT architecture from the perspective of what it is and why it’s not all about technology. For brevity, I decided to hold off on the next logical step, which is developing the associated skills. So now it’s time to revisit the topic.

What skills does an Architect need?

In my first post, I made the distinction between Technical, Solution and Enterprise Architects and it’s clear that the skills, experience and knowledge will vary across roles. There are some common things we can look at though, including:

  • Industry skills frameworks.
  • Formal certifications.
  • Other skills and knowledge.

It’s quite common to talk about T-shaped individuals, as they grow out of their domain expertise and gain some breadth. An Architect really does need to be more than T-shaped. They need to know a little bit about a lot of things – to be comb-shaped. If it’s not immediately clear what I mean, imagine a comb, with each tooth representing some knowledge on a different topic.

Skills Framework for the Information Age (SFIA)

Skills Framework for the Information Age (SFIA) claims to be “the global skills and competency framework for a digital world”. I’m not a fan. In my opinion, it’s overly wordy and feels like it was designed by committee. It’s also true that part of the reason I chose to leave a previous employer was a decision about SFIA grading, so I may be bitter.

My current employer also uses SFIA, and I can see the advantages when selling into the UK public sector. And I do have to concede that the SFIA skills definitions are actually useful when defining Solution Architect and Enterprise Architect roles:

It’s interesting that Enterprise and Business Architecture is labelled STPL. Perhaps it was called something like Strategic Planning in the past? That would be very appropriate (although Strategic Planning is labelled ITSP… so who knows).

There are other skills that may be useful for an architect. Candidates include:

And, depending on the organisation (e.g. working for a service provider, rather than in-house), maybe Consultancy (CNSL) too.

But this is where SFIA falls down. It’s so broad and encompassing that we don’t know where to stop: IT Management (ITMG)? Innovation (INOV)? Information Security (SCTY)? We could spend forever assessing people against many skills, but does it really move us forwards?

What SFIA does, though, is provide a reference for what these terms mean and some pointers as to what to expect at different levels.

Digital, Data and Technology (DDaT) Professional Capability Framework

Because we can never have too many standards, the UK Government came up with another framework for defining roles and the skills that are needed to perform them. That framework is known as DDaT – the Digital, Data and Technology Professional Capability Framework. Unlike SFIA, DDaT is quite light, but possibly more relevant to the “digital” world.

DDaT doesn’t define a Solution Architect or an Enterprise Architect. Instead, it further muddies the water by defining various levels of Technical Architect, as well as Data Architect, Network Architect, and Security Architect. It also has a role of Technical Specialist Architect which seems to be some kind of ill-defined catch-all. I would suggest that the DDaT Technical Specialist Architect is some sort of specialist and probably not an Architect!

The “value” of certifications

Certifications are not everything, but they can help demonstrate having achieved a certain level of knowledge/experience. It’s worth noting that certifications are important to managed service partners as they are a factor in the partner competencies that a company holds. For some roles, they may also be important at an individual level, but they become less so as we tread the Architect development path.

Any technical certifications required for an architecture role will depend on context. The team I lead works predominantly with clients who have significant investments in Microsoft technology. For this reason, I ask every Architect (at all levels) to study for and pass all of the Microsoft Fundamentals series of exams.

At the time of writing there are six exams:

A seventh exam is in beta:

Some team members may have other certifications, but this is the base level. If I led an architecture practice working with clients who had a different focus, the list might be focused on Amazon Web Services (AWS) or another technology stack entirely.

It’s also helpful to have awareness of competing technologies. In my case, I want the Microsoft-focused Architects to also have a foundational knowledge of AWS. That’s why I became an AWS Certified Cloud Practitioner (CCP).

But this is where the technical list ends. There is a Microsoft certification named Microsoft Azure Solutions Architect Expert but I consider it to be an oxymoron. It is my experience that the exams that lead to the award of this certification focus too much on detailed knowledge of Azure (the expert part) and have very little to do with architecture. They have breadth and depth within a single domain and in order to maintain the knowledge, it would be necessary to spend a lot of time working with the technology. That is certainly valuable in a technical role but it is not solution architecture (as discussed in my previous post).

Whilst the BCS has some examples of solution architecture certifications, they require attendance of formal training, which drives up the cost considerably. In fairness, there is a self-study option but it has no published learning path – just a reading list. Aside from these, I’ve struggled to identify good solution architecture certifications (I’m open to suggestions) but there is lots of reading that can be done.

One such area is around enterprise integration patterns but it’s worth absorbing the contents of the major cloud providers’ Architecture Centers too:

It’s also worth gaining some experience in other domains – for example, IT Service Management. This is why I have included ITIL Foundation certification on the development path for my team. I’m also considering whether a Business Analysis certification might be worthwhile, as well as knowledge of something like Lean Six Sigma.

And then there’s knowledge of Enterprise Architecture frameworks. It’s now 9 years since I took my TOGAF exam and I’ve not really had much opportunity to deploy that knowledge. Far more useful has been building out a set of artefacts around Svyatoslav Kotusev (@Kotusev)’s Enterprise Architecture on a Page.

I was also a Chartered IT Professional (CITP) for a while. Unfortunately, I found that it was not recognised or required by clients and it offered few, if any, benefits. It cost me quite a lot (both to become certified and for the ongoing fees). I couldn’t justify it to myself, so why expect my employer to pay? For that reason, I let it lapse.

It’s probably clear by now that I have mixed feelings about the value of certification for Architects. It can certainly help in some areas and it can demonstrate some knowledge (literally, getting the badge). But, for a Solution or Enterprise Architect, I’d be more interested to hear about their methods and approaches than the certifications they hold.

Other skills and knowledge

Soft skills are critical to success as an Architect and these are wide-ranging. First on the list are excellent written and verbal communication skills. The Architect needs to be able to present confidently, to a wide audience – both technical and non-technical. They need to forge relationships, manage stakeholders and exercise powers of persuasion and influence. They also need to be commercially aware. There’s a really good list of soft skills for architects in this post by Arnon Rotem-gal-oz (@arnonrgo). Arnon’s post is thirteen years old now but seems to have stood the test of time. In it, he outlines six key areas:

  • Leadership: acting as a technical manager for a project, influencing, guiding and directing others (who are often not direct reports) towards the same shared vision.
  • Systems thinking: understanding the “big picture” as well as the interdependence and interactions between components – how they operate independently and as a whole.
  • Strategic thinking: understanding the context that the solution sits within – how it supports the organisation’s goals and meets different stakeholders’ needs – balancing these needs to create a robust solution.
  • Organisational politics: Architects are logical thinkers – their decision making is based on analysing a set of options and picking the best one – except that in any organisation there will be other factors that apply non-rational influences to the decision making processes.
  • Communications: I’ve mentioned this already, but I cannot overstate its importance. It’s not just about writing and presenting either – storytelling and negotiation are also key.
  • Human relations: covering team dynamics (how the team behaves at different stages of development), personal dynamics (motivations and personality types) and pragmatism (recognising when to let others have their way).

When I was writing this, I asked Matt Ballantine (@Ballantine70) for his input. Matt reminded me of a post he wrote about CIO traits you won’t see on a job spec: curiosity; humility; and empathy. These are just as relevant to an Architect.

We need the curiosity to explore new things – not just new technologies but also new ways of doing business. Question why things are done as they are. What if we changed this? Could a new process help there? Could technology replace this manual process? What about our business partners and the way we integrate with them? Do the services we provide meet the needs of our end customers?

Humility is important too. The point is that the Architect is not an expert. They know a lot and they know how to put the pieces together – or, if not, they can work out an approach. It’s fine to say “I don’t know” sometimes. Better still, say “I don’t know, but I have an idea how to find out” – just don’t say it too often. A little humility is good, but not so much that it calls your ability into doubt.

Empathy is essential. Understand the impact that what you’re doing has on others. Maybe the reason they are resistant to the change you want to implement is that they are concerned about the impact. Try to see things from other perspectives: is there another way to do it? Can we change something to make it more acceptable?

Back in 2017, I wrote about people change management and the ADKAR model. We need to understand people’s motivations in order to drive successful changes. Take them on a journey and help them to adapt. As Matt says in his post:

“If you are going to be successful in delivering new things, you have to be able to manage the balance of changing technologies and changing people’s behaviours.”

Ideal CIO Personality Traits, Matt Ballantine

Creating a development path

So I’ve come up with a huge shopping list of skills for our “comb-shaped” Architect and, clearly, they won’t all appear overnight. It may be useful to think about architecture skills in the form of a development path.

The first thing to understand is where we’re coming from. What skills does the person who wants to be an Architect already possess? Have they got experience in another role that could be relevant – perhaps they’ve already worked in IT Service, or Project Management, or as a Business Analyst?

Once you know the starting point, look at the target. Define the skills and experience that are needed, identify the gaps, and plan to fill them.

There’s a common misconception that Architects need to be über-technical, but I would say not. (This is not my first rodeo – I have been burned making this point on Twitter in the past.) Some people will argue that domain knowledge is needed – and that is true, but not at a deep level. And technology can be learned.

In my experience, some Architects can find it difficult to extract themselves from technology, whereas taking someone from a different discipline (such as Business Analysis or Technical Project Management) and training them to understand how the parts fit together can work well.

What should the development path include? I’d suggest that there should be a mixture of things, but it could be something like this:

Technical Architect

A Technical Architect has broad domain knowledge. For example, a Software Architect should look to develop an understanding of software development tools, methods and approaches.

Their experience may expand across multiple industries/market sectors.

They may have expertise in one technology stack (e.g. Microsoft .NET or Java) but they should be actively looking to increase their breadth and this will necessarily impact their depth of knowledge.

They should be able to communicate concepts and ideas to technical and non-technical stakeholders in a manner that’s easily understood, and they should understand the impact of their part of the solution on others.

Solution Architect

Able to guide the design and development of solutions that meet current and future business needs, eliciting requirements and determining the necessary logical components to create a solution.

A Solution Architect will take account of relevant architectures, strategies, policies, standards and practices. They will also ensure that existing and planned solution components remain compatible.

They will have experience across multiple domains and, normally, experience across multiple industries/sectors.

They will have excellent written and verbal communications skills. They will be able to present complex information, clearly, to both technical and non-technical audiences. They should demonstrate many of the soft skills discussed above.

They may also have experience of IT Service Management, Business Analysis or another discipline.

Enterprise Architect

Able to create and maintain business and enterprise architectures, including the key principles, methods and models that describe the organisation’s future state and enable its evolution. These architectures support the formation of the constraints, standards and guiding principles needed to define, assure and govern that evolution, facilitating change in the organisation’s structure, business processes, systems and infrastructure to achieve a predictable transition to the intended state.

An Enterprise Architect will interpret business goals and drivers; and translate business strategy and objectives into an “operating model”. This may include: the strategic assessment of current capabilities; the identification of required changes in capabilities; and the description of relationships between people, organisation, service, process, data, information, technology and the external environment.

An EA will have experience with one or more EA frameworks and will regularly make use of associated artefacts.

They should have excellent leadership skills, and should be able to inspire people in transforming the organisation to move to the target operating model. They must be comfortable in discussions with the C-Suite. To do this, they will need a complete toolkit of soft skills and the ability to deploy them appropriately.

In conclusion…

I hope this (rather long) post has given some insight into some of the skills that may be useful for a career in (IT) architecture and how to develop them.

Please let me know your thoughts – especially if you know of resources that can be useful to others who are developing skills as an architect.

Thanks to Matt Ballantine for his help in collecting my thoughts and ideas into something that almost flows (even if it is a bit long for a blog post).

Featured image by Crisco 1492 from Wikimedia Commons, licenced under a Creative Commons Attribution-Share Alike 4.0 International licence (CC BY-SA 4.0).

What is IT architecture?

I’ve been having a few conversations recently about what IT architecture is – both from the perspective of developing the practice that I run at risual and from conversations with others who are looking to develop an IT architecture capability in their organisations. It’s become abundantly clear to me that the term means different things to different people and is often abused. And that’s without considering that leading figures in construction would argue that IT folks shouldn’t be using the name “Architect” at all!

To be fair, it seems that even (construction) Architects are arguing over whether someone is an Architect or not. Which is somewhat amusing to me as the collective noun for architects is… an “argument of Architects”. I kid you not!

In IT, for some organisations, “Architect” has become the title for a senior technical person who understands large chunks of the organisation’s IT. But that’s not what it should be. Architecture is not design. But design is part of architecture. Confused? Bear with me, please.


Let’s look at the definition of the word “architecture”:

architecture /ˈɑːkɪtɛktʃə/ noun

1. the art or practice of designing and constructing buildings. “schools of architecture and design”

[e.g.] the style in which a building is designed and constructed, especially with regard to a specific period, place, or culture. “Georgian architecture”.

2. the complex or carefully designed structure of something. “the chemical architecture of the human brain”.

[e.g.] the conceptual structure and logical organization of a computer or computer-based system.

Origin: mid 16th century: from Latin architectura, from architectus.

Oxford Languages

So there we have it: architecture is about conceptual structures and logical organisation. It is not about detailed design. Ergo, an IT Architect is not just an experienced technician.

Unfortunately though, if we look at the definition of an “architect” it all starts to fall apart. The noun for the computing sense of the word is defined as “a person who designs hardware, software, or networking applications and services of a specified type for a business or other organization” and the verb (if you can consider that architecting is really a thing – I don’t!) is “design and configure (a program or system)”. Basically, in computing, we seem to be using architect as a synonym for designer!

Different types of architect

Maybe looking at definitions of words was not as helpful as it should have been. So let’s consider the types of Architect we have in the IT world:

Technical Architect

Sometimes referred to as a Domain Architect (reflecting that their knowledge relates to a particular area, or domain), the Technical Architect is probably the most common use of the architecture term in IT.

There may be variations reflecting a given domain – for example Software Architect, Data Architect, Infrastructure Architect, Security Architect.

Each of these will be able to take a logical building block (see Solution Architect) and define how this part of the solution should be achieved.

There will be elements of design – but that design should be broader than just technology and will often rely on specialists for expert knowledge.

Solution Architect

Solution Architects are concerned with solutions. By that, I mean that they view a problem as a set of requirements that need to be solved by a combination of people, business processes or technology and come up with a solution.

The solution will be constructed of logical building blocks. For example, a requirement to create a website may include components such as:

  • Identity and access
  • Mobile application/browser
  • Web server
  • Database server
  • Operating system
  • Hardware
  • Network

Note that none of these are technologies. It doesn’t say Active Directory, Chrome, Apache and MySQL, running on Linux on VMware and with TCP/IP over Ethernet using Cisco switches and routers. Those technical decisions may be taken later, when deciding how to address the requirements and fill the building blocks, but the logical blocks themselves are conceptual – not detail!
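To make this concrete, here’s a minimal sketch (the technology names are purely illustrative, not recommendations) of the idea that the logical blocks come first and technology choices fill them later:

```python
# A minimal sketch: logical building blocks are defined first;
# technology choices are a later, separate mapping.
# All technology names here are illustrative examples only.

logical_blocks = [
    "Identity and access",
    "Mobile application/browser",
    "Web server",
    "Database server",
    "Operating system",
    "Hardware",
    "Network",
]

# Technical decisions fill the blocks later - they can change
# without altering the logical shape of the solution.
technology_choices = {
    "Identity and access": "Active Directory",
    "Web server": "Apache",
    "Database server": "MySQL",
    "Operating system": "Linux",
}

# Blocks without a mapping yet are still valid architecture:
unresolved = [b for b in logical_blocks if b not in technology_choices]
print(unresolved)
```

The point of the separation is that swapping MySQL for another database changes the mapping, not the architecture.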

The Solution Architect will lead experts and combine their recommendations/experience into a coherent solution to a business problem. They should also consider the whole lifecycle of the solution, from development, into service (operation) and, eventually, retirement.

This post from CIO Magazine is one view of a Solution Architect, although it could be considered to be describing a little bit of a unicorn…

Enterprise Architect

Often, architecture at enterprise scale is mistakenly referred to as Enterprise Architecture (EA) but Enterprise Architecture is a discipline in its own right. An enterprise-scale solution still requires a Solution Architect.

The Enterprise Architect is (or should be) interested in:

  • Business
  • Data
  • Application systems
  • Technology

Note that technology is just a small part of that (and last to be listed), although technology is probably pervasive in the others too.

I’ve read some great posts recently on what an Enterprise Architect is – so instead of re-defining it here, I’ll signpost to these:

There are recognised EA frameworks, such as TOGAF and the Zachman Framework, but one of the easiest ways to understand the work of an EA is to look at Svyatoslav Kotusev (@Kotusev)’s Enterprise Architecture on a Page.

Often, the Enterprise Architects are not part of the IT function (but the Technical and Solution Architects are). Instead, the EAs are part of the organisation’s strategic planning function.

In reality, many people will move up and down this spectrum. Sometimes, a Solution Architect will have to “roll up their sleeves” and help out with a technical situation. Or maybe they will take on some of the Enterprise Architect’s role – particularly if there is no formal Enterprise Architecture function within an organisation. But two things are clear in my mind:

  • Architecture is far more than just design.
  • Enterprise Architecture is not the same as architecture-at-enterprise-scale.

Sitting in ivory towers?

I’ve seen technical people with disdain for Architects because they consider them to have little practical understanding of their world. Similarly, I’ve seen “Architects” struggle to extract themselves from the mire of technical detail – particularly if that’s what they love to work with.

It’s important to be mindful of the relevance of the architecture work that’s taking place. Principles are all well and good if they make sense and are followed – but, if the Architects sit in an ivory tower generating paperwork that is of no relevance to the rest of the organisation, they will be doomed to failure. Similarly, we talk a lot about technical debt – sometimes that debt needs to be paid off, and sometimes it’s there for a good reason.

Perhaps a better way to look at it is this view, from the Enterprise Architecture Professional Journal, in which the author cautions against the argument of Architects becoming an arrogance of Architects.

Like so many things in the world of work, communication is key. And developing good architecture skills requires good written and verbal communication skills. Then there’s stakeholder management, building relationships, commercial awareness and much more.

Next time…

This post is long enough already and the topic of developing skills is pretty involved. So, in my next post, I’ll look at some ways we can develop (IT) architecture skills – and at why there seems to be so little value in (IT) architecture certifications.

Featured image by Donna Kirby from Pixabay.

Architecture for the Microsoft platform: defining standards, principles, reference architecture and patterns

IT architecture is a funny old game… you see, no-one does it the same way. Sure, we have frameworks and there’s a lot of discussion about “how” to “architect” (is that even a verb?) but there is no defined process that I’m aware of and that’s broadly adopted.

A few years ago, whilst working for a large systems integrator, I was responsible for parts of a technology standardisation programme that was intended to use architecture to drive consistency in the definition, design and delivery of solutions. We had a complicated system of offerings, a technology strategy, policies, architectural principles, a taxonomy, patterns, architecture advice notes, “best practice”, and a governance process with committees. It will probably come as no surprise that there was a fair amount of politics involved – some “not invented here” and some skunkworks projects with divisions defining their own approach because the one from our CTO Office “ivory tower” didn’t fit well.

I’m not writing this to bad-mouth a previous employer – that would be terribly bad form – but I honestly don’t believe that the scenario I’ve described would be significantly different in any large organisation. Politics is a fact of life when working in a large enterprise (and some smaller ones too!). And what we created was, at its heart, sound. I might have preferred a different technical solution to manage it (rather than a clunky portfolio application based on SharePoint lists and workflow) but I still think the principles were solid.

Fast-forward to 2016 and I’m working in a much smaller but rapidly-growing company and I’m, once again, trying to drive standardisation in our solutions (working with my peers in the Architecture Practice). This time I’m taking a much more lightweight approach and, I hope, bringing key stakeholders in our business on the journey too.

We have:

  • Standards: levels of quality or attainment used as a measure or model. These are what we consider as “normal”.
  • Principles: fundamental truths or propositions that serve as a foundation for a system or behaviour. These are the rules when designing or architecting a system – our commandments.

We’ve kept these simple – there are a handful of standards and around a dozen principles – but they seem to be serving us well so far.

Then, there’s our reference architecture. The team has defined three levels:

  • An overall reference model that provides a high level structure with domains around which we can build a set of architecture patterns.
  • The technical architecture – with an “architecture pattern” per domain. At this point, the patterns are still technology-agnostic – for example a domain called “Datacentre Services” might include “Compute”, “Storage”, “Location”, “Scalability” and so on. Although our business is purely built around the Microsoft platform, any number of products could theoretically be aligned to what is really a taxonomy of solution components – the core building blocks for our solutions.
  • “Design patterns” – this is where products come into play, describing the approach we take to implementing each component, with details of what it is, why it would be used, some examples, one or more diagrams with a pattern for implementing the solution component and some descriptive text including details such as dependencies, options and lifecycle considerations. These patterns adhere to our architectural standards and principles, bringing the whole thing full-circle.

It’s fair to say that what we’ve done so far is about technology solutions – there’s still more to be done to include business processes and on towards Enterprise Architecture but we’re heading in the right direction.

I can’t blog the details here – this is my personal blog and our reference architecture is confidential – but I’m pleased with what we’ve created. Defining all of the design patterns is laborious but will be worthwhile. The next stage is to make sure that all of the consulting teams are on board and aligned (during which I’m sure there will be some changes made to reflect the views of the guys who live and breathe technology every day – rather than just “arm waving” and “colouring in” as I do!) – but I’m determined to make this work in a collaborative manner.

Our work will never be complete – there’s a balance to strike between “standardisation” and “innovation” (an often mis-used word, hence the quotation marks). Patterns don’t have to be static – and we have to drive forward and adopt new technologies as they come on stream – not allowing ourselves to stagnate in the comfort of “but that’s how we’ve always done it”. Nevertheless, I’m sure that this approach has merit – if only through reduced risk and improved consistency of delivery.

Image credit: Blueprint, by Will Scullin on Flickr (used under a Creative Commons Attribution 2.0 Generic licence).

Technology standardisation – creating consistency in solution architecture

One of the things about my current role is that I can’t really blog much about what I do at work – it’s mostly not suitable for sharing outside the company.  That’s why I was pleased when my manager suggested I create a white paper outlining the technology standardisation approach that Fujitsu takes in the UK and Ireland. That is, in a nutshell, what I’ve been working to establish for the last year.

The problem is that, without careful control, the inherent complexity of integrating products and services from a variety of sources can be challenging and costly. Solution architects and designers are trained to create innovative solutions to problems but, all too often, those solutions involve bespoke elements or unproven technologies that increase risk and drive up the cost of delivery. At the same time, there are pressures to reduce costs whilst maintaining business benefit – pressures that run completely contrary to the idea of bespoke systems designed to meet each and every customer’s needs. Part of the answer lies in standardisation – or, as I like to think of it, creating consistency in solution architecture.

My technology standardisation paper was published last month and can be found on the Fujitsu UK website.

I’ll be moving on to something new in a short while (watch this space and I’m sure I’ll be able to talk about it soon), so it’s great to look back and say “that’s what I’ve been doing”.

Where’s the line between [IT] architecture and design?

This week, I’m attending a training course on the architecture and design of distributed enterprise systems and yesterday morning, somewhat mischievously, I asked the course instructor where he draws the distinction between architecture and design.

We explored an analogy based around a traditional (building) architect, in which I suggested an architect knows the methods and materials to use but would not actually carry out the construction work. But the instructor, Selvyn Wright, made an interesting point – if we admire a building for its stunning architecture, often we’re talking about the design. Instead, he suggested that architecture is a style or philosophy whereas design is about the detail.

In the UK IT industry, the last 10 years or so have seen a trend towards “architect” job roles in business and IT. Maybe this is because the UK is one of the few countries where to be a great engineer is discouraged and technical skills are undervalued, rather than being held up as a worthy profession.

In reality, there is a grey line between an architect and a designer and I had frequent conversations with my previous manager on the topic (usually whilst discussing my own development on the path towards enterprise architecture – another commonly mis-used term, probably best saved for another post). Regardless, architecture and design do get mixed (in an IT context) and there comes a point in the transition when one can look back and say “ah yes, now I understand”.

I suggest that point is when someone is able to abstract the logical capabilities of a system from the technology itself. At that point, they’ve probably crossed the line from a designer to an architect.

In other words, an architect can understand the business problem from a logical perspective and create one or more possible solutions. The required capabilities are the logical model and the physical elements are those which need be bought, built or reused to match those capabilities. Architecture is about recognising and employing patterns.

Having decided what an architect is, we come to the issue of design. Indeed, design is a word more often used in a creative sense – a website designer, for example, has a distinct set of skills that are very different to those of a website developer. Ditto for a user interface designer or for a user experience designer. Within the context of the architect vs. designer debate, however, a designer can be viewed as someone whose task it is to work out how to configure the physical elements of the solution and create designs for elements of software applications, or of IT infrastructure.

Architect or designer, either way, I still struggle with the term “architected”. Designed seems to be a more grammatically correct use of English (adding further complexity to the design vs. architecture debate) but it seems increasingly common to architect (as a verb) these days…

Microsoft infrastructure architecture considerations: part 7 (data centre consolidation)

Over the last few days, I’ve written a series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, based on the MCS Talks: Enterprise Infrastructure series. Just to summarise, the posts so far have been:

  1. Introduction.
  2. Remote offices.
  3. Controlling network access.
  4. Virtualisation.
  5. Security.
  6. High availability.

In this final infrastructure architecture post, I’ll outline the steps involved in building an infrastructure for data centre consolidation. In this example the infrastructure is a large cluster of servers to run a virtual infrastructure; however, many of the considerations would be the same for a non-clustered or a physical solution:

  1. The first step is to build a balanced system – whilst it’s inevitable that there will be a bottleneck at some point in the architecture, by designing a balanced system the workloads can be mixed to even out the overall demand – at least, that’s the intention. Using commodity hardware it should be possible to provide a balance between cost and performance using the following configuration for the physical cluster nodes:
    • 4-way quad core (16 core in total) Intel Xeon or AMD Opteron-based server (with Intel VT/AMD-V and NX/XD processor support).
    • 2GB RAM per processor core minimum (4GB per core recommended).
    • 4Gbps Fibre Channel storage solution.
    • Gigabit Ethernet NIC (onboard) for virtual machine management and migration/cluster heartbeat.
    • Quad-port gigabit Ethernet PCI Express NIC for virtual machine access to the network.
    • Windows Server 2008 x64 Enterprise or Datacenter edition (Server Core installation).
  2. Ensure that Active Directory is available (at least one physical DC is required in order to get the virtualised infrastructure up and running).
  3. Build the physical servers that will provide the virtualisation farm (16 servers).
  4. Configure the SAN storage.
  5. Provision the physical servers using System Center Configuration Manager (again, a physical server will be required until the cluster is operational) – the servers should be configured as a 14 active/2 passive node failover cluster.
  6. Configure System Center Virtual Machine Manager for virtual machine provisioning, including the necessary PowerShell scripts and the virtual machine repository.
  7. Configure System Center Operations Manager (for health monitoring – both physical and virtual).
  8. Configure System Center Data Protection Manager for virtual machine snapshots (i.e. use snapshots for backup).
  9. Replicate snapshots to another site within the SAN infrastructure (i.e. provide location redundancy).
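As a back-of-the-envelope check, the headline capacity of the cluster described above works out as follows (a quick sketch using the figures from the list – 16 nodes as a 14 active/2 passive failover cluster, 16 cores per node, at the recommended 4GB of RAM per core):

```python
# Capacity arithmetic for the consolidation cluster described above.
nodes_total = 16
nodes_passive = 2            # 14 active / 2 passive failover cluster
cores_per_node = 16          # 4-way quad core
gb_ram_per_core = 4          # recommended sizing (2GB per core minimum)

nodes_active = nodes_total - nodes_passive
ram_per_node_gb = cores_per_node * gb_ram_per_core   # RAM in each node
usable_ram_gb = nodes_active * ram_per_node_gb       # RAM on active nodes

print(ram_per_node_gb, usable_ram_gb)  # 64 896
```

So each node carries 64GB and the active portion of the cluster offers 896GB of RAM for virtual machine workloads, with two nodes held in reserve for failover.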

This is a pretty high-level view, but it shows the basic steps in order to create a large failover cluster with the potential to run many virtualised workloads. The basic principles are there and the solution can be scaled down or out as required to meet the needs of a particular organisation.

The MCS Talks series is still running (and there are additional resources to complement the first session on infrastructure architecture). I also have some notes from the second session on core infrastructure that are ready to share so, if you’re finding this information useful, make sure you have subscribed to the RSS feed!

Microsoft infrastructure architecture considerations: part 6 (high availability)

In this instalment of the series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, based on the MCS Talks: Enterprise Infrastructure series, I’ll look at some of the architecture considerations relating to providing high availability through redundancy in the infrastructure.

The whole point of high availability is ensuring that there is no single point of failure. In addition to hardware redundancy (RAID on storage, multiple power supplies, redundant NICs, etc.) consideration should be given to operating system or application-level redundancy.

For some applications, redundancy is inherent:

  • Active Directory uses a multiple-master replicated database.
  • Exchange Server 2007 offers various replication options (local, clustered or standby continuous replication).
  • SQL Server 2008 has enhanced database mirroring.

Other applications may be more suited to the provision of redundancy in the infrastructure – either using failover clusters (e.g. for SQL Server 2005, file and print servers, virtualisation hosts, etc.) or with network load balancing (NLB) clusters (e.g. ISA Server, Internet Information Services, Windows SharePoint Services, Office Communications Server, read-only SQL Server, etc.) – in many cases the choice is made by the application vendor as some applications (e.g. ISA Server, SCOM and SCCM) are not cluster-friendly.

Failover clustering (the new name for Microsoft Cluster Service) is greatly improved in Windows Server 2008, with simplified support (no more cluster hardware compatibility list – replaced by a cluster validation tool, although the hardware is still required to be certified for Windows Server 2008), support for more nodes (the maximum is up from 8 to 16), support for multiple-subnet geoclusters and IPv6, as well as new management tools and enhanced security.

In the final post in this series, I’ll take a look at how to build an infrastructure for data centre consolidation.

Microsoft infrastructure architecture considerations: part 5 (security)

Continuing the series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, based on the MCS Talks: Enterprise Infrastructure series, in this post I’ll look at some of the infrastructure architecture considerations relating to security.

The main security challenges which organisations are facing today include: management of access rights; provisioning and de-provisioning (with various groups of users – internal, partners and external); protecting the network boundaries (as there is a greater level of collaboration between organisations); and controlling access to confidential data.

Most organisations today need some level of integration with partners and the traditional approach has been one of:

  • NT Trusts (rarely used externally) – not granular enough.
  • Shadow accounts with matching usernames and passwords – difficult to administer.
  • Proxy accounts shared by multiple users – with no accountability and a consequential lack of security.

Federated rights management is a key piece of the “cloud computing” model and allows for two organisations to trust one another (cf. an NT trust) but without the associated overheads – and with some granularity. The federated trust is loosely coupled – meaning that there is no need for a direct mapping between users and resources – instead an account federation server exists on one side of the trust and a resource federation server exists on the other.
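The loose coupling can be illustrated with a hedged sketch in Python: the account federation side issues a signed claim about the user, and the resource side validates the signature rather than holding its own copy of the account. All names here are invented, and a real federation service exchanges signed security tokens (e.g. SAML) rather than raw HMACs – this just shows why no direct user-to-resource mapping is needed:

```python
# Sketch of a loosely-coupled federated trust. Hypothetical names; real
# federation uses signed security tokens, not raw HMACs.
import hashlib
import hmac

# Established once, when the federation between the two organisations is set up.
SHARED_TRUST_KEY = b"established-at-federation-setup"

def issue_claim(user: str, role: str) -> tuple[str, str]:
    """Account federation server: sign a claim about one of its own users."""
    claim = f"{user}|{role}"
    signature = hmac.new(SHARED_TRUST_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim, signature

def accept_claim(claim: str, signature: str) -> bool:
    """Resource federation server: trust the signature, not a local account."""
    expected = hmac.new(SHARED_TRUST_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

The resource side never sees a password and never provisions a shadow or proxy account – it only needs to trust claims signed under the federation key.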

As information is shared with customers and partners traditional location-based methods of controlling information (firewalls, access control lists and encryption) have become ineffective. Users e-mail documents back and forth, paper copies are created as documents are printed, online data storage has become available and portable data storage devices have become less expensive and more common with increasing capacities. This makes it difficult to set a consistent policy for information management and then to manage and audit access. It’s almost inevitable that there will be some information loss or leakage.

(Digital) rights management is one solution – most people are familiar with DRM on music and video files from the Internet and the same principles may be applied to IT infrastructure. Making use of 128-bit encryption together with policies for access and usage rights, rights management provides persistent protection to control access across the information lifecycle. Policies are embedded within the document (e.g. for the ability to print, view, edit, or forward a document – or even for its expiration) and access is only provided to trusted identities. It seems strange to me that we are all so used to the protection of assets with perceived worth to consumers but that commercial and government documentation is so often left unsecured.
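The “policy travels with the document” idea can be sketched very simply (a simplified illustration – real rights management also encrypts the payload and verifies identities cryptographically, both omitted here; the field names are invented):

```python
# Toy sketch of rights management: usage rights and an expiry date are
# embedded alongside the content and checked at the point of access.
from datetime import date

protected_doc = {
    "content": "Commercially sensitive figures",
    "policy": {
        "allowed_users": {"alice@example.com"},   # trusted identities only
        "rights": {"view"},                       # no print, edit or forward
        "expires": date(2009, 12, 31),            # policy can expire the document
    },
}

def access(doc: dict, user: str, action: str, today: date) -> str:
    """Evaluate the embedded policy before releasing the content."""
    policy = doc["policy"]
    if today > policy["expires"]:
        return "denied: document expired"
    if user not in policy["allowed_users"]:
        return "denied: untrusted identity"
    if action not in policy["rights"]:
        return f"denied: no '{action}' right"
    return doc["content"]
```

Because the policy is part of the document itself, the protection persists when the file is e-mailed, copied to a USB stick or uploaded to online storage – unlike firewalls and ACLs, which only protect a location.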

Of course, security should be all-pervasive, and this post has just scratched the surface, looking at a couple of challenges faced by organisations as the network boundaries are eroded by increased collaboration. In the next post of this series, I’ll take a look at some of the infrastructure architecture considerations for providing high availability solutions.

Microsoft infrastructure architecture considerations: part 4 (virtualisation)

Continuing the series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, based on the MCS Talks: Enterprise Infrastructure series, in this post I’ll look at some of the architectural considerations for using virtualisation technologies.

Virtualisation is a huge discussion point right now but before rushing into a virtualisation solution it’s important to understand what the business problem is that needs to be solved.

If an organisation is looking to reduce data centre hosting costs through a reduction in the overall heat and power requirements then virtualisation may help – but if they want to run applications that rely on legacy operating system releases (like Windows NT 4.0) then the real problem is one of support – the operating system (and probably the application too) are unsupported, regardless of whether or not you can physically touch the underlying platform!

Even if virtualisation does look like it could be a solution (or part of one), it’s important to consider management – I often come up against a situation whereby, for operational reasons, virtualisation is more expensive because the operations teams see the host machine (even if it is just a hypervisor) as an extra server that needs to be administered. That’s a rather primitive way to look at things, but there is a real issue there – management is the most important consideration in any virtualisation solution.

Microsoft believes that it has a strong set of products when it comes to virtualisation, splitting the technologies out as server, desktop, application and presentation virtualisation, all managed with products under the System Center brand.

Microsoft view of virtualisation

Perhaps the area where Microsoft is weakest at the moment (relying on partners like Citrix and Quest to provide a desktop broker service) is desktop virtualisation. Having said that, it’s worth considering the market for a virtualised desktop infrastructure – with notebook PC sales outstripping those of desktops it could be viewed as a niche market. This is further complicated by the various alternatives to a virtual desktop running on a server somewhere: remote boot of a disk-less PC from a SAN; blade PCs (with an RDP connection from a thin client); or a server-based desktop (e.g. using presentation virtualisation).

Presentation virtualisation is also a niche technology as it failed to oust so-called “thick client” technologies from the infrastructure. Even so, it’s not uncommon (think of it as a large niche – if that’s not an oxymoron!) and it works particularly well in situations where a large volume of data needs to be accessed in a central database, as the remote desktop client is local to the data – rather than to the (possibly remote) user. This separation of the running application from the point of control allows for centralised data storage and a lower cost of management for applications (including session brokering capabilities). Using new features in Windows Server 2008 (or third-party products on older releases of Windows Server), this may be further enhanced with gateways for RPC/HTTPS access (avoiding the need for a full VPN solution) and web access/RemoteApp sessions (terminal server sessions which appear as locally-running applications).

The main problem with presentation virtualisation is incompatibility between applications, or between the desktop operating system and an application (which, for many, is the main barrier to Windows Vista deployment) – that’s where application virtualisation may help. Microsoft Application Virtualization (App-V – formerly known as SoftGrid) attempts to solve this issue of application-to-application incompatibility as well as aiding application deployment (with no requirement to test for application conflicts). To do this, App-V virtualises the application configuration (removing it from the operating system) and each application runs in its own runtime environment with complete isolation. This means that applications can run on clients without being “installed” (so it’s easy to remove unused applications) and allows administration from a central location.
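The isolation idea can be illustrated with a toy sketch (an analogy only – this is not how App-V is implemented): each application sees a private, virtualised copy of the configuration, so two applications that would otherwise conflict over a shared setting can run side by side without either being “installed” into the shared state.

```python
# Toy analogy for application virtualisation: per-application configuration
# isolation. Names and settings are invented for illustration.
system_config = {"runtime_version": "1.0"}

class VirtualisedApp:
    def __init__(self, name: str, overrides: dict):
        # Each app gets a private copy of the configuration plus its own
        # overrides - the shared system state is never modified.
        self.name = name
        self.config = {**system_config, **overrides}

# Two apps that would conflict over a shared setting now coexist.
app_a = VirtualisedApp("LegacyApp", {"runtime_version": "1.0"})
app_b = VirtualisedApp("ModernApp", {"runtime_version": "2.0"})
```

Removing an unused application is then just a matter of discarding its private environment – nothing in the shared configuration needs to be unwound.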

The latest version of App-V is available for a full infrastructure (Microsoft System Center Application Virtualization Management Server), a lightweight infrastructure (Microsoft System Center Application Virtualization Streaming Server) or in MSI-delivered mode (Microsoft System Center Application Virtualization Standalone Mode).

Finally, there is host (or server) virtualisation – the most common form of virtualisation but still deployed on only a fraction of the world’s servers – although there are few organisations that would not virtualise at least a part of their infrastructure, given a green-field scenario.

The main problems which host virtualisation can address are:

  • Optimising server investments by consolidating roles (driving up utilisation).
  • Business continuity management – a whole server becomes a few files, making it highly portable (albeit introducing security and management issues to resolve).
  • Dynamic data centre.
  • Test and development.

Most 64-bit versions of Windows Server have enterprise-ready virtualisation built in (in the shape of Hyper-V) and competitor solutions are available for a 32-bit environment (although most hardware purchased in recent years is 64-bit capable and has the necessary processor support). Windows NT is not supported on Hyper-V, however – so if there are legacy NT-based systems to virtualise, then Virtual Server 2005 R2 may be a more appropriate technology selection (NT 4.0 is still out of support, but at least it is a scenario that has been tested by Microsoft).

In the next post in this series, I’ll take a look at some of the infrastructure architecture considerations relating to security.

Microsoft infrastructure architecture considerations: part 3 (controlling network access)

Continuing the series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, based on the MCS Talks: Enterprise Infrastructure series, in this post, I’ll look at some of the considerations for controlling access to the network.

Although network access control (NAC) has been around for a few years now, Microsoft’s network access protection (NAP) is new in Windows Server 2008 (previous quarantine controls were limited to VPN connections).

It’s important to understand that NAC/NAP are not security solutions but are concerned with network health – assessing an endpoint and comparing its state with a defined policy, then removing access for non-compliant devices until they have been remediated (i.e. until the policy has been enforced).

The real question as to whether to implement NAC/NAP is whether or not non-compliance represents a business problem.

Assuming that NAP is to be implemented, then there may be different policies required for different groups of users – for example internal staff, contractors and visitors – and each of these might require a different level of enforcement; however, if the policy is to be applied, enforcement options are:

  • DHCP – easy to implement but also easy to avoid by using a static IP address. It’s also necessary to consider the healthcheck frequency as it relates to the DHCP lease renewal time.
  • VPN – more secure but relies on the Windows Server 2008 RRAS VPN so may require a third-party VPN solution to be replaced. In any case, full-VPN access is counter to industry trends as alternative solutions are increasingly used.
  • 802.1x – requires a complex design to support all types of network user and not all switches support dynamic VLANs.
  • IPSec – the recommended solution – built into Windows, works with any switch, router or access point, provides strong authentication and (optionally) encryption. In addition, unhealthy clients are truly isolated (i.e. not just placed in a VLAN with other clients to potentially affect or be affected by other machines). The downside is that NAP enforcement with IPSec requires computers to be domain-joined (so will not help with visitors’ or contractors’ PCs) and is fairly complex from an operational perspective, requiring implementation of the health registration authority (HRA) role and a PKI solution.
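Whichever enforcement method is chosen, the underlying assess/isolate/remediate flow is the same. A hedged sketch in Python (the policy items and endpoint fields are invented examples – NAP evaluates statements of health from system health agents, not a simple dictionary):

```python
# Sketch of the NAC/NAP flow: compare an endpoint's reported state against
# a health policy and isolate anything non-compliant until remediated.
# Policy items and endpoint fields are illustrative only.
health_policy = {
    "antivirus_up_to_date": True,
    "firewall_enabled": True,
}

def assess(endpoint: dict) -> tuple[str, list[str]]:
    """Return the access decision and the list of items needing remediation."""
    failures = [item for item, required in health_policy.items()
                if endpoint.get(item) != required]
    # Non-compliant devices are isolated for remediation, not simply refused.
    return ("grant" if not failures else "isolate", failures)
```

Note that this is a health decision, not an authentication one – which is exactly the distinction made above between network health and security.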

In the next post in these series, I’ll take a look at some of the architectural considerations for using virtualisation technologies within the infrastructure.