Architecture for the Microsoft platform: defining standards, principles, reference architecture and patterns

IT architecture is a funny old game… you see, no-one does it the same way. Sure, we have frameworks and there’s a lot of discussion about “how” to “architect” (is that even a verb?) but there is no defined process that I’m aware of that’s broadly adopted.

A few years ago, whilst working for a large systems integrator, I was responsible for parts of a technology standardisation programme that was intended to use architecture to drive consistency in the definition, design and delivery of solutions. We had a complicated system of offerings, a technology strategy, policies, architectural principles, a taxonomy, patterns, architecture advice notes, “best practice”, and a governance process with committees. It will probably come as no surprise that there was a fair amount of politics involved – some “not invented here” and some skunkworks projects with divisions defining their own approach because the one from our CTO Office “ivory tower” didn’t fit well.

I’m not writing this to bad-mouth a previous employer – that would be terribly bad form – but I honestly don’t believe that the scenario I’ve described would be significantly different in any large organisation. Politics is a fact of life when working in a large enterprise (and some smaller ones too!). And what we created was, at its heart, sound. I might have preferred a different technical solution to manage it (rather than a clunky portfolio application based on SharePoint lists and workflow) but I still think the principles were solid.

Fast-forward to 2016 and I’m working in a much smaller but rapidly-growing company and I’m, once again, trying to drive standardisation in our solutions (working with my peers in the Architecture Practice). This time I’m taking a much more lightweight approach and, I hope, bringing key stakeholders in our business on the journey too.

We have:

  • Standards: levels of quality or attainment used as a measure or model. These are what we consider as “normal”.
  • Principles: fundamental truths or propositions that serve as a foundation for a system or behaviour. These are the rules when designing or architecting a system – our commandments.

We’ve kept these simple – there are a handful of standards and around a dozen principles – but they seem to be serving us well so far.

Then, there’s our reference architecture. The team has defined three levels:

  • An overall reference model that provides a high level structure with domains around which we can build a set of architecture patterns.
  • The technical architecture – with an “architecture pattern” per domain. At this point, the patterns are still technology-agnostic – for example a domain called “Datacentre Services” might include “Compute”, “Storage”, “Location”, “Scalability” and so on. Although our business is purely built around the Microsoft platform, any number of products could theoretically be aligned to what is really a taxonomy of solution components – the core building blocks for our solutions.
  • “Design patterns” – this is where products come into play, describing the approach we take to implementing each component, with details of what it is, why it would be used, some examples, one or more diagrams with a pattern for implementing the solution component and some descriptive text including details such as dependencies, options and lifecycle considerations. These patterns adhere to our architectural standards and principles, bringing the whole thing full-circle.

It’s fair to say that what we’ve done so far is about technology solutions – there’s still more to be done to include business processes and on towards Enterprise Architecture but we’re heading in the right direction.

I can’t blog the details here – this is my personal blog and our reference architecture is confidential – but I’m pleased with what we’ve created. Defining all of the design patterns is laborious but will be worthwhile. The next stage is to make sure that all of the consulting teams are on board and aligned (during which I’m sure there will be some changes made to reflect the views of the guys who live and breathe technology every day – rather than just “arm waving” and “colouring in” as I do!) – but I’m determined to make this work in a collaborative manner.

Our work will never be complete – there’s a balance to strike between “standardisation” and “innovation” (an often mis-used word, hence the quotation marks). Patterns don’t have to be static – and we have to drive forward and adopt new technologies as they come on stream – not allowing ourselves to stagnate in the comfort of “but that’s how we’ve always done it”. Nevertheless, I’m sure that this approach has merit – if only through reduced risk and improved consistency of delivery.

Image credit: Blueprint, by Will Scullin on Flickr (used under a Creative Commons Attribution 2.0 Generic licence).

Technology standardisation – creating consistency in solution architecture

One of the things about my current role is that I can’t really blog much about what I do at work – it’s mostly not suitable for sharing outside the company.  That’s why I was pleased when my manager suggested I create a white paper outlining the technology standardisation approach that Fujitsu takes in the UK and Ireland. That is, in a nutshell, what I’ve been working to establish for the last year.

The problem is that, without careful control, the inherent complexity of integrating products and services from a variety of sources can be challenging and costly. Solution architects and designers are trained to create innovative solutions to problems but, all too often, those solutions involve bespoke elements or unproven technologies that increase risk and drive up the cost of delivery. At the same time, there are pressures to reduce costs whilst maintaining business benefit – pressures that run completely contrary to the idea of bespoke systems designed to meet each and every customer’s needs. Part of the answer lies in standardisation – or, as I like to think of it, creating consistency in solution architecture.

My technology standardisation paper was published last month and can be found on the Fujitsu UK website.

I’ll be moving on to something new in a short while (watch this space and I’m sure I’ll be able to talk about it soon), so it’s great to look back and say “that’s what I’ve been doing”.

Where’s the line between [IT] architecture and design?

This week, I’m attending a training course on the architecture and design of distributed enterprise systems and yesterday morning, somewhat mischievously, I asked the course instructor where he draws the distinction between architecture and design.

We explored an analogy based on a traditional (building) architect, in which I suggested an architect knows the methods and materials to use but would not actually carry out the construction work. But the instructor, Selvyn Wright, made an interesting point – if we admire a building for its stunning architecture, often we’re really talking about the design. Instead, he suggested that architecture is a style or philosophy whereas design is about the detail.

In the UK IT industry, the last 10 years or so have seen a trend towards “architect” job roles in business and IT. Maybe this is because the UK is one of the few countries where being a great engineer is discouraged and technical skills are undervalued, rather than held up as a worthy profession.

In reality, the line between an architect and a designer is a grey one and I had frequent conversations with my previous manager on the topic (usually whilst discussing my own development on the path towards enterprise architecture – another commonly mis-used term, probably best saved for another post). Regardless, architecture and design do get mixed up (in an IT context) and there comes a point in the transition when one can look back and say “ah yes, now I understand”.

I suggest that point is when someone is able to abstract the logical capabilities of a system from the technology itself. At that point, they’ve probably crossed the line from a designer to an architect.

In other words, an architect can understand the business problem from a logical perspective and create one or more possible solutions. The required capabilities form the logical model and the physical elements are those which need to be bought, built or reused to match those capabilities. Architecture is about recognising and employing patterns.

Having decided what an architect is, we come to the issue of design. Indeed, design is a word more often used in a creative sense – a website designer, for example, has a distinct set of skills that are very different to those of a website developer. Ditto for a user interface designer or for a user experience designer. Within the context of the architect vs. designer debate, however, a designer can be viewed as someone whose task it is to work out how to configure the physical elements of the solution and create designs for elements of software applications, or of IT infrastructure.

Architect or designer, either way, I still struggle with the term “architected”. Designed seems to be a more grammatically correct use of English (adding further complexity to the design vs. architecture debate) but it seems increasingly common to architect (as a verb) these days…

Microsoft infrastructure architecture considerations: part 7 (data centre consolidation)

Over the last few days, I’ve written a series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, based on the MCS Talks: Enterprise Infrastructure series. Just to summarise, the posts so far have been:

  1. Introduction.
  2. Remote offices.
  3. Controlling network access.
  4. Virtualisation.
  5. Security.
  6. High availability.

In this final infrastructure architecture post, I’ll outline the steps involved in building an infrastructure for data centre consolidation. In this example the infrastructure is a large cluster of servers running a virtual infrastructure; however, many of the considerations would be the same for a non-clustered or a physical solution:

  1. The first step is to build a balanced system – whilst it’s inevitable that there will be a bottleneck at some point in the architecture, by designing a balanced system the workloads can be mixed to even out the overall demand – at least, that’s the intention. Using commodity hardware, it should be possible to provide a balance between cost and performance using the following configuration for the physical cluster nodes:
    • 4-way quad-core (16 cores in total) Intel Xeon or AMD Opteron-based server (with Intel VT/AMD-V and NX/XD processor support).
    • 2GB RAM per processor core minimum (4GB per core recommended).
    • 4Gbps Fibre Channel storage solution.
    • Gigabit Ethernet NIC (onboard) for virtual machine management and migration/cluster heartbeat.
    • Quad-port gigabit Ethernet PCI Express NIC for virtual machine access to the network.
    • Windows Server 2008 x64 Enterprise or Datacenter edition (Server Core installation).
  2. Ensure that Active Directory is available (at least one physical DC is required in order to get the virtualised infrastructure up and running).
  3. Build the physical servers that will provide the virtualisation farm (16 servers).
  4. Configure the SAN storage.
  5. Provision the physical servers using System Center Configuration Manager (again, a physical server will be required until the cluster is operational) – the servers should be configured as a 14 active/2 passive node failover cluster.
  6. Configure System Center Virtual Machine Manager for virtual machine provisioning, including the necessary PowerShell scripts and the virtual machine repository.
  7. Configure System Center Operations Manager (for health monitoring – both physical and virtual).
  8. Configure System Center Data Protection Manager for virtual machine snapshots (i.e. use snapshots for backup).
  9. Replicate snapshots to another site within the SAN infrastructure (i.e. provide location redundancy).

This is a pretty high-level view, but it shows the basic steps in order to create a large failover cluster with the potential to run many virtualised workloads. The basic principles are there and the solution can be scaled down or out as required to meet the needs of a particular organisation.
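
As a rough illustration of step 5, the sketch below shows how the 16-node cluster itself might be formed from PowerShell. Treat it as indicative only – the FailoverClusters module actually shipped after this post was written (with Windows Server 2008 R2) – and the node names, cluster name and IP address are entirely made up.

```powershell
# Sketch only: forming the 14-active/2-passive node cluster described in step 5, using
# the FailoverClusters PowerShell module (Windows Server 2008 R2 and later - the
# original steps pre-date these cmdlets). Node names, IP address and paths are hypothetical.
Import-Module FailoverClusters

$nodes = 1..16 | ForEach-Object { "HV-NODE{0:D2}" -f $_ }   # HV-NODE01..HV-NODE16

# Create the cluster with a static administration IP address
New-Cluster -Name "HVCLUSTER01" -Node $nodes -StaticAddress "10.0.10.50"

# Present the shared Fibre Channel LUNs configured in step 4 as cluster storage
Get-ClusterAvailableDisk -Cluster "HVCLUSTER01" | Add-ClusterDisk
```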

The MCS Talks series is still running (and there are additional resources to complement the first session on infrastructure architecture). I also have some notes from the second session on core infrastructure that are ready to share so, if you’re finding this information useful, make sure you have subscribed to the RSS feed!

Microsoft infrastructure architecture considerations: part 6 (high availability)

In this instalment of the series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, based on the MCS Talks: Enterprise Infrastructure series, I’ll look at some of the architecture considerations relating to providing high availability through redundancy in the infrastructure.

The whole point of high availability is ensuring that there is no single point of failure. In addition to hardware redundancy (RAID on storage, multiple power supplies, redundant NICs, etc.) consideration should be given to operating system or application-level redundancy.

For some applications, redundancy is inherent:

  • Active Directory uses a multiple-master replicated database.
  • Exchange Server 2007 offers various replication options (local, clustered or standby continuous replication).
  • SQL Server 2008 has enhanced database mirroring.

Other applications may be more suited to the provision of redundancy in the infrastructure – either using failover clusters (e.g. for SQL Server 2005, file and print servers, virtualisation hosts, etc.) or with network load balancing (NLB) clusters (e.g. ISA Server, Internet Information Services, Windows SharePoint Services, Office Communications Server, read-only SQL Server, etc.). In many cases the choice is made for you by the application vendor, as some applications (e.g. ISA Server, SCOM and SCCM) are not failover cluster-friendly.

Failover clustering (the new name for Microsoft Cluster Service) is greatly improved in Windows Server 2008, with simplified support (the cluster hardware compatibility list has been replaced by a cluster validation tool, although the hardware is still required to be certified for Windows Server 2008), support for more nodes (the maximum is up from 8 to 16), support for multiple-subnet geoclusters and IPv6, as well as new management tools and enhanced security.
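
As an aside, the cluster validation tool mentioned above can also be driven from the command line. This is only a sketch (the FailoverClusters PowerShell module arrived with Windows Server 2008 R2, after this post was written) and the server names are hypothetical:

```powershell
# Sketch only: running the cluster validation tests from PowerShell rather than the
# wizard (FailoverClusters module, Windows Server 2008 R2 onwards). Names are hypothetical.
Import-Module FailoverClusters

# Validate the candidate nodes before forming (or after changing) a cluster - the HTML
# report highlights any configuration that would not be supported
Test-Cluster -Node "SQLNODE01", "SQLNODE02" -ReportName "C:\Reports\SQLClusterValidation"
```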

In the final post in this series, I’ll take a look at how to build an infrastructure for data centre consolidation.

Microsoft infrastructure architecture considerations: part 5 (security)

Continuing the series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, based on the MCS Talks: Enterprise Infrastructure series, in this post I’ll look at some of the infrastructure architecture considerations relating to security.

The main security challenges which organisations are facing today include: management of access rights; provisioning and de-provisioning (with various groups of users – internal, partners and external); protecting the network boundaries (as there is a greater level of collaboration between organisations); and controlling access to confidential data.

Most organisations today need some level of integration with partners and the traditional approach has been one of:

  • NT Trusts (rarely used externally) – not granular enough.
  • Shadow accounts with matching usernames and passwords – difficult to administer.
  • Proxy accounts shared by multiple users – with no accountability and a consequential lack of security.

Federated rights management is a key piece of the “cloud computing” model and allows for two organisations to trust one another (cf. an NT trust) but without the associated overheads – and with some granularity. The federated trust is loosely coupled – meaning that there is no need for a direct mapping between users and resources – instead an account federation server exists on one side of the trust and a resource federation server exists on the other.
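
In Microsoft terms, that federation model is typically built with Active Directory Federation Services (AD FS). As a purely illustrative sketch – using the ADFS PowerShell module from later versions of Windows Server, with a made-up partner name and metadata URL – registering a federated partner application (a “relying party trust”) might look something like this:

```powershell
# Sketch only: creating a relying party trust on an AD FS federation server, using the
# ADFS module from Windows Server 2012 onwards. The trust name and metadata URL are
# hypothetical.
Import-Module ADFS

# Create the trust from the partner's published federation metadata...
Add-AdfsRelyingPartyTrust -Name "Partner Resource Portal" `
    -MetadataUrl "https://federation.partner.example.com/FederationMetadata/2007-06/FederationMetadata.xml"

# ...then attach an issuance authorisation policy (the permit-everyone template here;
# a real trust would normally be scoped to specific groups of users)
Set-AdfsRelyingPartyTrust -TargetName "Partner Resource Portal" `
    -IssuanceAuthorizationRules '@RuleTemplate = "AllowAllAuthzRule" => issue(Type = "http://schemas.microsoft.com/authorization/claims/permit", Value = "true");'
```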

As information is shared with customers and partners traditional location-based methods of controlling information (firewalls, access control lists and encryption) have become ineffective. Users e-mail documents back and forth, paper copies are created as documents are printed, online data storage has become available and portable data storage devices have become less expensive and more common with increasing capacities. This makes it difficult to set a consistent policy for information management and then to manage and audit access. It’s almost inevitable that there will be some information loss or leakage.

(Digital) rights management is one solution – most people are familiar with DRM on music and video files from the Internet and the same principles may be applied to IT infrastructure. Making use of 128-bit encryption together with policies for access and usage rights, rights management provides persistent protection to control access across the information lifecycle. Policies are embedded within the document (e.g. the ability to print, view, edit, or forward a document – or even its expiration) and access is only provided to trusted identities. It seems strange to me that we are all so used to the protection of assets with perceived worth to consumers but that commercial and government documentation is so often left unsecured.

Of course, security should be all-pervasive, and this post has just scratched the surface, looking at a couple of challenges faced by organisations as the network boundaries are eroded by increased collaboration. In the next post of this series, I’ll take a look at some of the infrastructure architecture considerations for providing high availability solutions.

Microsoft infrastructure architecture considerations: part 4 (virtualisation)

Continuing the series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, based on the MCS Talks: Enterprise Infrastructure series, in this post I’ll look at some of the architectural considerations for using virtualisation technologies.

Virtualisation is a huge discussion point right now but before rushing into a virtualisation solution it’s important to understand what the business problem is that needs to be solved.

If an organisation is looking to reduce data centre hosting costs through a reduction in the overall heat and power requirements then virtualisation may help – but if it wants to run applications that rely on legacy operating system releases (like Windows NT 4.0) then the real problem is one of support – the operating system (and probably the application too) are unsupported, regardless of whether or not you can physically touch the underlying platform!

Even if virtualisation does look like it could be a solution (or part of one), it’s important to consider management – I often come up against a situation whereby, for operational reasons, virtualisation is more expensive because the operations teams see the host machine (even if it is just a hypervisor) as an extra server that needs to be administered. That’s a rather primitive way to look at things, but there is a real issue there – management is the most important consideration in any virtualisation solution.

Microsoft believes that it has a strong set of products when it comes to virtualisation, splitting the technologies out as server, desktop, application and presentation virtualisation, all managed with products under the System Center brand.

Microsoft view of virtualisation

Perhaps the area where Microsoft is weakest at the moment (relying on partners like Citrix and Quest to provide a desktop broker service) is desktop virtualisation. Having said that, it’s worth considering the market for a virtualised desktop infrastructure – with notebook PC sales outstripping those of desktops, it could be viewed as a niche market. This is further complicated by the various alternatives to a virtual desktop running on a server somewhere: remote boot of a diskless PC from a SAN; blade PCs (with an RDP connection from a thin client); or a server-based desktop (e.g. using presentation virtualisation).

Presentation virtualisation is also a niche technology, as it failed to oust so-called “thick client” technologies from the infrastructure. Even so, it’s not uncommon (think of it as a large niche – if that’s not an oxymoron!) and it works particularly well where a large volume of data needs to be accessed in a central database, as the remote desktop client is local to the data – rather than to the (possibly remote) user. This separation of the running application from the point of control allows for centralised data storage and a lower cost of management for applications (including session brokering capabilities) and, using new features in Windows Server 2008 (or with third-party products on older releases of Windows Server), this may be further enhanced with the ability to provide gateways for RPC/HTTPS access (with a brokering capability, avoiding the need for a full VPN solution) and web access/RemoteApp sessions (terminal server sessions which appear as locally-running applications).

The main problem with presentation virtualisation is incompatibility between applications, or between the desktop operating system and an application (which, for many, is the main barrier to Windows Vista deployment) – that’s where application virtualisation may help. Microsoft Application Virtualization (App-V – formerly known as SoftGrid) attempts to solve this issue of application-to-application incompatibility as well as aiding application deployment (with no requirement to test for application conflicts). To do this, App-V virtualises the application configuration (removing it from the operating system) and each application runs in its own runtime environment with complete isolation. This means that applications can run on clients without being “installed” (so it’s easy to remove unused applications) and allows administration from a central location.

The latest version of App-V is available for a full infrastructure (Microsoft System Center Application Virtualization Management Server), a lightweight infrastructure (Microsoft System Center Application Virtualization Streaming Server) or in MSI-delivered mode (Microsoft System Center Application Virtualization Standalone Mode).
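
For what it’s worth, later App-V releases added a PowerShell client module, which gives a feel for the “no install” model described above. This sketch uses the App-V 5.x cmdlets (which post-date the version discussed here) and a made-up package path:

```powershell
# Sketch only: adding and publishing a sequenced application with the App-V 5.x
# client cmdlets (later than the SoftGrid-era release described above). The package
# path is hypothetical.
Import-Module AppvClient

# Add the virtualised package to the client's local store...
$package = Add-AppvClientPackage -Path "\\appv-content\packages\ContosoApp\ContosoApp.appv"

# ...and publish it globally so it appears for all users without a local install
Publish-AppvClientPackage -PackageId $package.PackageId -VersionId $package.VersionId -Global
```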

Finally, there’s host (or server) virtualisation – the most common form of virtualisation, but still deployed on only a fraction of the world’s servers – although there are few organisations that would not virtualise at least a part of their infrastructure, given a green-field scenario.

The main problems which host virtualisation can address are:

  • Optimising server investments by consolidating roles (driving up utilisation).
  • Business continuity management – a whole server becomes a few files, making it highly portable (albeit introducing security and management issues to resolve).
  • Dynamic data centre.
  • Test and development.

Most 64-bit versions of Windows Server have enterprise-ready virtualisation built in (in the shape of Hyper-V) and competitor solutions are available for a 32-bit environment (although most hardware purchased in recent years is 64-bit capable and has the necessary processor support). Windows NT is not supported on Hyper-V, however – so if there are legacy NT-based systems to virtualise, then Virtual Server 2005 R2 may be a more appropriate technology selection (NT 4.0 is still out of support, but at least it is a scenario that has been tested by Microsoft).
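
To give a flavour of how lightweight guest provisioning on a Hyper-V host can be, here’s a sketch using the Hyper-V PowerShell module. That module arrived with Windows Server 2012 (the Windows Server 2008 release discussed above was managed via WMI or System Center Virtual Machine Manager), and the names, paths and sizes are made up:

```powershell
# Sketch only: creating a guest on a Hyper-V host. Names, paths and sizes are hypothetical.
Import-Module Hyper-V

New-VM -Name "APP01" `
       -MemoryStartupBytes 4GB `
       -NewVHDPath "D:\VMs\APP01\APP01.vhdx" `
       -NewVHDSizeBytes 60GB `
       -SwitchName "External vSwitch"

Set-VMProcessor -VMName "APP01" -Count 2
Start-VM -Name "APP01"
```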

In the next post in this series, I’ll take a look at some of the infrastructure architecture considerations relating to security.

Microsoft infrastructure architecture considerations: part 3 (controlling network access)

Continuing the series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, based on the MCS Talks: Enterprise Infrastructure series, in this post, I’ll look at some of the considerations for controlling access to the network.

Although network access control (NAC) has been around for a few years now, Microsoft’s network access protection (NAP) is new in Windows Server 2008 (previous quarantine controls were limited to VPN connections).

It’s important to understand that NAC/NAP are not security solutions but are concerned with network health – assessing an endpoint and comparing its state with a defined policy, then removing access for non-compliant devices until they have been remediated (i.e. until the policy has been enforced).

The real question as to whether to implement NAC/NAP is whether or not non-compliance represents a business problem.

Assuming that NAP is to be implemented, then there may be different policies required for different groups of users – for example internal staff, contractors and visitors – and each of these might require a different level of enforcement; however, if the policy is to be applied, the enforcement options are as follows (a client-side configuration sketch follows the list):

  • DHCP – easy to implement but also easy to avoid by using a static IP address. It’s also necessary to consider the healthcheck frequency as it relates to the DHCP lease renewal time.
  • VPN – more secure but relies on the Windows Server 2008 RRAS VPN, so may require a third-party VPN solution to be replaced. In any case, full-VPN access is counter to industry trends as alternative solutions are increasingly used.
  • 802.1x – requires a complex design to support all types of network user and not all switches support dynamic VLANs.
  • IPSec – the recommended solution – built into Windows, works with any switch, router or access point, provides strong authentication and (optionally) encryption. In addition, unhealthy clients are truly isolated (i.e. not just placed in a VLAN with other clients to potentially affect or be affected by other machines). The downside is that NAP enforcement with IPSec requires computers to be domain-joined (so will not help with visitors’ or contractors’ PCs) and is fairly complex from an operational perspective, requiring implementation of the health registration authority (HRA) role and a PKI solution.
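
Whichever enforcement method is chosen, the client side can be inspected and configured through the netsh nap context. The sketch below is indicative only – the enforcement client IDs vary by method and should be read from the configuration listing rather than taken from here:

```powershell
# Sketch only: client-side NAP configuration, driven from PowerShell via netsh.
# Confirm the enforcement client ID against the "show configuration" output - the
# value below is indicative only.

# The NAP agent service must be running for any enforcement method to work
Set-Service -Name napagent -StartupType Automatic
Start-Service -Name napagent

# List the enforcement clients on this machine and whether each one is enabled
netsh nap client show configuration

# Show the current health/compliance state reported by the NAP agent
netsh nap client show state

# Enable the chosen enforcement client (here, the ID generally associated with the
# IPSec relying party client - check it against the listing above)
$enforcementClientId = 79619
netsh nap client set enforcement ID = $enforcementClientId ADMIN = "ENABLE"
```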

In the next post in this series, I’ll take a look at some of the architectural considerations for using virtualisation technologies within the infrastructure.

Microsoft infrastructure architecture considerations: part 2 (remote offices)

Continuing from my earlier post which sets the scene for a series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, in this post, I’ll look at some of the considerations for remote offices.

Geographically dispersed organisations face a number of challenges in order to support remote offices including: WAN performance/reliability; provisioning new services/applications/servers; management; remote user support; user experience; data security; space; and cost.

One approach that can help with some (not all) of these concerns is placing a domain controller (DC) in each remote location; but this has been problematic until recently because it increases the overall number of servers (it’s not advisable to co-locate other services on a domain controller because administration can’t be delegated to a local administrator on a domain controller and the number of Domain Admins should be kept to a minimum) and it’s a security risk (physical access to the domain controller computer makes a potential hacker’s job so much simpler). For that reason, Microsoft introduced read only domain controllers (RODCs) in Windows Server 2008.

There are still some considerations as to whether this is the appropriate solution though. Benefits include:

  • Administrative role separation.
  • Faster logon times (improved access to data).
  • Isolated corruption area.
  • Improved security.

whilst other considerations and potential impacts include:

  • The need for a schema update.
  • Careful RODC placement.
  • Impact on directory-enabled applications.
  • Possibility of site topology design changes.
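
Schema updates and placement considerations aside, the promotion itself is straightforward to script. As an illustrative sketch only – the ADDSDeployment module shown here arrived with Windows Server 2012, whereas the Windows Server 2008-era process used adprep (/rodcprep) and dcpromo – with hypothetical domain, site and group names:

```powershell
# Sketch only: promoting a branch office RODC with the ADDSDeployment module
# (Windows Server 2012 onwards). Domain, site and group names are hypothetical;
# the cmdlet prompts for the safe mode (DSRM) password.
Import-Module ADDSDeployment

Install-ADDSDomainController `
    -DomainName "corp.example.com" `
    -ReadOnlyReplica `
    -SiteName "Branch-Office-01" `
    -InstallDns `
    -DelegatedAdministratorAccountName "CORP\Branch01-Admins" `
    -Credential (Get-Credential "CORP\domain-admin")
```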

Regardless of whether a remote office DC (either using the RODC capabilities or as a full DC) is deployed, server sprawl (through the introduction of branch office servers for a variety of purposes) can be combatted with the concept of a branch “appliance” – not in the true sense of a piece of dedicated hardware running an operating system and application that is heavily customised to meet the needs of a specific service – but by applying appliance principles to server design and running multiple workloads in a manner that allows for self-management and healing.

The first step is to virtualise the workloads. Hyper-V is built into Windows Server 2008 and the licensing model supports virtualisation at no additional cost. Using the server core installation option, the appliance (physical host) management burden is reduced with a smaller attack surface and reduced patching. Multiple workloads may be consolidated onto a single physical host (increasing utilisation and removing end-of-life hardware) but there are some downsides too:

  • There’s an additional server to manage (the parent/host partition) and child/guest partitions will still require management but tools like System Center Virtual Machine Manager (SCVMM) can assist (particularly when combined with other System Center products).
  • A good business continuity plan is required – the branch office “appliance” becomes a single point of failure and it’s important to minimise the impact of this.
  • IT staff skills need to be updated to manage server core and virtualisation technologies.

So, what about the workloads on the branch office “appliance”? First up is the domain controller role (RODC or full DC) and this can be run as a virtual machine or as an additional role on the host. Which is “best” is entirely down to preference – running the DC alongside Hyper-V on the physical hardware means there is one less virtual machine to manage and operate (multiplied by the number of remote sites) but running it in a VM allows the DC to be “sandboxed”. One important consideration is licensing – if Windows Server 2008 standard edition is in use (which includes one virtual operating system environment, rather than enterprise edition’s four, or datacenter edition’s unlimited virtualisation rights) then running the DC on the host saves a license – and there is still some administrative role separation as the DC and virtualisation host will probably be managed centrally, with a local administrator taking some responsibility for the other workloads (such as file services).

That leads on to a common workload – file services. A local file server offers a good user experience but is often difficult to back up and manage. One solution is to implement DFS-R in a hub and spoke arrangement and to keep the backup responsibility in the data centre. If the remote file server fails, then replication can be used to restore from a central server. Of course, DFS-R is not always ideal for replicating large volumes of data; however, the DFS arrangement allows users to view local and remote data as though it were physically stored in a single location and there have been a number of improvements in Windows Server 2008 DFS-R (cf. Windows Server 2003 R2). In addition, SMB 2.0 is less “chatty” than previous implementations, allowing for performance benefits when using a Windows Vista client with a Windows Server 2008 server.
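
As an illustration of the hub and spoke arrangement, the sketch below builds a replication group for a single branch. It uses the DFSR PowerShell module from Windows Server 2012 onwards (the Windows Server 2008-era tooling was dfsradmin.exe and the DFS Management console) and all of the server, group and path names are made up:

```powershell
# Sketch only: a hub-and-spoke replication group for one branch file server, using the
# DFSR module (Windows Server 2012 onwards). Server names, folder names and paths are
# hypothetical.
Import-Module DFSR

New-DfsReplicationGroup -GroupName "Branch01-Files"
New-DfsReplicatedFolder -GroupName "Branch01-Files" -FolderName "UserData"
Add-DfsrMember          -GroupName "Branch01-Files" -ComputerName "HUB-FS01", "BRANCH01-FS01"

# Connect the hub and the branch (a sending and receiving connection pair is created by default)
Add-DfsrConnection -GroupName "Branch01-Files" -SourceComputerName "HUB-FS01" -DestinationComputerName "BRANCH01-FS01"

# The hub holds the authoritative copy and is the one that gets backed up centrally
Set-DfsrMembership -GroupName "Branch01-Files" -FolderName "UserData" `
    -ComputerName "HUB-FS01" -ContentPath "E:\DFSRoots\UserData" -PrimaryMember $true -Force
Set-DfsrMembership -GroupName "Branch01-Files" -FolderName "UserData" `
    -ComputerName "BRANCH01-FS01" -ContentPath "D:\UserData" -Force
```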

Using these methods, it should be possible to avoid remote file server backups and remote DCs should not need to be backed up either (Active Directory is a multi-master replicated database so it has an inherent disaster recovery capability). All that’s required is some method of rebuilding a failed physical server – and the options there will depend on the available bandwidth. My personal preference is to use BITS to ensure that the remote server always holds a copy of the latest build image on a separate disk drive and then to use this to rebuild a failed server with the minimum of administrator intervention or WAN traffic.
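
For completeness, here’s roughly what that BITS-based approach looks like, using the BitsTransfer PowerShell module (available from PowerShell 2.0 onwards). It’s a sketch only and the share and local paths are hypothetical:

```powershell
# Sketch only: trickling the latest build image out to a branch "appliance" with BITS,
# so a rebuild doesn't saturate the WAN. The share and local paths are hypothetical.
Import-Module BitsTransfer

# Queue the image copy as a background job so it yields the WAN to user traffic
Start-BitsTransfer `
    -Source      "\\hub-deploy01\images\branch-standard-build.wim" `
    -Destination "E:\Staging\branch-standard-build.wim" `
    -DisplayName "Branch build image refresh" `
    -Priority    Low `
    -Asynchronous

# Later (for example from a scheduled task), commit any transfers that have finished
Get-BitsTransfer -Name "Branch build image refresh" |
    Where-Object { $_.JobState -eq "Transferred" } |
    Complete-BitsTransfer
```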

In the next post in this series, I’ll take a look at some of the considerations for using network access protection to manage devices that are not compliant with the organisation’s security policies.

Microsoft infrastructure architecture considerations: part 1 (introduction)

Last week, I highlighted the MCS Talks: Enterprise Infrastructure series of webcasts that Microsoft is running to share the field experience of Microsoft Consulting Services (MCS) in designing and architecting Microsoft-based infrastructure solutions – and yesterday’s post picked up on a key message about software as a service/software plus services from the infrastructure futures section of session 1: infrastructure architecture.

Over the coming days and weeks, I’ll highlight some of the key messages from the rest of the first session, looking at some of the architectural considerations around:

  • Remote offices.
  • Controlling network access.
  • Virtualisation.
  • Security.
  • High availability.
  • Data centre consolidation.

Whilst much of the information will be from the MCS Talks, I’ll also include some additional information where relevant but, before diving into the details, it’s worth noting that products rarely solve problems. Sure enough, buying a software tool may fix one problem, but it generally adds to the complexity of the infrastructure and in that way does not get to the root issue. Infrastructure optimisation (even a self-assessment) can help to move IT conversations to a business level, as well as allowing the individual tasks that are required to meet the overall objectives to be prioritised.

Even though the overall strategy needs to be based on business considerations, there are still architectural considerations to take into account when designing the technical solution and, even though this series of blog posts refers to Microsoft products, there is no reason (architecturally) why alternatives should not be considered.