Providing fast mailbox access to Exchange Online in virtualised desktop scenarios

This content is 7 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

In last week’s post that provided a logical view on end user computing (EUC) architecture, I mentioned two sets of challenges that I commonly see with customers:

  1. “We invested heavily in thin client technologies and now we’re finding them to be over-engineered and expensive with multiple layers of technology to manage and control.”
  2. “We have a managed Windows desktop running <insert legacy version of Windows and Office here> but the business wants more flexibility than we can provide.”

What I didn’t say is that I’m seeing a lot of Microsoft customers who have a combination of these and who are refreshing parts of their EUC provisioning without looking at the whole picture – for example, moving email from Exchange to Exchange Online but not adopting other Office 365 workloads and not updating their Office client applications (most notably Outlook).

In the last month, I’ve seen at least three organisations who have:

  • An investment in non-persistent virtualised desktops (using technology products from Citrix and others).
  • A stated objective to move email to Exchange Online.
  • Office 365 Enterprise E3 or higher subscriptions (i.e. the licences for Office 365 ProPlus – for subscription-based evergreen Office clients) but no immediate intention to update Office from current levels (typically Office 2010).

These organisations are, in my opinion, making life unnecessarily difficult for themselves.

The technical challenges with such a solution come down to some basic facts:

  • If you move your email to the cloud, it’s further away in network terms. You will introduce latency.
  • Microsoft and Citrix both recommend caching Exchange mailbox data in Outlook.
  • Office 365 is designed to work with recent (2013 and 2016) versions of Office products. Previous versions may work, but with reduced functionality. For example, Outlook 2013 and later have the ability to control the amount of data cached locally – Outlook 2010 does not.

Citrix’s advice (in the Citrix Deployment Guide for Microsoft Office 365 for Citrix XenApp and XenDesktop 7.x) is to use Outlook Cached Exchange Mode; however, they also state “For XenApp or non-persistent VDI models the Cached Exchange Mode .OST file is best located on an SMB file share within the XenApp local network”. My experience suggests that, where Citrix customers do not use Outlook Cached Exchange Mode, they will have a poor user experience connecting to mailboxes.

Often, a migration to Office 365 (e.g. to make use of cloud services for email, collaboration, etc.) is best combined with Office application updates. Whilst Outlook 2013 and later versions can control the amount of data that is cached, in a virtualised environment this represents a user experience trade-off between reducing login times and reducing the impact of slow network access to the mailbox.
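
For anyone experimenting with those settings, here’s a rough sketch (in Python, using the standard winreg module) of the two per-user registry values that, as I understand it, sit behind the relevant Group Policy settings: ForceOSTPath to relocate the .OST file (e.g. into a profile container or onto faster local disk) and SyncWindowSetting to limit how many months of mail Outlook 2013/2016 caches. The Office version key (16.0) and the example paths are assumptions – in practice you’d deploy these via Group Policy rather than script them:

```python
# Rough sketch (not production code): two per-user registry settings that, to the
# best of my knowledge, control where Outlook places its .OST cache and how many
# months of mail Outlook 2013/2016 keeps locally. Paths assume Office 2016 (16.0) -
# verify against your Office version and the Office Group Policy ADMX documentation.
import winreg

OFFICE_VERSION = "16.0"  # assumption: Office 2016; use 15.0 for Office 2013

def set_ost_path(path):
    """Redirect the .OST file, e.g. into a profile container or onto a local SSD."""
    key = winreg.CreateKey(
        winreg.HKEY_CURRENT_USER,
        rf"Software\Microsoft\Office\{OFFICE_VERSION}\Outlook")
    winreg.SetValueEx(key, "ForceOSTPath", 0, winreg.REG_EXPAND_SZ, path)
    winreg.CloseKey(key)

def set_sync_window(months):
    """Limit the Cached Exchange Mode sync window (0 = all mail; 1, 3, 6, 12, 24...)."""
    key = winreg.CreateKey(
        winreg.HKEY_CURRENT_USER,
        rf"Software\Policies\Microsoft\Office\{OFFICE_VERSION}\Outlook\Cached Mode")
    winreg.SetValueEx(key, "SyncWindowSetting", 0, winreg.REG_DWORD, months)
    winreg.CloseKey(key)

if __name__ == "__main__":
    set_ost_path(r"D:\OutlookCache")  # hypothetical cache location
    set_sync_window(3)                # cache three months of mail locally
```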

Put simply: you can’t have fast mailbox access to Exchange Online without caching on virtualised desktops, unless you want to add another layer of software complexity.

So, where does that leave customers who are unable or unwilling to follow Microsoft’s and Citrix’s advice? Effectively, there are two alternative approaches that may be considered:

  • The use of Outlook on the Web to access mailboxes using a browser. The latest versions of Outlook on the Web (formerly known as Outlook Web Access) are extremely well-featured and many users find that they are able to use the browser client to meet their requirements.
  • Third-party solutions, such as those from FSLogix, can be used to create “profile containers” for user data, such as cached mailbox data.

Using faster (SSD) disks for XenApp servers and improving the speed of the network connection (including the Internet connection) may also help but these are likely to be expensive options.

Alternatively, take a look at the bigger picture – go back to basics and look at how best to provide business users with a more flexible approach to end user computing.

Short takes: Flexible working and data protection for mobile devices

This content is 12 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

It’s been another busy week and I’m still struggling to get a meaningful volume of blog posts online so here are the highlights from a couple of online events I attended recently…

Work smarter, not harder… the art of flexible working

Citrix Online has been running a series of webcasts to promote its GoToMeeting platform and I’ve attended a few of them recently. The others have been oriented towards presenting but, this week, Lynne Copp from the Work Life Company (@worklifecompany) was talking about embracing flexible working. As someone who has worked primarily from home for a number of years now, it would have been great for me to get a bit more advice on how to achieve a better work/life balance (it was touched upon, but most of the session seemed to be targeted at how organisations need to change to embrace flexible working practices), but some interesting resources were also made available.

Extending enterprise data protection to mobile devices

Yesterday, I joined an IDC/Autonomy event looking at the impact of mobile devices on enterprise data protection.

IDC’s Carla Arend (@carla_arend) spoke about how IDC sees four forces of IT industry transformation in cloud, mobility, big data/analytics and social business. I was going to say “they forgot consumerisation” but then it was mentioned as an overarching topic. I was surprised that the term used to describe the ease of use of many consumer services was that we have been “spoiled”, but the principle that enterprise IT often lags behind is certainly valid!

Critically the “four forces of IT industry transformation” are being driven by business initiatives – and IT departments need to support those requirements. The view put forward was that IT organisations that embrace these initiatives will be able to get funding; whilst those who still take a technology-centric view will be forced to continue down the line of doing more with less (which seems increasingly unsustainable to me…).

This shift has implications for data management and protection – managing data on-premises and in the cloud, archiving data generated outside the organisation (e.g. in social media, or other external forums), managing data on mobile devices, and deciding what to do with big data (store it all, or just some of the results?)

Looking at BYOD (which is inevitable for most organisations, with or without the CIO’s blessing!) there are concerns about: who manages the device; who protects it (IDC spoke about backup/archive but I would add encryption too); what happens to data when a device is lost/stolen, or when the device is otherwise replaced; and how can organisations ensure compliance on unmanaged devices?

Meanwhile, organisational application usage is moving outside traditional office applications too, with office apps, enterprise apps, and web apps running on increasing numbers of devices and new machine (sensor) and social media data sets being added to the mix (often originating outside the organisation). Data volumes create challenges too, as well as the variety of locations from which that data originates or resides. This leads to a requirement to carefully consider which data needs to be retained and which may be deleted.

Cloud services can provide some answers and many organisations expect to increasingly adopt cloud services for storage – whether that is to support increasing volumes of application data, or for PC backups. IDC is predicting that the next cloud wave will be around the protection of smart mobile devices.

There’s more detail in IDC’s survey results (European Software Survey 2012, European Storage Survey 2011) but I’ve certainly given the tl;dr view here…

Unfortunately I didn’t stick around for the Autonomy section… it may have been good but the first few minutes were feeling too much like a product pitch to me (and to my colleague who was also online)… sometimes I want the views, opinions and strategic view – thought leadership, rather than sales – and I did say it’s been a busy week!

Improving presentation content and style

This content is 13 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A few weeks ago now, I attended a webcast as part of a series run by Citrix to promote GoToMeeting. Rather than saying “hey, look at our product it does x, y and z”, Citrix used the product to host others giving advice on presentation techniques.

Ironically, some of the presentations were awful – I dropped out of one on “presenting with impact” as (in my opinion) it lacked any kind of impact; suffered from poor audio on the main presenter’s line; started out with irrelevant facts (which were later suggested as an approach to make a session memorable); and then launched into a poll before even getting going. I did later return to the recording of this session and picked up some points around presenting with passion, preparation, body language and the need for practice – but the fact remains that it initially turned me right off… (and I was surprised to find a professional communicator who hadn’t seen Prezi – although I’m not really a fan of that tool). Sadly, those first impressions stuck with me for the same presenter’s follow-up session on “communicating with impact”, which really failed to keep my attention (if I was being charitable, I might say that was perhaps as much an indictment of the delivery method as of the content).

Thankfully, another webcast was more useful – Roger Courville from 1080 Group spoke about “eight things we can do to improve virtual presentations” and, by and large, they were good tips (for face to face presentations too) so I’m sharing my notes here:

  1. Put a complete idea in the slide title – and keep slides visual for “picture superiority” (although the brain does see a few words as an image).
  2. Create a sense of presence – paint a vision to demonstrate 1:1 attention/facilitate a common connection.
  3. Draw the audience’s eyes to your slide’s main point – direct attention visually (additive) or reduce and simplify (subtractive) – make sure the audience doesn’t have to guess what the main point is!
  4. Keep “wholes” whole… and then build it out if you want to – i.e. show the big picture and then drill down into details.
  5. Analyse (who, what, where, when, why and how), synthesise (action or relationship), visualise (consider how things might look visually or spatially). It’s possible to get a tutorial (and template) for this tip by subscribing to the 1080 Group newsletter.
  6. Pause for power – in advance of a key point for a sense of anticipation or afterwards to allow the brain time to process. Pause for effect and pause for interactions. And, to add some insight from the communicating with impact presentation, allow silence, to give time to digest information and to add gravitas.
  7. Ask your audience what or when is best. Improve things based on feedback – either “on the fly” during the presentation, or by building an understanding over time.  And, although this wasn’t one of Roger’s tips, it seems like a good point to take another cue from the communicating with impact presentation: consider the audience’s DNA (demographic, needs, and attitude) – and be ready to flex your style.
  8. Start your next presentation by “storyboarding” (see the comments on “beyond bullet points” below) – think about the flow of the presentation (content), before filling in details (think how PowerPoint leads us to step straight in and start creating bullets!) – and “design in” interactions (demo, poll, etc.). By way of illustration, Roger also referred to Cliff Atkinson’s Beyond Bullet Points book which I confess I haven’t read but is structured around three core themes:
    1. Tell a story – you only have a few seconds to create an emotional impact.
    2. Distil your ideas – instead of throwing everything into the presentation, go into the minds of the audience and figure out what to communicate (with an effective structure).
    3. Create visual prompts – not just pretty slides but building out the storyboard to take the audience on a journey to an effective presentation.

Two more points I picked up that I thought were worthy of note:

  • The average soundbite dropped from 43 seconds in 1968 to less than 8 in 2010 – reflecting our reduction in attention span?
  • Slides don’t equal duration – more slides do not (necessarily) equal more content. [I particularly subscribe to this one!]

And a quote:

“The act of organising information is in itself an insight” [Edward Tufte]

Even the presentations I was less enamoured with offered some insight, like:

  • Get someone you trust to review your presentation style – and let them be frank, to tell you about your style, impact, and use of filler words like um and err (which come across as lacking in confidence).
  • Direct conversations with open and closed questions, together with summaries (for example, “let me just check…”, list key points and end with a closed question).
  • Online presentations lack feedback from listening noises (like those you might hear on a phone call).

Using this information, I’m hoping to improve my future presentations and, judging by the number of “death by PowerPoint” sessions that I attend, a few other people could learn from this too. There are also a few more resources that might come in handy:

Hyper-V R2 service pack 1, Dynamic Memory, RemoteFX and virtual desktops

This content is 14 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I have to admit that I’ve tuned out a bit on the virtualisation front over the last year. It seems that some vendors are ramming VDI down our throats as the answer to everything; meanwhile others are confusing virtualisation with “the cloud”. I’m also doing less hands-on work with technology these days, and I struggle to make a business case to fly over to Redmond for the MVP Summit, so I was glad when I was invited to join a call and take a look at some of the improvements Microsoft has made in Hyper-V as part of Windows Server 2008 R2 service pack 1.

Dynamic memory

There was a time when VMware criticised Microsoft for not having any Live Migration capabilities in Hyper-V but we’ve had them for a while now (since Windows Server 2008 R2).  Then there’s the whole device drivers in the hypervisor vs. drivers in the parent partition argument (I prefer hardware flexibility, even if there is the occasional bit of troubleshooting required, over a monolithic hypervisor and locked-down hardware compatibility list).  More recently the criticism has been directed at dynamic memory and I have to admit Microsoft didn’t help themselves with this either: first it was in the product, then it was out; and some evangelists and Product Managers said dynamic memory allocation was A Bad Thing:

“Sadly, another “me too” feature (dynamic memory) has definitely been dropped from the R2 release. I asked Microsoft’s Jeff Woolsey, Principal Group Program Manager for Hyper-V, what the problem was and he responded that memory overcommitment results in a significant performance hit if the memory is fully utilised and that even VMware (whose ESX hypervisor does have this functionality) advises against its use in production environments. I can see that it’s not a huge factor in server consolidation exercises, but for VDI scenarios (using the new RDS functionality), it could have made a significant difference in consolidation ratios.”

In case you’re wondering, that quote is taken from my notes from when this feature was dropped from Hyper-V in the R2 release candidate (it was previously demonstrated in the beta). Now that Microsoft has dynamic memory working, it’s apparently A Good Thing (Microsoft’s PR works like that – bad when Microsoft doesn’t have it, right up to the point when they do…).

To be fair, it turns out Microsoft’s dynamic memory is not the same as VMware’s – it’s all about over-subscription vs. over-commitment. Whereas VMware will overcommit memory and then de-duplicate to reclaim what it needs, Microsoft takes the approach of only providing each VM with enough memory to start up, monitoring performance and adding memory as required, and taking it back when applications are closed.

As for those consolidation ratio improvements: Michael Kleef, one of Microsoft’s Technical Program Managers in the Server and Cloud Division, has found that dynamic memory can deliver a 40% improvement in VDI density (Michael also spoke about this at TechEd Europe last year). Microsoft’s tests were conducted using the Login Virtual Session Indexer (LoginVSI) tool, which is designed to script virtual workloads and is used by many vendors to test virtualised infrastructure.

It turns out that, when implementing VDI solutions, disk I/O is the first problem, memory comes next, and only after that is fixed will you hit a processor bottleneck. Instead of allocating 1GB of RAM for each Windows 7 VM, Microsoft used dynamic memory with a 512MB VM (which is supported on Hyper-V).  There’s no need to wait for an algorithm to compute where memory can be reclaimed – instead the minimum requirement is provided, and additional memory is allocated on demand – and Microsoft claims that other solutions rely on weakened operating system security to get to this level of density.  There’s no need to tweak the hypervisor either.

Microsoft’s tests were conducted using HP and Dell servers with 96GB of RAM (the sweet spot above which larger DIMMs are required and so the infrastructure cost rises significantly). Using Dell’s reference architecture for Hyper-V R2, Microsoft managed to run the same workload on just 8 blades (instead of 12) using service pack 1 and dynamic memory, without ever exhausting server capacity or hitting the limits of unacceptable response times.

Dynamic memory uses Hyper-V/Windows’ ability to hot-add/remove memory, with the system constantly monitoring virtual machines for memory pressure (expanding using the configured memory buffer) or excess memory (in which case they become candidates for memory removal – although not immediately, in case the user restarts an application). Whilst it’s particularly useful in a VDI scenario, Microsoft say it also works well with web workloads and server operating systems, delivering a 25-50% density improvement.
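
To illustrate the behaviour described above (this is just my mental model, not Microsoft’s actual algorithm), here’s a toy Python sketch of a VM that starts with 512MB, keeps a configurable buffer of headroom above demand, hot-adds memory under pressure and reclaims it gradually afterwards:

```python
# Toy model of the dynamic memory behaviour described above - an illustration of
# startup RAM, a memory buffer and hot-add/remove, not Microsoft's implementation.

class VirtualMachine:
    def __init__(self, name, startup_mb=512, maximum_mb=4096, buffer_pct=20):
        self.name = name
        self.assigned_mb = startup_mb   # memory currently hot-added to the VM
        self.maximum_mb = maximum_mb
        self.buffer_pct = buffer_pct    # headroom kept above current demand

    def balance(self, demand_mb):
        """Grow or shrink assigned memory towards demand plus the buffer."""
        target = min(self.maximum_mb, int(demand_mb * (1 + self.buffer_pct / 100)))
        if target > self.assigned_mb:
            self.assigned_mb = target   # hot-add under memory pressure
        elif target < self.assigned_mb:
            # reclaim gradually, in case the user reopens an application
            self.assigned_mb -= (self.assigned_mb - target) // 2
        return self.assigned_mb

vm = VirtualMachine("win7-desktop")
for demand in (400, 900, 1500, 700, 450):   # MB of in-guest demand over time
    print(demand, "->", vm.balance(demand))
```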

More Windows 7 VMs per logical CPU

Dynamic memory is just one of the new virtualisation features in Windows Server 2008 R2 service pack 1. Another is a new support limit of 12 VMs per logical processor for exclusively Windows 7 workloads (it remains at 8 for other workloads). And Windows 7 service pack 1 includes the necessary client-side components to take advantage of the server-side improvements.
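
The arithmetic behind that support statement is simple enough – a quick sketch (the host size is illustrative):

```python
# Quick check of the supported-density arithmetic described above: 12 VMs per
# logical processor for exclusively Windows 7 workloads under SP1, 8:1 otherwise.
def max_supported_vms(logical_processors, windows7_only=True):
    ratio = 12 if windows7_only else 8
    return logical_processors * ratio

print(max_supported_vms(16))         # 192 Windows 7 VMs on a 16-logical-processor host
print(max_supported_vms(16, False))  # 128 for mixed workloads
```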

RemoteFX

The other major improvement in Windows Server 2008 R2 service pack 1 is RemoteFX. This is a server-side graphics acceleration technology. Due to improvements in the Remote Desktop Protocol (RDP), now at version 7.1, Microsoft is able to provide a more efficient encode/decode pipeline, together with enhanced USB redirection including support for phones, audio, webcams, etc. – all inside an RDP session.

Most of the RemoteFX benefits apply to VDI scenarios but one part also benefits session virtualisation (previously known as Terminal Services) – that’s the RDP encode/decode pipeline which Microsoft says is a game changer.

Microsoft has always claimed that Hyper-V’s architecture makes it scalable, with no device drivers inside the hypervisor (native device drivers only exist in the parent partition) and a VMBus used for communications between virtual machines and the parent partition. Using this approach, virtual machines can now use a virtual GPU driver to provide the Direct3D or DirectX capabilities that are required by some modern applications – e.g. certain Silverlight or Internet Explorer 9 features. Using the GPU installed in the server, RemoteFX allows VMs to request content via the virtual GPU and the VMBus, render it on the physical GPU, and pass the results back to the VM.

The new RemoteFX encode/decode pipeline uses a render, capture and compress (RCC) process to render on the GPU but to encode the protocol using either the GPU, CPU or an application-specific integrated circuit (ASIC). Using an ASIC is analogous to TCP offloading in that there is no work required by the CPU. There’s also a decode ASIC – so clients can use RDP 7.1 in an ultra-thin client package (a solid-state ASIC) with RemoteFX decoding.
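
As a conceptual sketch only (my interpretation of the pipeline, not Microsoft code), the encode path selection might be pictured like this – prefer the ASIC, then the GPU, then fall back to the CPU:

```python
# Conceptual sketch of the render, capture and compress (RCC) pipeline described
# above - my interpretation, not Microsoft code. Frames are rendered on the host
# GPU, then encoded on whichever resource is available: ASIC, GPU or CPU.
def pick_encoder(has_asic, gpu_headroom, cpu_headroom):
    """Prefer the ASIC (no CPU cost, like TCP offload), then GPU, then CPU."""
    if has_asic:
        return "asic"
    return "gpu" if gpu_headroom > cpu_headroom else "cpu"

def deliver_frame(frame, encoder):
    # render -> capture -> compress, then send over RDP 7.1 to the client, which
    # decodes in software or on a decode ASIC in an ultra-thin client device
    return {"encoder": encoder, "payload": f"compressed({frame})"}

print(deliver_frame("frame-001", pick_encoder(False, gpu_headroom=0.6, cpu_headroom=0.3)))
```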

Summary

Windows 7 and Windows Server 2008 R2 service pack 1 is mostly a rollup of hotfixes but it also delivers some major virtualisation improvements that should help Microsoft to establish itself as a credible competitor in the VDI space. Of course, the hypervisor is just one part of a complex infrastructure and Microsoft still relies on partners to provide parts of the solution – but by using products like Citrix XenDesktop as a session broker, and tools from AppSense for user state virtualisation, it’s finally possible to deliver a credible VDI solution on the Microsoft stack.

Desktop virtualisation shake-up at Microsoft

This content is 15 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

What an afternoon… for a few days now, I’ve been trying to find out what the big announcement from Microsoft and Citrix would be in the desktop virtualisation hour webcast later today. I was keeping quiet until after the webcast but now the embargo has lifted, Microsoft has issued a press release, and the news is all over the web:

  • Coming in Windows Server 2008 R2 service pack 1 (for which there is no date announced, yet) will be the dynamic memory functionality that was in early releases of Hyper-V for Windows Server 2008 R2 but was later pulled from the product. Also in SP1 will be a new graphics acceleration platform, known as RemoteFX, based on the desktop-remoting technology that Microsoft obtained in 2008 when it acquired Calista Technologies. This allows rich media content to be accessed over the Remote Desktop Protocol, enabling users of virtual desktops and applications to receive a rich 3D, multimedia experience while working remotely.
  • Microsoft and Citrix are offering a “Rescue for VMware VDI” promotion, which allows VMware View customers to trade in up to 500 licenses at no additional cost, and the “VDI Kick Start” promotion, which offers new customers a more than 50 percent discount off the estimated retail price.
  • There are virtualisation licensing changes too: from July, Windows Client Software Assurance customers will no longer have to buy a separate license to access their Windows operating system in a VDI environment, as virtual desktop access rights now will be a Software Assurance (SA) benefit – effectively, if you have SA, you get Windows on screen, no matter what processor it is running on!  There will also be new roaming usage rights and Windows Client Software Assurance and new Virtual Desktop Access (the new name for VECD) customers will have the right to access their virtual Windows desktop and their Microsoft Office applications hosted on VDI technology on secondary, non-corporate network devices, such as home PCs and kiosks.
  • Citrix will ensure that XenDesktop HDX technology will be interoperable with and will extend RemoteFX within 6 months.
  • Oh yes, and Windows XP Mode (i.e. Windows Virtual PC) will no longer require hardware virtualisation technology (although, frankly, I find that piece of news a little less exciting as I’d really like to see Virtual PC replaced by a client-side hypervisor).

Looking for a 64-bit notebook to run a type 1 hypervisor (Hyper-V… or maybe XenClient?)

This content is 15 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Earlier today, someone contacted me via this website and asked for advice on specifying a 64-bit laptop to run Hyper-V. He was confused about which laptops would actually work and I gave the following advice (unfortunately, my e-mail was rejected by his mail server)…

“The main thing to watch out for is the processor specification. If you get the model number (e.g. T7500) and look this up on the Intel website you can see if it has a feature called EM64T or Intel 64 – that is Intel’s implementation of AMD’s x64 technology – and most PCs have it today. Other things you will need for Hyper-V are hardware DEP (Intel XD or AMD NX) and hardware assisted virtualisation (Intel-VT or AMD-V). This last one might catch you out – some quad core chips don’t have the necessary functionality but most dual core chips do (and I’ve heard some reports from people where the chip supports it but there is no option to enable it in the BIOS).

Also, if you’re running 64-bit, driver support can be a pain. Stick with the major manufacturers (Lenovo, Dell, HP) and you should be OK. I was able to get all the drivers I needed for my Fujitsu notebook too.”
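
If it helps, here’s a quick way to check the same three processor features from a Linux live USB on a candidate machine – the flag names are the standard /proc/cpuinfo flags (lm for 64-bit, nx for hardware DEP, vmx/svm for Intel VT/AMD-V) – although, as per the advice above, the BIOS may still need the virtualisation option enabling even when the flag is present:

```python
# Rough helper: check for 64-bit support, hardware DEP and hardware-assisted
# virtualisation from a Linux live environment by reading /proc/cpuinfo.
def cpu_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
print("64-bit (Intel 64 / AMD64):", "lm" in flags)
print("Hardware DEP (XD/NX):     ", "nx" in flags)
print("Intel VT / AMD-V:         ", bool(flags & {"vmx", "svm"}))
```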

If you want to run Hyper-V on a notebook, it’s worth considering that notebook PCs typically have pretty slow hard drives and that can hit performance hard (notebook PCs are not designed to run as servers). Despite feedback indicating that Virtual PC does not provide all the answers, Microsoft doesn’t yet have a decent client-side virtualisation solution for developers, tech enthusiasts and other power users but Citrix have announced something that does look interesting – the XenClient (part of what they call Project Independence), described as:

“[…] a strategic product initiative with partners like Intel, focused on local virtual desktops. We are working together to deliver on our combined vision for the future of desktop computing. This new virtualization solution will extend the benefits of hosted desktop virtualization to millions of mobile workers with the introduction of a new client-side bare metal hypervisor that runs directly on each end user’s laptop or PC.”

You can read more at virtualization.info – and it’s probably worth watching the last 15 minutes from the Synergy day 2 keynote (thanks to Garry Martin for alerting me to this).

Layered on top of XenClient are the management tools to allow organisations to ensure that the corporate desktop remains secure whilst a personal desktop is open, and the scenario where we no longer have a corporate notebook PC (and instead are given an allowance to procure and provide our own IT for work and personal use) suddenly seems a lot more credible. I’m certainly hoping to take a closer look at the XenClient, once I can work out how to get hold of it.

Building a branch office in a box?

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

For many organisations, branch offices are critical to business and often, rather than being a remote backwater, they represent the point of delivery for business. Meanwhile, organisations want to spend less on IT – and, as IT hardware and software prices fall, providing local resources improves performance for end-users. That sounds great until you consider that local IT provision escalates support and administration costs, so it makes more financial sense to deliver centralised services (which have a consequential effect on performance and availability). These conflicting business drivers create a real problem for organisations with a large number of branch offices.

For the last few weeks, I’ve been looking at a branch office consolidation exercise at a global organisation who seem to be suffering from server proliferation. One of the potential solutions for consolidation is using Windows Server 2008 and Hyper-V to provide a virtualised infrastructure – a “branch office in a box”, as Gartner described it in a research note from a few years ago [Gartner RAS Core Research Note G00131307, Joe Skorupa, 14 December 2005]. Windows Server 2008 licensing arrangements for virtualisation allow a server to run up to 4 virtualised operating system environments (with enterprise edition) or a single virtual and a single physical instance (with standard edition). It’s also possible to separate domain-level administration (local domain controllers, etc.) from local applications and infrastructure services (file, print, etc.) but such a solution doesn’t completely resolve the issue of maintaining a branch infrastructure.

Any consolidation at the branch level is a good thing but there’s still the issue of wide area network connectivity which means that, for each branch office, not only are there one or more Windows servers (with a number of virtualised workloads) to consider but also potentially some WAN optimisation hardware (e.g. a Cisco WAAS or a Riverbed Steelhead product).

Whilst I was researching the feasibility of such a solution, I came across a couple of alternative products from Cisco and Citrix which include Microsoft’s technology – and this post attempts to provide a high-level overview of each of them (bear in mind I’m a Windows guy and I’m coming at this from the Windows perspective rather than from a deep networking point of view).

Cisco and Microsoft Windows Server on WAAS

When I found the Windows Server on WAAS website I thought this sounded like the answer to my problem – Windows Server running on a WAN optimisation appliance – the best of both worlds from two of the industry’s largest names, who may compete in some areas but still have an alliance partnership. In a video produced as part of the joint Cisco and Microsoft announcement of the Windows on WAAS solution, Cisco’s Vice President Marketing for Enterprise Solutions, Paul McNab, claims that this solution allows key Windows services to be placed locally at a reduced cost whilst providing increased flexibility for IT service provision; whilst Microsoft’s Bill Hilf, General Manager for Windows Server marketing and platform strategy, outlines how the branch office market is growing as workforces become more distributed and that the Windows on WAAS solution combines Windows Server IT services with Cisco WAAS’ WAN optimisation, reducing costs relating to infrastructure management and power usage whilst improving the user experience as services are brought closer to the user.

It all sounds good – so how does this solution work?

  • Windows on WAAS is an appliance-based solution which uses virtualisation technologies for Cisco WAAS and Microsoft Windows Server 2008 to run on a shared platform, combined with the advantages of rapid device provisioning. Whilst virtualisation in the datacentre has allowed consolidation, at the branch level the benefit is potentially the ability to reconfigure hardware without a refresh or even a visit from a technician.
  • Windows Server 2008 is used in server core installation mode to provide a reduced Windows Server footprint, with increased security and fewer patches to apply, whilst taking advantage of other Windows Server 2008 enhancements, such as improved SMB performance, a new TCP/IP stack, and read-only domain controllers for increased directory security at the branch.
  • On the WAAS side, Cisco cite improved application performance for TCP-based applications – typically 3-10 times better (and sometimes considerably more) as well as WAN bandwidth usage reduction and the ability to prioritise traffic.
  • Meanwhile, running services such as logon and printing locally means that end user productivity is increased.

Unfortunately, as I began to dig a little deeper (including a really interesting call with one of Cisco’s datacentre product specialists), it seems that this solution is constrained in a number of ways and so might not allow the complete eradication of Windows Server at the branch office.

Firstly, this is not a full Windows Server 2008 server core solution – only four roles are supported: Active Directory Domain Services; DHCP server; DNS server and Print services. Other services are neither supported, nor recommended – and the hardware specifications for the appliances are more akin to PCs (single PSU, etc.) than to servers.

It’s also two distinct solutions – Windows runs in a (KVM) virtual machine to provide local services to the branch and WAAS handles the network acceleration side of things – greatly improved with the v4.1 software release.

On the face of it (and remember I’m a Windows guy) the network acceleration sounds good – with three main methods employed:

  1. Improve native TCP performance (which Microsoft claim Windows Server 2008 does already) by quickly moving to a larger TCP window size and then lessening the flow once it reaches the point of data loss (see the sketch after this list).
  2. Generic caching and compression.
  3. Application-specific acceleration for HTTP, MAPI, CIFS and NFS (but no native packet shaping capability).
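
As a toy illustration of the first of those techniques (not Cisco’s actual algorithm), the idea is to ramp the window up quickly to fill a high-latency link and then ease the flow back when loss is detected, rather than restarting from a small window:

```python
# Toy illustration of technique 1 above - grow the TCP window aggressively to fill
# a high-latency WAN link, then back off on loss. Not Cisco's actual algorithm.
def next_window(window_kb, loss_detected, ceiling_kb=8192):
    if loss_detected:
        return max(64, window_kb // 2)      # ease the flow back after loss
    return min(ceiling_kb, window_kb * 2)   # ramp up quickly towards a large window

window = 64
for loss in (False, False, False, False, True, False):
    window = next_window(window, loss)
    print(window)
```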

All of this comes without the need to make any modifications to the existing network – no tunnelling and no TCP header changes – so the existing quality of service (QoS) and network security policies in place are unaffected by the intervening network acceleration (as long as there’s not another network provider between the branch and the hub with conflicting priorities).

From a support perspective, Windows on WAAS is included in the SVVP (Microsoft’s Server Virtualization Validation Program) and so is supported by Microsoft, but KVM will be a new technology for many organisations and there’s also a potential management issue as it’s my understanding that Cisco’s virtual blade technology (e.g. Windows on WAAS) does not yet support centralised management or third-party management solutions.

Windows on WAAS is not inexpensive either (around $6,500 list price for a basic WAAS solution, plus another $2,000 for Windows on WAAS, and a further $1,500 if you buy the Windows licenses from Cisco). Add in the cost of the hardware – and the Cisco support from year 2 onwards – and you could buy (and maintain) quite a few Windows Servers in the branch. Of course this is not about cheap access to Windows services – the potential benefits of this solution are much broader – but it’s worth noting that if the network is controlled by a third party then WAN optimisation may not be practical either (for the reasons I alluded to above – if their WAN optimisation/prioritisation conflicts with yours, the net result is unlikely to be improved performance).

As for competitive solutions, Cisco don’t even regard Citrix (more on them in a moment) as a serious player – from the Cisco perspective the main competition is Riverbed. I didn’t examine Riverbed’s appliances in this study because I was looking for solutions which supported native Windows services (Riverbed’s main focus is wide area application services and their wide area file services are not developed, supported or licensed by Microsoft, so will make uncomfortable bedfellows for many Windows administrators).

When I pressed Cisco for comment on Citrix’s solution, they made the point that WAN optimisation is not yet a mature market and it currently has half a dozen or more vendors competing, whilst history in other markets (e.g. SAN fabrics) would suggest that there will be a lot of consolidation before these solutions reach maturity (i.e. expect some vendors to fall by the wayside).

Citrix Branch Repeater/WANScaler

The Citrix Branch Repeater looks at the branch office problem from a different perspective – and, not surprisingly, that perspective is server-based computing, pairing with Citrix WANScaler in the datacentre. Originally based around Linux, Citrix now offer Branch Repeaters based on Windows Server.

When I spoke to one of Citrix’s product specialists in the UK, he explained to me that the WANScaler technologies used by the Branch Repeater include:

  1. Transparency – the header is left in place so there are no third-party network changes and there is no need to change QoS policies, firewall rules, etc.
  2. Flow control – similar to the Cisco WAAS algorithm (although, somewhat predictably, Citrix claim that their solution is slightly better than Cisco’s).
  3. Application support for CIFS, MAPI, TCP and, uniquely, ICA.

Whereas Cisco advocate turning off the ICA compression in order to compress at the TCP level, ICA is Citrix’s own protocol and they are able to use channel optimisation techniques to provide QoS on particular channels (ICA supports 32 channels in its client-server communications – e.g. mouse, keyboard, screen refresh, etc.) so that, for example, printing can be allowed to take a few seconds to cross the network but mouse, keyboard and screen updates must be maintained in near-real time. In the future, Citrix intend to extend this with cross-session ICA compression in order to use the binary history to reduce the volume of data transferred.
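
Conceptually, that per-channel prioritisation looks something like the sketch below – the channel names and priority values are purely illustrative, not Citrix’s real ICA channel map:

```python
# Conceptual sketch of per-channel QoS - illustrative channel names and priorities
# only, not Citrix's real ICA channel definitions.
ICA_CHANNEL_PRIORITY = {
    "screen_update": 0,   # lowest number = highest priority
    "keyboard": 0,
    "mouse": 0,
    "clipboard": 2,
    "printing": 3,        # allowed to take a few seconds to cross the WAN
}

def schedule(packets):
    """Send interactive traffic ahead of bulk traffic such as print jobs."""
    return sorted(packets, key=lambda p: ICA_CHANNEL_PRIORITY.get(p["channel"], 1))

queue = [{"channel": "printing", "bytes": 200_000},
         {"channel": "keyboard", "bytes": 12},
         {"channel": "screen_update", "bytes": 4_096}]
for packet in schedule(queue):
    print(packet["channel"])
```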

The Linux and Windows-based WANScalers are interoperable and, at the branch end, Citrix offers client software that mimics an appliance (e.g. for home-based workers) or various sizes of Branch Repeater with differing throughput capabilities running a complete Windows Server 2003 installation (not 2008) with the option of a built-in Microsoft ISA Server 2006 firewall and web caching server.

When I asked Citrix who they see as competition, they highlighted that only two companies have licensed Windows for use in an appliance (Citrix and Cisco) – so it seems that Citrix see Cisco as the competition in the branch office server/WAN optimisation appliance market – even if Cisco are not bothered about Citrix!

Summary

There is no clear “one size fits all” solution here and the Cisco Windows on WAAS and Citrix WANScaler solutions each provide significant benefits, albeit with a cost attached. When choosing a solution, it’s also important to consider the network traffic profile – including the protocols in use. The two vendors each come from a slightly different direction: in the case of Cisco this is clearly a piece of networking hardware and software which happens to run a version of Windows; and, for Citrix, the ability to manipulate ICA traffic for server-based computing scenarios is their strength.

In some cases neither the Cisco nor the Citrix solution will be cost effective and, if a third party manages the network, they may not even be able to provide any WAN optimisation benefits. This is why, in my customer scenario, the recommendation was to investigate the use of virtualisation to consolidate various physical servers onto a single Windows Server 2008 “branch office in a box”.

Finally, if such a project is still a little way off, then it may be worth taking a look at the BranchCache technology which is expected to be included within Windows Server 2008 R2. I’ll follow up with more information on this technology later.