Short takes: refreshing all the fields in a Word document; fixing the spacing after a table in Word

This content is 10 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

More snippets of info from the last few weeks… this time with a focus on Word…

Refreshing all the fields in a Word 2013 document

I was writing a pretty sizeable document recently, with many tens of tables, a few figures and lots of cross-references, so I wanted to be able to update all the fields in one fell swoop. Well, it turns out to be remarkably easy, if not immediately obvious, in Word 2013 (and it seems to work for older versions too). Just go to Print Preview and the fields will be updated! You’ll still need to manually update tables of contents, etc. if you’ve added/removed sections, but all the other fields in the document will be taken care of.

Fixing the spacing after a table in Word

Another challenge I had with my document was that it included a lot of tables, and after each table the line that followed sat too close. If I included a blank line, the gap was too big (and anyway, that’s not the right answer); and if I edited the Normal style then it would affect the rest of the document.

I found some suggestions in a post from Allen Wyatt. The first was to amend the table positioning and set top and bottom spacing but that involves letting text flow around the table (and potentially tables floating off around the document in the same way as so many pictures do…). The simpler approach was to create a new style, based on Normal, called After Table, which has the appropriate paragraph spacing set. No more ghastly gaps and dodgy new lines – instead I just use the After Table style on the paragraph immediately after each table.

An approach to enabling Office 365 features and functionality using group membership


For large enterprises with a mature approach to IT services, the idea of managing access to features and functionality in Office 365 via a web portal is a step backwards. Service desk teams may be given specific instructions and limited access in order to carry out just the tasks that they need to. Arguably that’s not “may be given” but “should only be given”…

One of my customers uses Active Directory groups to assign access to software – for example Project, or Visio – applications that are not universally available. We were talking about doing something similar for Office 365 features and functionality – i.e. adding a user to an Active Directory group to enable an element of their Office 365 subscription (the users are synchronised from the on-premises AD to Azure AD).

I suggested writing a PowerShell script to run as a scheduled task, querying the membership of a particular group, and then making the changes in Office 365 to enable particular features. We could use it, for example, to enable a feature like OneDrive for Business for just a subset of users; or to assign Project Online or Visio Online licenses.

Well, it turns out I’m no innovator here and it’s already being done elsewhere – Office 365 MVP Johan Dahlbom has published his script at the 365 lab.  I haven’t run the script yet… but it certainly proves the concept and gives us a starting point…
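As a rough sketch of how such a script might hang together (this uses the older MSOnline module; the group name, usage location and AccountSkuId are made-up examples – check yours with Get-MsolAccountSku):

```powershell
# Sketch only: licence members of an AD group for Visio Online.
# "Office365-Visio" and "tenantname:VISIOCLIENT" are example values.
Import-Module ActiveDirectory
Connect-MsolService   # prompts for Office 365 admin credentials

$sku = "tenantname:VISIOCLIENT"
Get-ADGroupMember -Identity "Office365-Visio" -Recursive |
    Get-ADUser -Properties UserPrincipalName |
    ForEach-Object {
        $user = Get-MsolUser -UserPrincipalName $_.UserPrincipalName
        if ($user.Licenses.AccountSkuId -notcontains $sku) {
            # A usage location must be set before a licence can be assigned
            Set-MsolUser -UserPrincipalName $user.UserPrincipalName -UsageLocation GB
            Set-MsolUserLicense -UserPrincipalName $user.UserPrincipalName -AddLicenses $sku
        }
    }
```

Run as a scheduled task, this would pick up new group members on each pass; a production version would also need to handle removals and errors.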

In which geographical region is my Office 365 tenant hosted?


Yesterday, I wrote about some considerations for naming an Office 365 tenant and I mentioned that the name was the second of two important things to think about.

For many customers in Europe, the question of where in the world their Office 365 tenant is homed is crucial. Without going into the whys and wherefores (which are too big a can of worms for this blog post) us Europeans generally need our data to be in European datacentres (by law).

The region in which the tenant is created is set when you sign up for Office 365, by picking the country associated with your account. At sign-up, you’re told that the country is locked (it can’t be changed later) and that it determines:

  • The services you can use.
  • The billing currency.
  • The closest datacentre.

Actually, that’s not quite the whole story: the services available can be set at user level (according to their location); and the “closest datacentre” part really works via DNS, which routes you to the nearest datacentre and then across Microsoft’s network to the final destination (at least for Exchange Online).

There are also some services (notably Yammer) for which there is no hosting outside the United States.

But what if you didn’t create the tenant? In many large organisations someone may already have created a companyname.onmicrosoft.com (where companyname is the tenant name) and, as the tenant name can’t be changed either, you need to be sure that it is suitable for use rather than just starting over again.

Checking where your tenant is hosted

I spent some time looking at ways to see where a given tenant is hosted and here are a few methods I found.

In PowerShell (after remoting to Exchange Online) and using Get-OrganizationalUnit and Get-OrganizationConfig I found:

  • The OrganizationalUnit was listed as eurpr02a001.prod.outlook.com/Microsoft Exchange Hosted Organizations/markwilson.onmicrosoft.com
  • The OrganizationId was EURPR02A001.prod.outlook.com/Microsoft Exchange Hosted Organizations/markwilson.onmicrosoft.com – EURPR02A001.prod.outlook.com/ConfigurationUnits/markwilson.onmicrosoft.com/Configuration
  • The DistinguishedName was CN=Configuration,CN=markwilson.onmicrosoft.com,CN=ConfigurationUnits,DC=EURPR02A001,DC=prod,DC=outlook,DC=com
  • The ObjectCategory was EURPR02A001.prod.outlook.com/Configuration/Schema/ms-Exch-Configuration-Unit-Container
  • The OriginatingServer was AMSPR02A001DC01.EURPR02A001.prod.outlook.com

I don’t know Microsoft’s naming standards but I’d be willing to place a small bet that EUR is Europe and AMS is Amsterdam.
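Rather than reading those values by eye, the forest name can be pulled out of the DistinguishedName with a little string handling – a sketch, to be run in a remote Exchange Online PowerShell session:

```powershell
# Sketch: extract the forest name from the organisation's DistinguishedName
# and use its first three letters as a rough region hint.
$org = Get-OrganizationConfig
$dcParts = $org.DistinguishedName -split ',' | Where-Object { $_ -like 'DC=*' }
$forest = ($dcParts -replace '^DC=') -join '.'
$forest                  # EURPR02A001.prod.outlook.com in my case
$forest.Substring(0, 3)  # EUR
```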

Looking at the message headers on an email I’d received, I saw it passed through various servers until ultimately it got to AMSPR02MB246.eurprd02.prod.outlook.com and DB3PR02MB252.eurprd02.prod.outlook.com (mail servers in Amsterdam and Dublin? Certainly in Europe).

Also, Get-MsolCompanyInformation tells me that the CountryLetterCode is GB (Great Britain):
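For reference, that’s just the following (assuming the MSOnline module is installed):

```powershell
# The country code chosen at sign-up
Connect-MsolService
(Get-MsolCompanyInformation).CountryLetterCode   # GB in my case
```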

This is also visible in the Office 365 Admin Center under the company profile (where GB has been translated to United Kingdom… which is not the same as Great Britain but is close enough in this case).

With a combination of the above, I think I can be pretty sure that my tenant is in Europe!

Further information

There’s some interesting reading on the Microsoft Online Services: Where is my data? page, including links to data maps (like this one for Europe).

Take care when naming your Office 365 tenant


When signing up for Office 365 there are two really important decisions:

  1. Where will my tenant be homed (based on the region selected at sign-up)?
  2. What will my tenant be called?

The reason these are so important is that, once set, they cannot be changed.

I’ll write more about the home location of the tenant in a future post (why it matters in some ways, and why it doesn’t in others) but, for now, let’s look at the tenant name.

To explain what this is, when you sign up for Office 365, a new tenant is created and given a name in the form tenantname.onmicrosoft.com.

By default, users log on with username@tenantname.onmicrosoft.com and that becomes their email address too. Other domain names can be added to the tenant (I have markwilson.onmicrosoft.com but I’ve also associated the markwilson.co.uk, markwilson.it, wilsonfamily.org.uk and several other domain names with the tenant) after which the user name can be changed accordingly, as well as the email addresses. The initial tenantname.onmicrosoft.com name can’t be removed though.

Indeed, once a custom domain is in use, just about the only place you’ll still see the tenant name is in the URIs for SharePoint Online and OneDrive for Business, which are tenantname.sharepoint.com and tenantname-my.sharepoint.com respectively.

Pick your name carefully. What seems a good idea today may not seem so good further down the road. I recently worked with a customer who now regrets naming their Active Directory NT domain after the town where they are based, but it’s too risky to change it now. Similarly, your Office 365 tenant name (which is actually your Microsoft Online Services tenant name) needs to be chosen with care. Personally, I’d avoid putting 365 in it as the scope is potentially much broader. companyname.onmicrosoft.com is probably about the safest bet, at least until your company merges with another! Or you could go with something completely ambiguous like ABC123…

Further reading

About your initial onmicrosoft.com domain in Office 365.

Turning to a PAL when troubleshooting Exchange…


I recently found myself working with a customer to troubleshoot issues with their Exchange Servers. The servers were losing contact with each other periodically and then doing what Exchange is designed to do in this circumstance: failing databases over to other servers. Whilst I’m not going to go into the exact details here, for reasons of confidentiality, the experience completely underlined my opinion (backed up by Microsoft’s Preferred Architecture for Exchange) that virtualised Exchange solutions using SAN storage are overly complex and will cause issues at some point…

After going through the basics (are all the servers patched up to date, including consistent BIOS and firmware for all disks, controllers, NICs, etc.; has anti-virus software been disabled on the servers, especially host intrusion protection features; is the virtual infrastructure correctly configured, especially virtual networks; are there any tell-tale issues in the event logs?), I contacted my former colleague, Fujitsu Distinguished Engineer and Exchange Master, Nick Parlow (who fixes people’s broken Exchange solutions for a living).  Nick gave me some very useful advice for troubleshooting Exchange that I’m going to share here…

A few more points that might be useful:

Mobile broadband: not just about where people live but about where people go


As Britain enters the traditional school summer holiday season, hundreds of thousands of people will travel to the coast, to our national parks, to beautiful areas of the countryside. After the inevitable traffic chaos will come the cries from teenagers who can’t get on the Internet. And from one or two adults too.

This summer, my holidays will be in Swanage, where I can get a decent 3G signal (probably 4G too – I haven’t been down for a while) but, outside our towns and cities, the likelihood of getting a mobile broadband connection (if indeed any broadband connection) is pretty slim.  I’m not going to get started on the rural broadband/fibre to the home or cabinet discussion but mobile broadband is supposed to fill the gaps. Unfortunately that’s a long way from reality.

Rural idyll and technological isolation

Last half-term, my family stayed in the South Hams, in Devon. It’s a beautiful part of the country where even A roads are barely wide enough for buses and small trucks to traverse and the pace of life is delightfully laid back.  Our holiday home didn’t have broadband, but we had three smartphones with us – one on Giffgaff (O2), one on Vodafone, one on EE. None could pick up any more than a GPRS signal.

In the past, it’s been good for me to disconnect from the ‘net on holiday, to switch off from social media, to escape from email. This time though, I noticed a change in the way that tourist attractions are marketed. The leaflets and pamphlets no longer provide all the information you need to book a trip and access to a website is assumed. After all, this is 2015 not 1975. Whilst planning a circular tour from Dartmouth by boat to Totnes, bus to Paignton and steam train back to Kingswear (a short ferry ride from our origin at Dartmouth) I was directed to use the website to check which services to use (as the river is tidal and the times can vary accordingly) but I couldn’t use the ‘net – I had no connection.

To find the information I needed, I used another function on my phone – making a telephone call – whilst for other online requirements I drove to Dartmouth or Kingsbridge. I even picked up a 4G connection in Kingsbridge, downloading my podcasts in seconds – what a contrast just 8 miles (and a 45 minute round trip!) makes.

A new communications role for the village pub

Public houses have always been an important link in rural connectivity (physically, geographically, socially) and Wi-Fi is now providing a technological angle to the pub’s role in the community. From my perspective, a pint whilst perusing the ‘net is not a bad thing. Both the village pubs in Slapton had Wi-Fi (I’ll admit to standing outside one of them before opening hours to get on the Internet one day!) and whilst visiting Hope Cove I borrowed pub Wi-Fi to tweet to Joe Baguley, who I knew visited often (by chance, he was there too!).

Indeed, it was a tweet from Joe, spotted when I got home, that inspired me to write this post:

No better in the Home Counties

It’s not just on holiday though… I live in a small market town close to where Northamptonshire, Bedfordshire and Buckinghamshire (OK, Milton Keynes) meet. Despite living on a hill, and my house being of 1990s construction (so no thick walls here), EE won’t even let me reliably make a phone call. This is from the network which markets itself as

“the UK’s biggest, fastest and most reliable mobile network today”

3G is available in parts of town if some microwaves stretch to us from a neighbouring cell but consider that we’re only 58 miles from London. This is not the back of beyond…

Then think about travelling by train. On my commute from Bedford or Milton Keynes to London there is no Wi-Fi, patchy 3G, and it’s impossible to work. The longer-distance journeys I used to make to Manchester were better as I could use the on-train Wi-Fi, but that comes at a cost.

Broadband is part of our national infrastructure, just like telephone networks, roads and railways. Fibre is slowly reaching out and increasing access speeds in people’s homes but mobile broadband is increasingly important in our daily lives. Understandably, the private enterprises that operate the mobile networks focus on the major towns and cities (where most people live). But they also need to think about the devices we use – the mobile devices – and consider how to address requirements outside those cities, in the places where the people go.

Complete Hudl wipe (with reflash) to revert to factory settings


Last year, I wrote about removing the Tesco customisations from my Hudl and, for a while, it worked well. Over time I found that the device slowed down and I suspect it may have been infected with some malware (I did use it to visit some of the murkier parts of the Internet, not realising how prone to malware Android is… I now have Malwarebytes installed).

Stuck in a 1.3.1 update loop

I also found I couldn’t install the latest updates that Tesco have provided (sadly not KitKat, but at least some bug fixes) and the device had sat on a shelf, unused and unloved, for several months.

Following Tesco’s advice, I performed a factory reset via the Android System Recovery (Power and Volume up) but it turns out that only wipes the data/cache, after which I was stuck in a loop: the Hudl’s initial setup wanted to install the latest update, which failed, and it couldn’t go any further. It was time to go geek and re-flash the device.

(re)Flash!

[Ah ah, saviour of the universe]

I downloaded the Stock Hudl ROM and the Rockchip Batch Tool (RKTools) using Paul O’Brien’s advice on MoDaCo. The next problem was getting the RKTools to recognise the Hudl to reflash it.

My initial attempts were on a Windows 8 machine, then I tried Windows 7 instead (both were x64 versions) – I’m not sure if this made any difference though and it was an Archos tablet forum post that got me moving in the right direction:

  1. Hold the reset and volume up buttons for about 3 seconds (it’s a bit trial and error) – you need a completely black screen – if the battery icon or the Android “open belly” graphic is showing then the device is not in the correct mode.
  2. If RKTools needs drivers, they are supplied with the software. If all is well then you should see a Rockusb device in Device Manager instead of an Android device with a warning triangle:

After this, RKtools could see my Hudl (green icon indicating a connected device) and I restored the stock Tesco firmware, as shown in the next few screenshots:


Setup

After re-flashing, running through the setup was just as when the Hudl was new. After connecting to Wi-Fi, signing in to Google, etc. it will look for updates. This time the 1.3.1 update should be successful (the problem with that update was caused by the SystemUI.apk file that was replaced when removing the Tesco customisations).

I might remove the Tesco [T] from the bottom of the screen again one day but for now the device is working more reliably for me with normal apps like Kindle, Facebook, iPlayer, etc. and I only really need something I can watch a bit of telly/social media/read a book on!

Short takes: PowerShell to examine Azure AD tenants; check which Office 365 datacentre you’re using


More snippets from my ever-growing browser full of tabs…

PowerShell to get your Azure Active Directory tenant ID

Whilst researching Office 365 tenant names and their significance, I stumbled across some potentially useful PowerShell to read your Azure Active Directory tenant ID.

Get-AzureAccount

I’m not sure how to map that tenant ID (or even the subscription ID, which doesn’t seem to relate to any of my Office 365 subscriptions) to something meaningful… but it might be useful one day…
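If it helps anyone, here’s the fuller sequence (this is the old Azure Service Management module, so treat it as a sketch):

```powershell
# Sign in, then list the IDs associated with the account
Add-AzureAccount
Get-AzureAccount | Select-Object Id, Subscriptions, Tenants
```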

Script to retrieve basic Office 365 tenant details

I also found a script on the Microsoft TechNet Gallery to retrieve basic details for an Office 365 tenant (using Get-MsolDomain, Get-MsolAccountSku and Get-MsolRole).

Check which datacentre you’re connected to in Exchange Online

Office 365 uses some smart DNS tricks to point you at the nearest Microsoft datacentre and then route requests across the Microsoft network. If you want to try this out and see where your requests are heading, just ping outlook.office365.com and see the name of the host that replies:
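On Windows, you can do this from PowerShell; Resolve-DnsName (in the DnsClient module) also shows the CNAME chain without needing ICMP:

```powershell
# The host that answers hints at the datacentre (AMS = Amsterdam?, DB3 = Dublin?)
ping outlook.office365.com

# Or follow the DNS chain explicitly
Resolve-DnsName outlook.office365.com
```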

Getting started with Yammer


Yammer has been around for years but a while back it was purchased by Microsoft. It’s kind of like Facebook, but for businesses, and now that it’s included in certain Office 365 plans, I’m increasingly finding myself talking to customers about it as they look to see what it can offer.

The thing about Yammer though, is that it’s (still) not very tightly integrated with Office 365.  It’s getting closer in that Yammer can be used to replace the SharePoint newsfeed and that you can now log on to Yammer with Office 365 credentials (avoiding the need to have a directory synchronisation connector with an on-premises Active Directory) but it’s still very loosely coupled.

Yammer has a stealth model for building a user base or, as Microsoft puts it, the “unique adoption model” of Yammer allows organisations to become social. Most companies will already be using it in pockets, under the radar of the IT department (or at least without their explicit consent) because all that’s needed to sign up to Yammer is an email address.

As soon as two or more people with the same domain name sign up, you have a network, in Yammer terminology.

Yammer Basic

The free Yammer Basic service allows people to communicate within a network, structured around an activity feed – a rich microblog to track what colleagues are doing, get instant feedback in running conversations, and share documents and information about the projects people are working on. Users can like posts, reply, use hashtags/topics for social linking, flag a post and mention someone in a reply. They can also create/respond to polls to get ad-hoc opinion on an issue.

Yammer Groups allow for scoped topics of conversation – for example around a project, or a social activity. Users can follow other users or groups to select information that’s interesting to them – and Yammer will suggest people/groups to follow.

Yammer Enterprise

When an organisation is ready to adopt and manage Yammer centrally, they can add IT controls (essentially, bring it under control of an administrator who controls the creation of groups, membership of those groups, and many other settings).  This is done by upgrading to the Yammer Enterprise product, either as a standalone service or, more typically these days, integrated with an Office 365 subscription (typically an enterprise plan, but other plans are available).

In theory, activating Yammer on your Office 365 subscription is a simple step (described by Jethro Segers in his Office 365 tip of the day post). Unfortunately, when I tried with a customer last week it took over an hour (with the page telling me it would take between 1 and 30 minutes), so be patient! It also needs the domain name to be verified in the tenant, which may already be the case for other services, or may require some additional steps. The whole process is described in a Microsoft blog post from the French SharePoint GBS team.

Each Yammer network has its own URI. In the case of a company network it will be yammer.com/companyname, based on the email address used to create the network. If multiple domain names are in use, they can be linked to the same network but the network will always be private to the company that owns it. Also, I found that one of my customers had two domains registered in Office 365, so we used the one associated with the parent (holding) company for Yammer and, until we repeat the process to bring in the other domains, users are authenticating with their @holdingcompanyname.com addresses.

External networks

External networks can be created for collaboration outside the company – e.g. with business partners – and I have started to create them with my customers for collaborating around projects, getting them used to using Yammer as we work together on an Office 365 pilot, for example. Access to external networks is by invitation only, but can include users from multiple organisations. Each external network has a parent, which retains overall control.

Wrap-up

In this post, I’ve described the basics of Yammer. I’m sure there are many other elements to consider but this is enough to be getting started with. I’m sure I’ll post some more as I find answers to my customers’ questions over the coming months and some of my colleagues hosted a webcast on Yammer recently, which can be viewed below:

For more information, check out:

Some design principles for Microsoft Exchange


In a previous role, I managed a team that was responsible for Microsoft Exchange design. Working with Microsoft, we established a set of design principles, some template designs and a rule-book for any changes made to those designs. Soon afterwards, Microsoft published their Preferred Architecture for Exchange and the similarity was striking (to be honest, I’d have been concerned if it was different)!

As that Preferred Architecture is publicly available, I think it’s fine to talk about some of the principles we applied. They seem particularly pertinent now as I was recently working with a customer whose Exchange design doesn’t follow these… and they were experiencing a few difficulties that were affecting the stability of their systems…

Physical, not virtual

Virtualisation has many advantages in many circumstances but it is not a “silver bullet”. It also brings complexity and operational challenges into Exchange design, with few (if any) advantages that would not be already provided by Exchange out of the box. Exchange is designed to make full use of the available hardware and Microsoft is able to provide large, low cost mailboxes within Office 365 (Exchange Online), without a requirement to virtualise their Exchange 2013 platform. In addition to the operational and supportability complexities that virtualisation brings, virtualising the Exchange deployment requires more Exchange design effort.

Deploy multi-role Exchange servers

Microsoft’s current recommended practice is to deploy multi-role Exchange 2013 servers (i.e. client access and mailbox roles on the same server) for the following reasons:

  1. Reduced hardware. Multi-role servers make best use of processor capacity given the more powerful server specifications which are now available.
  2. Reduced operational and capital expenditure. Fewer servers to deploy and manage.
  3. Building block design which is simple to deploy and scale. Automated deployment of standard server builds.

The mailbox server role must be designed not to exceed the maximum processor capacity guidance for multi-role servers; this provides confidence that the hardware deployed can co-host all roles on a single server. This is where the Exchange 2013 Server Role Requirements Calculator comes in…

Use direct attached storage – not a SAN

Microsoft designed Exchange 2013 to run on commodity hardware and believes this is the most cost-effective way to provide storage for the Exchange mailbox databases. Changes to the Exchange 2013 storage engine have yielded a significant reduction in I/O over previous versions of Exchange, allowing customers to take advantage of larger, cheaper disks and reduce the overall solution costs. In general, Direct Attached Storage (DAS) should be used in a Just a Bunch of Disks (JBoD) configuration although there are some circumstances where a Redundant Array of Independent Disks (RAID) configuration may be used.

Microsoft uses a commoditised email platform with a DAS and JBoD architecture to provide and support large, low cost mailboxes within Office 365. There are many more solution elements to consider with a SAN (Host Bus Adapters (HBAs), fibre channel switches and SAN I/O modules) as well as additional software for managing the infrastructure and firmware to keep up to date. Consequently, there is an increased likelihood of technical integration issues using a SAN and, once installed, a SAN infrastructure has to be carefully monitored and managed by appropriately skilled staff. In stark contrast, the cost of direct-attached JBoD solutions is falling as larger disks become available.

Native resilience

Database availability groups (DAGs) were introduced in Exchange Server 2010 to replicate databases between up to 16 servers. A DAG with multiple mailbox database copies can provide automatic recovery from a variety of server, storage, network and other hardware failures. Auto-reseed functionality in Exchange Server 2013 automatically brings spare disks online in the event of failure and creates new database copies.

If four highly available copies of each database are deployed, Exchange native resilience can be used without the requirement for third party backup solutions. Only specific requirements (i.e. the ability to recover to an offline datacentre; recovery of a deleted mailbox outside the deleted mailbox retention period; protection against operational immaturity; protection against security breaches, etc.) drive the adoption of a third party backup solution.

Exchange Online uses the Exchange native resilience to protect against database failures, without resorting to the use of third party backup solutions.

Whilst a DAG can support 16 servers, it may be prudent to artificially limit the number of DAG members (e.g. to 12) in order to provide flexibility in upgrade scenarios.
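If you’re reviewing an existing deployment against these principles, the Exchange Management Shell will show the DAG layout and copy health – something like this sketch (“MBX01” is a placeholder server name):

```powershell
# Review DAG membership and witness placement
Get-DatabaseAvailabilityGroup | Format-List Name, Servers, WitnessServer

# Check the health of each database copy on a given server
Get-MailboxDatabaseCopyStatus -Server MBX01 |
    Format-Table Name, Status, CopyQueueLength, ReplayQueueLength -AutoSize
```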

Site resilience

DAGs can be extended between sites and copies of databases replicated across sites to provide additional redundancy. Each member of the DAG must have a round trip latency no greater than 500ms to contact the other members, regardless of their physical location. In general, Exchange DAGs should span at least two physical sites and Microsoft also recommends that separate Active Directory sites are used.
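A quick-and-dirty way to sanity-check the latency between sites (the hostname is a placeholder, and ICMP round-trip time is only a rough proxy for the replication path, so treat this as a sketch):

```powershell
# Average round trip from a DAG member in one site to a member in the other -
# this needs to stay comfortably under the 500ms limit
Test-Connection -ComputerName mbx-site2.example.com -Count 10 |
    Measure-Object -Property ResponseTime -Average
```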

Mailbox distribution

With multiple sites in use, the next consideration is whether both are active (i.e. providing live service) or whether one is a secondary, passive, datacentre (i.e. invoked for disaster recovery purposes). If all active mailboxes are hosted in a single site, and all passive copies reside in a secondary site, the user distribution model is referred to as active/passive. If there are active mailboxes in both primary and secondary datacentres then the user distribution model is referred to as active/active.

This should not be confused with the databases within the DAG, where only one copy of each database is active at any time.

Exchange 2013 simplified the client access model and with all clients connecting using HTTPS an active/active architecture is simple and spreads the client load across all Client Access servers, making best use of the deployed hardware.

This also facilitates a simplified SMTP namespace and allows automatic site failover (assuming the File Share Witness is located in a tertiary datacentre).

Archiving

With today’s storage capabilities, large mailboxes are becoming normal.  The use of a native Exchange 2013 archive or a third party archiving solution is only required where there is a defined need for a user experience that warrants the management of email data (the ‘personal archive’ user experience, e.g. auto archive functionality) or by legal/policy requirements regarding the retention and discovery of email data (‘the regulatory archive’).

There is a common misconception that using a third party archive solution will provide a cost effective, single instance storage solution by differentiating ‘hot’ and ‘cold’ data and providing the ability to store ‘colder’ data on cheaper, slower disks. In fact, introducing a secondary system increases costs and complexity (in design and management) as well as reducing the flexibility of the solution.

Many organisations are electing to leave behind their archives with browser-only access as they migrate to larger online mailboxes in the cloud, e.g. using Exchange Online.

Conclusion

Whilst Exchange is supported in a virtualised environment, with SAN-attached storage, third party backup and email archive solutions, deviating from the Preferred Architecture is a huge risk. The points in this blog post, combined with Microsoft’s advice linked above, highlight the reasons to keep your Exchange design as simple as possible. Whilst a more complex design will probably work, identifying issues when it doesn’t will be a much bigger challenge.