Mobile broadband: not just about where people live but about where people go

As Britain enters the traditional school summer holiday season, hundreds of thousands of people will travel to the coast, to our national parks, to beautiful areas of the countryside. After the inevitable traffic chaos will come the cries from teenagers who can’t get on the Internet. And from one or two adults too.

This summer, my holidays will be in Swanage, where I can get a decent 3G signal (probably 4G too – I haven’t been down for a while) but, outside our towns and cities, the likelihood of getting a mobile broadband connection (if indeed any broadband connection) is pretty slim.  I’m not going to get started on the rural broadband/fibre to the home or cabinet discussion but mobile broadband is supposed to fill the gaps. Unfortunately that’s a long way from reality.

Rural idyll and technological isolation

Last half-term, my family stayed in the South Hams, in Devon. It’s a beautiful part of the country, where even the A roads are barely wide enough for buses and small trucks and the pace of life is delightfully laid back. Our holiday home didn’t have broadband, but we had three smartphones with us – one on Giffgaff (O2), one on Vodafone, one on EE. None could pick up any more than a GPRS signal.

In the past, it’s been good for me to disconnect from the ‘net on holiday, to switch off from social media, to escape from email. This time though, I noticed a change in the way that tourist attractions are marketed. The leaflets and pamphlets no longer provide all the information you need to book a trip and access to a website is assumed. After all, this is 2015 not 1975. Whilst planning a circular tour from Dartmouth by boat to Totnes, bus to Paignton and steam train back to Kingswear (a short ferry ride from our origin at Dartmouth) I was directed to use the website to check which services to use (as the river is tidal and the times can vary accordingly) but I couldn’t use the ‘net – I had no connection.

To find the information I needed, I used another function on my phone – making a telephone call – whilst for other online requirements I drove to Dartmouth or Kingsbridge. I even picked up a 4G connection in Kingsbridge, downloading my podcasts in seconds – what a contrast just 8 miles (and a 45 minute round trip!) makes.

A new communications role for the village pub

Public houses have always been an important link in rural connectivity (physically, geographically, socially) and Wi-Fi is now providing a technological angle to the pub’s role in the community. From my perspective, a pint whilst perusing the ‘net is not a bad thing. Both the village pubs in Slapton had Wi-Fi (I’ll admit standing outside one of them before opening hours to get on the Internet one day!) and, whilst visiting Hope Cove, I borrowed pub Wi-Fi to tweet to Joe Baguley, who I knew visited often (by chance, he was there too!).

Indeed, it was a tweet from Joe, spotted when I got home, that inspired me to write this post:

No better in the Home Counties

It’s not just on holiday though… I live in a small market town close to where Northamptonshire, Bedfordshire and Buckinghamshire (OK, Milton Keynes) meet. Despite living on a hill, and my house being of 1990s construction (so no thick walls here), EE won’t even let me reliably make a phone call. This is from the network which markets itself as

“the UK’s biggest, fastest and most reliable mobile network today”

3G is available in parts of town, if the microwaves stretch to us from a neighbouring cell – but consider that we’re only 58 miles from London. This is not the back of beyond…

Then think about travelling by train. On my commute from Bedford or Milton Keynes to London there is no Wi-Fi and only patchy 3G, so it’s impossible to work. The longer-distance journeys I used to make to Manchester were better, as I could use the on-train Wi-Fi, but that comes at a cost.

Broadband is part of our national infrastructure, just like telephone networks, roads and railways. Fibre is slowly reaching out and increasing access speeds in people’s homes but mobile broadband is increasingly important in our daily lives. Understandably, the private enterprises that operate the mobile networks focus on the major towns and cities (where most people live). But they also need to think about the devices we use – the mobile devices – and consider how to address requirements outside those cities, in the places where the people go.

Complete Hudl wipe (with reflash) to revert to factory settings

Last year, I wrote about removing the Tesco customisations from my Hudl and, for a while, it worked well. Over time I found that the device slowed down and I suspect it may have been infected with some malware (I did use it to visit some of the murkier parts of the Internet, not realising how prone to malware Android is… I now have Malwarebytes installed).

Stuck in a 1.3.1 update loop

I also found I couldn’t install the latest updates that Tesco have provided (sadly not KitKat, but at least some bug fixes) and the device had sat on a shelf, unused and unloved, for several months.

Following Tesco’s advice, I performed a factory reset via the Android System Recovery (Power and Volume up) but it turns out that only wipes the data/cache, after which I was stuck in a loop: the Hudl’s initial setup wanted to install the latest update, failed, and couldn’t go any further. It was time to go geek and re-flash the device.


[Ah ah, saviour of the universe]

I downloaded the Stock Hudl ROM and the Rockchip Batch Tool (RKTools) using Paul O’Brien’s advice on MoDaCo. The next problem was getting the RKTools to recognise the Hudl to reflash it.

My initial attempts were on a Windows 8 machine, then I tried Windows 7 instead (both were x64 versions) – I’m not sure if this made any difference though, and it was an Archos tablet forum post that got me moving in the right direction:

  1. Hold the reset and volume up buttons for about 3 seconds (it’s a bit trial and error) – you need a completely black screen – if the battery icon or the Android on its back is showing then the device is not in the correct mode.
  2. If RKTools needs drivers, they are supplied with the software. If all is well then you should see a Rockusb device in Device Manager instead of an Android device with a warning triangle:

After this, RKTools could see my Hudl (green icon indicating a connected device) and I restored the stock Tesco firmware, as shown in the next few screenshots:



After re-flashing, running through the setup was just as it was when the Hudl was new. After connecting to Wi-Fi, signing in to Google, etc. it will look for updates. This time the 1.3.1 update should be successful (the problem with that update is the SystemUI.apk file that was replaced when removing the Tesco customisations).

I might remove the Tesco [T] from the bottom of the screen again one day but for now the device is working more reliably for me with normal apps like Kindle, Facebook, iPlayer, etc. and I only really need something I can watch a bit of telly/social media/read a book on!

Short takes: PowerShell to examine Azure AD tenants; check which Office 365 datacentre you’re using

More snippets from my ever-growing browser full of tabs…

PowerShell to get your Azure Active Directory tenant ID

Whilst researching Office 365 tenant names and their significance, I stumbled across some potentially useful PowerShell to read your Azure Active Directory tenant ID.
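The code itself hasn’t survived the journey to this page, but a minimal sketch of the approach, assuming the Azure PowerShell module of the time is installed, would be along these lines:

```powershell
# Sign in with your organisational account, then read the tenant ID
# from the subscription details
Add-AzureAccount
Get-AzureSubscription | Select-Object SubscriptionName, SubscriptionId, TenantId
```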


I’m not sure how to map that tenant ID (or even the subscription ID, which doesn’t seem to relate to any of my Office 365 subscriptions) to something meaningful… but it might be useful one day…

Script to retrieve basic Office 365 tenant details

I also found a script on the Microsoft TechNet Gallery to retrieve basic details for an Office 365 tenant (using Get-MsolDomain, Get-MsolAccountSku and Get-MsolRole).
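For reference, the same details can be pulled interactively with a few lines of PowerShell – a sketch assuming the MSOnline module is installed and you have tenant admin credentials:

```powershell
# Connect to the tenant (prompts for credentials)
Import-Module MSOnline
Connect-MsolService

Get-MsolDomain     | Select-Object Name, Status, Authentication
Get-MsolAccountSku | Select-Object AccountSkuId, ActiveUnits, ConsumedUnits
Get-MsolRole       | Select-Object Name, Description
```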

Check which datacentre you’re connected to in Exchange Online

Office 365 uses some smart DNS tricks to point you at the nearest Microsoft datacentre and then route requests across the Microsoft network. If you want to try this out and see where your requests are heading, just ping the service endpoint and see the name of the host that replies:
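At the time of writing, the well-known Exchange Online endpoint was outlook.office365.com (treat the exact hostname as an assumption if your tenant uses another); for example:

```powershell
# The name of the responding host hints at the datacentre location
ping outlook.office365.com

# or follow the CNAME chain in PowerShell (Windows 8/Server 2012 and later)
Resolve-DnsName outlook.office365.com | Select-Object Name, Type, NameHost
```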

Getting started with Yammer

Yammer has been around for years but a while back it was purchased by Microsoft. It’s kind of like Facebook, but for businesses, and now that it’s included in certain Office 365 plans, I’m increasingly finding myself talking to customers about it as they look to see what it can offer.

The thing about Yammer though, is that it’s (still) not very tightly integrated with Office 365.  It’s getting closer in that Yammer can be used to replace the SharePoint newsfeed and that you can now log on to Yammer with Office 365 credentials (avoiding the need to have a directory synchronisation connector with an on-premises Active Directory) but it’s still very loosely coupled.

Yammer has a stealth model for building a user base or, as Microsoft puts it, the “unique adoption model” of Yammer allows organisations to become social. Most companies will already be using it in pockets, under the radar of the IT department (or at least without their explicit consent) because all that’s needed to sign up to Yammer is an email address.

As soon as two or more people with the same domain name sign up, you have a network, in Yammer terminology.

Yammer Basic

The free Yammer Basic service allows people to communicate within a network, structured around an activity feed – a rich microblog where colleagues can track what each other is doing, get instant feedback in running conversations, and share documents and information about the projects they are working on. Users can like posts, reply, use hashtags/topics for social linking, flag a post and mention someone in a reply. They can also create and respond to polls to get ad-hoc opinion on an issue.

Yammer Groups allow for scoped topics of conversation – for example around a project, or a social activity. Users can follow other users or groups to select information that’s interesting to them – and Yammer will suggest people/groups to follow.

Yammer Enterprise

When an organisation is ready to adopt and manage Yammer centrally, they can add IT controls (essentially, bring it under control of an administrator who controls the creation of groups, membership of those groups, and many other settings).  This is done by upgrading to the Yammer Enterprise product, either as a standalone service or, more typically these days, integrated with an Office 365 subscription (typically an enterprise plan, but other plans are available).

In theory, activating Yammer on your Office 365 subscription is a simple step (described by Jethro Segers in his Office 365 tip of the day post). Unfortunately, when I tried with a customer last week it took over an hour (with the page telling me it would take between 1 and 30 minutes), so be patient! It also needs the domain name to be verified in the tenant, which may already be the case for other services, or may require some additional steps. The whole process is described in a Microsoft blog post from the French SharePoint GBS team.

Each Yammer network has its own URI, based on the domain of the email address used to create the network. If multiple domain names are in use, they can be linked to the same network but the network will always be private to the company that owns it. Also, I found that one of my customers had two domains registered in Office 365 so we used the one associated with the parent (holding) company for Yammer and, until we repeat the process to bring in the other domains, users are authenticating with addresses on that domain.

External networks

External networks can be created for collaboration outside the company – for example, with business partners – and I have started to create them with my customers for collaborating around projects, getting them used to using Yammer as we work together on an Office 365 pilot, for example. Access to external networks is by invitation only, but can include users from multiple organisations. Each external network has a parent network, which retains overall control.


In this post, I’ve described the basics of Yammer. I’m sure there are many other elements to consider but this is enough to be getting started with. I’m sure I’ll post some more as I find answers to my customers’ questions over the coming months and some of my colleagues hosted a webcast on Yammer recently, which can be viewed below:


Some design principles for Microsoft Exchange

In a previous role, I managed a team that was responsible for Microsoft Exchange design. Working with Microsoft, we established a set of design principles, some template designs and a rule-book for any changes made to those designs. Soon afterwards, Microsoft published their Preferred Architecture for Exchange and the similarity was striking (to be honest, I’d have been concerned if it was different)!

As that Preferred Architecture is publicly available, I think it’s fine to talk about some of the principles we applied. They seem particularly pertinent now as I was recently working with a customer whose Exchange design doesn’t follow these… and they were experiencing a few difficulties that were affecting the stability of their systems…

Physical, not virtual

Virtualisation has many advantages in many circumstances but it is not a “silver bullet”. It also brings complexity and operational challenges into Exchange design, with few (if any) advantages that would not be already provided by Exchange out of the box. Exchange is designed to make full use of the available hardware and Microsoft is able to provide large, low cost mailboxes within Office 365 (Exchange Online), without a requirement to virtualise their Exchange 2013 platform. In addition to the operational and supportability complexities that virtualisation brings, virtualising the Exchange deployment requires more Exchange design effort.

Deploy multi-role Exchange servers

Microsoft’s current recommended practice is to deploy multi-role Exchange 2013 servers (i.e. client access and mailbox roles on the same server) for the following reasons:

  1. Reduced hardware. Multi-role servers make best use of processor capacity given the more powerful server specifications which are now available.
  2. Reduced operational and capital expenditure. Fewer servers to deploy and manage.
  3. Building block design which is simple to deploy and scale. Automated deployment of standard server builds.

The mailbox server role must be designed not to exceed the maximum processor capacity guidance for multi-role servers; this provides confidence that the hardware deployed can co-host all roles on a single server. This is where the Exchange 2013 Server Role Requirements Calculator comes in…

Use direct attached storage – not a SAN

Microsoft designed Exchange 2013 to run on commodity hardware and believes this is the most cost effective way to provide storage for the Exchange mailbox databases. Changes to the Exchange 2013 storage engine have yielded a significant reduction in I/O over previous versions of Exchange, allowing customers to take advantage of larger, cheaper disks and reduce the overall solution costs. In general, Direct Attached Storage (DAS) should be used in a Just a Bunch of Disks (JBOD) configuration, although there are some circumstances where a Redundant Array of Independent Disks (RAID) configuration may be used.

Microsoft uses a commoditised email platform with a DAS and JBOD architecture to provide and support large, low cost mailboxes within Office 365. There are many more solution elements to consider with a SAN (Host Bus Adapters (HBAs), fibre channel switches and SAN I/O modules) as well as additional software for managing the infrastructure and firmware to keep up-to-date. Consequently, there is an increased likelihood of technical integration issues using a SAN and, once installed, a SAN infrastructure has to be carefully monitored and managed by appropriately skilled staff. In stark contrast, the costs of direct-attached JBOD solutions are falling as larger disks become available.

Native resilience

Database availability groups (DAGs) were introduced in Exchange Server 2010 to replicate databases between up to 16 servers. A DAG with multiple mailbox database copies can provide automatic recovery from a variety of server, storage, network and other hardware failures. Auto-reseed functionality in Exchange Server 2013 automatically brings spare disks online in the event of a failure and creates new database copies.
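As a sketch of what that looks like in the Exchange Management Shell (the server, witness and database names here are hypothetical):

```powershell
# Create a DAG with a witness server, add two mailbox servers as members,
# then add a second copy of a database for resilience
New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer FS1 -WitnessDirectory C:\DAG1
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer EX1
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer EX2
Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer EX2
```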

If four highly available copies of each database are deployed, Exchange native resilience can be used without the requirement for third party backup solutions. Only specific requirements (e.g. the ability to recover to an offline datacentre; recovery of deleted mailboxes outside the deleted mailbox retention time; protection against operational immaturity; protection against security breaches, etc.) drive a requirement for the adoption of a third party backup solution.

Exchange Online uses the Exchange native resilience to protect against database failures, without resorting to the use of third party backup solutions.

Whilst a DAG can support 16 servers, it may be prudent to artificially limit the number of DAG members (e.g. to 12) in order to provide flexibility in upgrade scenarios.

Site resilience

DAGs can be extended between sites and copies of databases replicated across sites to provide additional redundancy. Each member of the DAG must have a round trip latency no greater than 500ms to contact the other members, regardless of their physical location. In general, Exchange DAGs should span at least two physical sites and Microsoft also recommends that separate Active Directory sites are used.

Mailbox distribution

With multiple sites in use, the next consideration is whether both are active (i.e. providing live service) or whether one is a secondary, passive, datacentre (i.e. invoked for disaster recovery purposes). If all active mailboxes are hosted in a single site, and all passive copies of the mailboxes reside in a secondary site, the user distribution model is referred to as active/passive. If there are active mailboxes in both primary and secondary datacentres then the user distribution model is known as active/active.

This should not be confused with the databases within the DAG, where only one copy of each database is active at any time.

Exchange 2013 simplified the client access model and, with all clients connecting using HTTPS, an active/active architecture is simple and spreads the client load across all Client Access servers, making best use of the deployed hardware.

This also facilitates a simplified SMTP namespace and allows automatic site failover (assuming the File Share Witness is located in a tertiary datacentre).


With today’s storage capabilities, large mailboxes are becoming normal.  The use of a native Exchange 2013 archive or a third party archiving solution is only required where there is a defined need for a user experience that warrants the management of email data (the ‘personal archive’ user experience, e.g. auto archive functionality) or by legal/policy requirements regarding the retention and discovery of email data (‘the regulatory archive’).

There is a common misconception that using a third party archive solution will provide a cost effective, single instance storage solution by differentiating ‘hot’ and ‘cold’ data and providing the ability to store ‘colder’ data on cheaper, slower disks. In fact, introducing a secondary system increases costs and complexity (in design and management) as well as reducing the flexibility of the solution.

Many organisations are electing to leave behind their archives with browser-only access as they migrate to larger online mailboxes in the cloud, e.g. using Exchange Online.


Whilst Exchange is supported in a virtualised environment, with SAN-attached storage, third party backup and making use of email archive solutions, deviating from the Preferred Architecture is a huge risk. The points in this blog post, combined with Microsoft’s advice linked above highlight the reasons to keep your Exchange design as simple as possible. Whilst a more complex design will probably work, identifying issues when it doesn’t will be a much bigger challenge.


Importing users to Office 365 from CSV file – username must be in UPN format

Every now and again, I come across a piece of advice on the net from seemingly authoritative sources that’s just plain wrong. Or at least it’s factually correct but doesn’t answer the question that was asked.  One such example was a few weeks ago when I was uploading user details via CSV to bulk provision cloud accounts in Office 365.

The import was failing, telling me that “The user name is not valid. You can only use letters and numbers. No spaces”. Except that’s not really the problem here – we were using the CSV template downloaded from the Office 365 Admin Center and there were no spaces in the user names.

Stupidly, I’d put in just the user names – like MarkWilson – but of course Office 365 user names are in UPN format. What the message could (more helpfully) have said is “The user name is not valid. It should be in the format username@fullyqualifieddomainname”.

Unfortunately, there is a “verified answer” on a Microsoft Community forum post that is incorrect. It tells the original poster to download a blank CSV file from the portal and to populate that, but that’s exactly what they (and I) did. The correct answer (which is a “suggested answer”, but not a “verified answer”) says to include the @domainname in the user name field in the CSV file. In my example, that would be the user name followed by @ and the tenant’s domain name (assuming no other domain names have been associated with the tenant). So far, my requests for Microsoft to get this fixed have failed… here’s hoping that my blog post comes up in the next person’s Google/Bing search…
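By way of illustration, the user name column needs to look something like this (fabrikam.com is a hypothetical tenant domain, and the real template from the Admin Center has further columns):

```powershell
# Write a minimal import file with user names in UPN format
@"
User Name,First Name,Last Name,Display Name
markwilson@fabrikam.com,Mark,Wilson,Mark Wilson
janedoe@fabrikam.com,Jane,Doe,Jane Doe
"@ | Out-File -Encoding ascii .\users.csv
```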

Changing the default app used to open tel: links on Windows

Earlier this morning I had a missed call notification in Outlook. I clicked the number, Windows asked me which app I wanted to open that type of link (a tel: URI) and I clicked the wrong option. All of a sudden I had phone numbers opening in the Skype Windows 8 app rather than in my Skype for Business client (previously the Lync client).

It turns out that it’s a relatively simple change to make but it’s not necessarily obvious that the UI to do this is the one to change file type associations (this is a link, not a file…).

  1. In Control Panel go to Default Programs and then Set Default Programs (the quickest way is to hit the Windows key and type “Default Programs“).
  2. Scroll down to Lync (desktop). Despite the name, this is the Skype for Business desktop client.
  3. Select Lync (desktop) and click Choose defaults for this program:
  4. You’ll see that the URL:Tel Protocol entry is not checked, because it’s associated with Skype:
  5. Select the checkbox next to TEL and click Save:
  6. If you look at the Skype program associations, TEL will now be showing as defaulting to Skype for Business (desktop):

There’s more information in Paul Thurrott’s Windows 8 Tip on Changing File Associations.

Short takes: @ in DNS records; are ‘ and & legal in an email address?; changing the search base for IDfix

A few short items that don’t quite warrant their own blog post…

@ in DNS records

Whilst working with a customer on their Office 365 integration recently, we had a requirement to add various DNS records, including the TXT record for domain verification which included an @ symbol. The DNS provider’s systems didn’t allow us to do this, or to use a space instead to denote the origin of the domain. Try googling for @ and you’ll have some challenges too…

One support call later and we had the answer… use a *. It seemed to do the trick as, soon after that, the Microsoft servers were able to recognise our record and we continued with the domain configuration.
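In zone-file terms, the two entries look like this (the verification value is a placeholder – Microsoft issues the real one per tenant), and with this particular provider the * stood in for the origin:

```
; what we wanted to enter - @ denotes the zone origin
@   IN  TXT  "MS=msXXXXXXXX"
; what the provider would accept instead
*   IN  TXT  "MS=msXXXXXXXX"
```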

Are ‘ and & “legal” in an email address?

Another interesting item that came up was from running the IDfix domain synchronisation error remediation tool to check the on-premises directory before synchronisation.  Some of the objects it flagged as requiring remediation were distribution groups with apostrophes (‘) or ampersands (&) in their SMTP addresses. Fair enough, but that got me wondering how/why those addresses ever worked at all (I once had an argument with someone who alleged that the hyphen in my wife’s domain name was an “illegal” character). Well, it seems that, technically, they are allowable in SMTP (I struggled reading the RFCs, but Wikipedia makes it clearer) but certainly not good practice… and definitely not for synchronisation with Azure AD.

Changing the search base for IDfix

I mentioned the IDfix tool above and, sometimes, running it against a whole domain returns more results than can easily be worked through. As we planned to filter directory synchronisation on certain organizational units (OUs), it made sense to query the domain for issues on those same OUs. This is possible in the settings for IDfix, where the LDAP query for the search base can be changed.
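For example, to limit IDfix to a single OU, the search base can be set to a distinguished name like the following (the OU and domain names here are hypothetical):

```
OU=Office 365 Users,DC=fabrikam,DC=com
```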

Short takes: missing keys, closing apps and taking screen grabs

Another post with a few things I’ve collected in my browser tabs over the last few weeks…

Locating the hash (#) key on a Mac keyboard

I love the Apple wireless keyboard that I use with my Mac Mini but tweeting without a hash key can be challenging at times…

So much for the Mac’s simplicity when I have to Google to find the hash key (it’s at Alt+3, BTW)!

Closing Windows 8 apps with the Surface/Surface Pro touch/type covers

And, talking of missing keys… the Surface/Surface Pro touch/type covers have function keys that double up as media keys so, if you want to Alt-F4 to close an app, remember that’s Alt+Fn+F4.

Snipping from “Metro” apps in Windows 8.1

If you want to snip a portion of the screen in Windows 8.x and you’re running a full-screen (“Metro”) app, then you’re out of luck – the Snipping Tool only works in desktop mode. The workaround is to take a screenshot with PrtSc and then edit the resulting clipboard contents. Hopefully this gets better in Windows 10?

So where is the PrtSc key for the Surface/Surface Pro touch/type covers?

There isn’t a PrtSc key, but Fn+space will grab the whole screen (as PrtSc does on a normal PC keyboard) and Alt+Fn+space will grab the current window and copy it to the clipboard (as Alt+PrtSc does normally).


The OneDrive that’s really two drives…

Jamie Thomson and I have long lamented the challenges of Microsoft’s two directories for cloud services and it doesn’t stop there. Take a look at cloud storage:

  • OneDrive is Microsoft’s cloud-based storage offering, accessed with a Microsoft Account (formerly a Windows Live ID, or a Passport if you go back far enough…)
  • OneDrive for Business is Microsoft’s cloud-based storage offering, accessed with an Organizational Account (which lives in Microsoft Azure AD)

Similar names, similar purpose, totally different implementation – as the OneDrive for Business product is still Groove (which later became SharePoint Workspace) under the covers (have a look at the filename when you download the client).

And look what happens when you have both products with the same email address used to access them:

Still, at least the site detects that this has happened and gives you the choice. And there is some hope for future convergence as Jamie highlights in this blog post from earlier in the year.

Earlier this week, I was helping a customer to get ready for an Office 365 pilot and they were having challenges with the OneDrive client. The version available for download from the Office 365 portal is a click-to-run installation and it didn’t want to play nicely with their .MSI-based Office 2013 installation (which should already include the client anyway). Actually, that didn’t really matter because the OneDrive client is also included in Windows 8.1, which was the operating system being used.

The confusion came with setting up the connected services inside Office:

  • To set up a OneDrive account, click on OneDrive – but that will only accept Microsoft Account credentials and, after configuration it will show as something like “OneDrive – Personal”.
  • To set up OneDrive for Business, don’t click OneDrive but select SharePoint instead. After logging on with your Organizational Account credentials, that will be displayed as “OneDrive – organisation name” (with SharePoint sites appearing as “Sites – organisation name”).

Some illustration might help so, below is a shot of my connected services. Because I’m connected to multiple Office 365 tenants, you can see that I have multiple OneDrive [for Business] and Sites entries:

If you’re trying to get hold of the OneDrive for Business sync client for SharePoint 2013 and SharePoint Online, Microsoft knowledge base article 2903984 has the links for the click-to-run install. If you want an MSI version, then you’re out of luck – but you can create a customised Office 2013 installation instead, as OneDrive for Business (formerly SkyDrive Pro) was originally released as part of several Office 2013 suites (as described in Microsoft knowledge base article 2904296).

Finally, if you’re trying to work out how to get a OneDrive for Business app on Windows Phone, the OneDrive app can connect to both OneDrive and OneDrive for Business.