Not all software consumed remotely is a cloud service

Helping a customer to move away from physical datacentres and into the cloud has been an exciting project to work on but my scope was purely the Microsoft workstream: migrating to Office 365 and a virtual datacentre in Azure. There’s much more to be done to move towards the consumption of software as a service (SaaS) in a disaggregated model – and many more providers to consider.

What’s become evident to me in recent weeks is that lots of software is still consumed in a traditional manner, but as a hosted service. Take, for example, a financial services organisation that was ready to allow my customer access to their “private cloud” over a VPN from the virtual datacentre in Azure – until we hit a roadblock routing the traffic. The Azure virtual datacentre is an extension of the customer’s network – using private IP addresses – but the service provider wanted to work with public IPs, which led to some extra routers being deployed (and some NATting of addresses somewhere along the way). Then along came another provider – with human resources applications accessed over unsecured HTTP (!). Not surprisingly, access across the Internet was not allowed and again we were relying on site-to-site VPNs to create a tunnel, but the private IPs on our side were something the provider couldn’t cope with. More network wizardry was required.

I’m sure there’s a more elegant way to deal with this but my point is this: not all software consumed remotely is a cloud service. It may be licensed per user on a subscription model but if I can’t easily connect to the service from a client application (which will often be a browser) then it’s not really SaaS. And don’t get me started on the abuse of the term “private cloud”.

There’s a diagram I often use when talking to customers about different types of cloud deployments. It’s been around for years (and it’s not mine) but it’s based on the old NIST definitions.

Cloud computing delivery models

One customer highlighted to me recently that there are probably some extra columns between on-premises and IaaS for hosted and co-lo services but neither of these is “cloud”. They are old IT – and not really much more than a different sort of “on-premises”.

Critically, the NIST description of SaaS reads:

“The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.”

The sooner that hosted services are offered in a multi-tenant model that facilitates consumption on demand and broad network access the better. Until then, we’ll be stuck in a world of site-to-site VPNs and NATted IP addresses…

Improving application performance from Azure with some network routing changes

Over the last few months, I’ve been working with a UK Government customer to move them from a legacy managed services contract with a systems integrator to a disaggregated solution built around SaaS services and a virtual datacentre in Azure.  I’d like to write a blog post on that but will have to be careful about confidentiality and it’s probably better that I wait until (hopefully) a risual case study is created.

One of the challenges we came across in recent weeks was application performance to a third-party-hosted solution that is accessed via a site-to-site VPN from the virtual datacentre in Azure.

My understanding is that outside access to Microsoft services hits a local point of presence (using geographically-localised DNS entries) and then is routed across the Microsoft global network to the appropriate datacentre.

The third-party application is hosted in Bedford (UK) and the virtual datacentre is in West Europe (Netherlands), so the data flows should have stayed within Europe. Even so, a traceroute from the third-party provider’s routers to our VPN endpoint suggested several long (~140ms) hops once traffic hit the Microsoft network. These long hops were adding significant latency and reducing application performance.

I logged a call under the customer’s Azure support contract and after several days of looking into the issue, then identifying a resolution, Microsoft came back and said words to the effect of “it should be fixed now – can you try again?”.  Sure enough, ping times (not the most accurate performance test it should be said) were significantly reduced and a traceroute showed that the last few hops on the route were now down to a few milliseconds (and some changes in the route). And overnight reports that had been taking significantly longer than previously came down to a fraction of the time – a massive improvement in application performance.

I asked Microsoft what had been done and they told me that the upstream provider was an Asian telco (Singtel) and that Microsoft didn’t have direct peering with them in Europe – only in Los Angeles and San Francisco, as well as in Asia.

The Microsoft global network defaults to sending peer routes learned in one location to the rest of the network.  Since the preference of the Singtel routes on the West Coast of the USA was higher than the preference of the Singtel routes learned in Europe, the Microsoft network preferred to carry the traffic to the West Coast of the US.  Because most of Singtel’s customers are based in Asia, it generally makes sense to carry traffic in that direction.

The resolution was to reconfigure the network to stop sending the Singtel routes learned in North America to Europe and to use one of Singtel’s local transit providers in Europe to reach them.
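To make the route-selection behaviour concrete, here’s a minimal conceptual sketch (not Microsoft’s actual configuration – the peering locations and preference values are illustrative) of why a route learned with a higher preference in one region pulls traffic there, and why withdrawing it fixes the problem:

```python
# Conceptual sketch of BGP-style best-route selection: the path with the
# highest local preference wins, regardless of geographic proximity.
# Peering points and preference values below are made up for illustration.
candidate_routes = [
    {"learned_at": "Los Angeles", "next_hop": "Singtel peering (US West)", "local_pref": 200},
    {"learned_at": "Amsterdam", "next_hop": "Singtel transit (EU)", "local_pref": 100},
]

def best_route(routes):
    # Higher local preference wins, even if the peering point is far away
    return max(routes, key=lambda r: r["local_pref"])

print(best_route(candidate_routes)["learned_at"])  # the US West route wins

# The fix was effectively to stop the US-learned routes competing in
# Europe; with only the European transit route left, traffic stays local.
european_only = [r for r in candidate_routes if r["learned_at"] == "Amsterdam"]
print(best_route(european_only)["next_hop"])
```

Real BGP policy is, of course, far richer than a single attribute, but this captures the gist of why traffic was crossing the Atlantic.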

So, if you’re experiencing poor application performance when integrating with services in Azure, the route taken by the network traffic might just be something to consider. Getting changes made in the Microsoft network may not be so easy – but it’s worth a try if something genuinely is awry.

Short takes: calculating file transfer times; Internet breakout from cloud datacentres; and creating a VPN with a Synology NAS

Another collection of “not-quite-whole-blog-posts”…

File transfer time calculations

There are many bandwidth/file transfer time calculators out there on the ‘net but I found this one particularly easy to work with when trying to assess the likely time to sync some data recently…
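The underlying arithmetic is simple enough to sketch yourself – divide the data size (converted to megabits) by the usable bandwidth. This is a rough estimate only; the 80% efficiency figure is my assumption for protocol overhead and contention:

```python
def transfer_time_hours(data_gb, link_mbps, efficiency=0.8):
    """Rough file transfer estimate: data_gb of data over a link_mbps link.

    efficiency accounts for protocol overhead and contention (assumption:
    80% of nominal bandwidth is achievable in practice).
    """
    data_megabits = data_gb * 1000 * 8           # decimal GB -> megabits
    seconds = data_megabits / (link_mbps * efficiency)
    return seconds / 3600

# e.g. 500 GB over a 100 Mbps link
print(f"{transfer_time_hours(500, 100):.1f} hours")  # -> 13.9 hours
```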

Internet breakout from IaaS

Anyone thinking of using an Azure IaaS environment for Internet breakout (actually not such a bad idea if you have no on-site presence, though be ready to pay for egress data) just be aware that because the IP address is in Holland (or Ireland, or wherever) location-aware websites will present themselves accordingly.

One of my customers was recently caught out when Google defaulted to Dutch after they moved their client Internet traffic over to Azure in the West Europe region… just one to remember to flag up in design discussions.

Creating a VPN with a Synology NAS

I’ve been getting increasingly worried about the data I have on a plethora of USB hard disks of varying capacities and wanted to put it all in one place, then sync/archive as appropriate to the cloud. To overcome this, I bought a NAS (and there are really only two vendors to consider – QNAP or Synology). The nice thing is that my Synology DS916+ NAS can also operate many of the network services I currently run on my Raspberry Pi and a few I’ve never got around to setting up – like a VPN endpoint for access to my home network.

So, last night, I finally set up a VPN, following Scott Hanselman’s (@shanselman) article on Setting up a VPN and Remote Desktop back into your home. Scott’s article includes client advice for iPhone and Windows 8.1 (which also worked for me on Windows 10) and the whole process only took a few minutes.

The only point where I needed to differ from Scott’s article was the router configuration (the article is based on a Linksys router and I have a PlusNet Hub One, which I believe is a rebadged BT Home Hub). L2TP is not a pre-defined application to allow access, so I needed to create a new application (I called it L2TP) with UDP ports 500, 1701 and 4500 before I could allow access to my NAS on these ports.

Creating an L2TP application in the PlusNet Hub One router firewall

Port forwarding to L2TP in the PlusNet Hub One router firewall

End user computing – the device doesn’t matter

Following a recent Windows update that “went bad”, I needed to have my work PC rebuilt.  That left me with a period when I had work to do, but only a smartphone to work on or my personal devices. To me, this was also a perfect opportunity to put cloud services to work.

So, armed only with a web browser on another PC, I was perfectly able to access email and send/receive IMs (it’s all in Office 365), pester people on Yammer, catch up on some technical videos, etc. There was absolutely nothing (technically) preventing me from doing my job on another device. That’s how End User Computing should work – providing a flexible computing workstyle that’s accessible regardless of the device and the location.

The real issues are not around technology, but process: questions were asked about why I wasn’t following policy and using my company-supplied device; and I was able to answer with clear reasons and details of what I was doing to ensure no customer information was being processed on a non-corporate device. There are technical approaches to ensuring that only approved devices can be used too – but what’s really needed is a change of mindset…

Short takes: pairing my headphones, firewalls and Exchange SMTP communications, tethered photos with a Mac

Some more snippets that don’t quite make a blog post…

Because I always forget how to do this: how to pair a Plantronics BackBeat PRO headset with a mobile device.

And a little tip whilst troubleshooting connectivity to an Exchange Server server for hybrid connectivity with Office 365… if telnet ipaddress 25 gives a banner response from the SMTP server then that’s a good thing – if the firewall is interrupting transmission then I’ll get nothing back, or asterisks ********. Joe Palarchio (@JoePalarchio) writes about this (see issue 7) in his post on Common Exchange Online Hybrid Mail Flow Issues. Note that firewalls doing any form of blocking between Exchange servers are unsupported but that doesn’t stop customers from putting them between their email servers and anything running in the cloud (e.g. a hybrid server in Azure). If you need to do this, then you should have an ANY/ANY rule (i.e. allow free flow of traffic) between the Exchange Server servers.
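If telnet isn’t to hand, the same banner check can be scripted. A minimal sketch in Python (the hostname in the usage comment is illustrative – substitute your own server):

```python
import socket

def looks_like_smtp_greeting(banner):
    # A healthy server greets with '220 hostname ESMTP ...'; a firewall
    # intercepting SMTP often returns nothing, or masks the banner with
    # asterisks - either way this check fails.
    return banner.startswith("220")

def smtp_banner(host, port=25, timeout=5):
    """Try to read the SMTP greeting banner; returns None if blocked."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            banner = s.recv(512).decode("ascii", errors="replace").strip()
            return banner if looks_like_smtp_greeting(banner) else None
    except OSError:
        return None

# Usage (hostname is illustrative): smtp_banner("mail.example.com")
```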

Take photos with OS X Image Capture

Finally, back in 2009, I wrote about tethering a DSLR to a computer and taking pictures using Windows PowerShell (I think I’ve also written about using software to do this). Well, it turns out that the OS X Image Capture utility can also take a photo on a supported camera – either on a timed basis or by pressing a key. Could be useful to know if setting up a time-lapse, or for studio work…

Copy NTFS permissions from one folder/file to another

I’m working with a customer who is migrating from on-premises datacentres to the cloud – using a virtual datacentre in Microsoft Azure. One of the challenges we have is around the size of the volumes on a file server: Azure has a maximum disk size of 1023GB and the existing server has a LUN attached that exceeds this size.

We can use other technologies in Windows to expand volumes over multiple disks (breaking the <1TB limit) but the software we intend to use for the migration (Double-Take Move) needs the source and target to match. That means that the large volume needs to be reduced in size, which means moving some of the data to a new volume (at least temporarily).

One of my colleagues moved the data (using a method that retained permissions) but the top level folders that he created were new and only had inherited permissions from their parent. After watching him getting more and more frustrated, manually configuring access control lists and comparing them in the Windows Explorer GUI, I thought there had to be a better way.

A spot of googling turned up some useful information from forums and this is what I did to copy NTFS permissions from the source to the target (thanks to Kalatzis Stefanos for his answer on Server Fault).

First of all, export the permissions from the source folder with the icacls.exe command:

icacls D:\data /save perms.txt [/t /c]

/c is continue on error; /t is to work through subfolders too

Then, apply these permissions to the target volume. They can be applied at volume level because the export includes the file names and an associated ACL (i.e. it only applies to matching files):

icacls D:\ /restore perms.txt

But what if the source and destination folders/files have different names? That’s answered by Scott Chamberlain in another post, which tells me I can just edit my perms.txt file and change the file/folder name before each ACL.
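For more than a handful of folders, that edit can be scripted. A sketch of the idea (my assumptions: the save file alternates path lines with SDDL lines, which is how icacls /save appears to lay it out, and it is written as Unicode/UTF-16):

```python
# Sketch: rewrite the folder names in an icacls save file so that
# permissions exported from one tree can be restored onto another.
def rename_in_perms(lines, old_prefix, new_prefix):
    out = []
    for i, line in enumerate(lines):
        # Even-numbered lines are the relative paths; odd lines are SDDL
        if i % 2 == 0 and line.startswith(old_prefix):
            line = new_prefix + line[len(old_prefix):]
        out.append(line)
    return out

sample = ["data", "D:AI(A;OICI;FA;;;BA)", "data\\archive", "D:AI(A;OICI;FA;;;BA)"]
print(rename_in_perms(sample, "data", "newdata"))
```

When working on a real perms.txt, open it with encoding="utf-16" for both read and write, or icacls will refuse to restore it.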

By following this, I was able to export and re-apply permissions on several folders in a few minutes. Definitely a time saver!

Reflecting on riding the #RideStaffs 68-mile sportive

Back in 2013, when I bought my first road bike since the “racer” of my teens, the first sportive I took part in was the Tour [of Britain] Ride in Staffordshire – setting out from Stoke-on-Trent. Now I work for a Stafford-based IT services company and when I heard we were sponsoring the Staffordshire Cycling Festival (@RideStaffs) it gave me the chance to pay a return visit, although a little further south this time!

(Ironically, the Tour Ride has moved to my home county of Northamptonshire this year… but I can’t make it.)

So, last Sunday, blessed with some summer sunshine (at last!), I rocked up at Shugborough Hall wearing my risual orange jersey, the only one of the team joining the 68-mile sportive (though quite a few of the guys took part in the 22-miler).

With rolling hills from the off, at Milford we took a sharp left and then Bang! we hit the climb up onto Cannock Chase. The first 30 minutes were slow, grinding my way up onto the Chase until we turned left on Brindley Heath and headed down towards Rugeley. I’d just got going at full speed (hitting just over 60kph) when I realised I needed to take a left turn half way down a hill and grabbed the brakes hard – no discs on my road bike! I managed to scrub off speed and make the turn, then hooked onto the back of a small peloton with 2 other riders down towards Rugeley. After taking turns for a while, we hit the A51 and missed the route sign – but it seemed wrong to be heading west so quickly and, as we were heading back towards Shugborough, I turned around and retraced my steps, picking up the correct route again a mile back down the road and passing my hotel from the previous night!

The next section took in mostly flat roads near Lichfield and Alrewas, nipping over the border into Derbyshire before turning over the River Trent and up to the first stop at Barton-under-Needwood. After taking on water and flapjack I started chatting with the owners of two beautifully restored 1970s Colnagos with glorious etching and chromework, one of whom even had a traditional wool jersey, cap (no helmets in the ’70s I guess) and leather saddle bag!

Despite my slow start, I’d averaged over 27kph but realised why as we set off again towards Uttoxeter – turning into the wind that had previously been helping me along (though Hanbury Bank offered a welcome break). To make matters worse, my bike seemed to be grinding from the bottom bracket… time to see Kev at Olney Bikes again for repairs…

After another stop in Uttoxeter (where one rider was conducting the town band – he later told me they split over “musical differences”!) we set off again over some undulating terrain towards the last major climb at Sandon (and what a killer that was).

I skipped the final stop (it was only for water and I was carrying plenty of fluids) and pushed on with a large group riding into Stafford – past the Technology Park where our offices are – but was dropped again as we turned left up past the University. From there it was a steady ride on into Shugborough… ending a slightly-extended 68-mile ride!

As I crossed the line, I was handed my goody bag musette style, including a variety of items but most importantly a beer token!  My official time was a respectable 5 hours 8 minutes, but Strava told me I’d only been moving for 4 hours and 39 minutes, climbing 1235 metres in the process.

Even though I’d missed the rest of the risual riders (the 22 mile sportive set off later and obviously got back sooner!) I stuck around for a while to watch some of the Tour de France coverage and got some lunch from the wood fired pizza stand (a long wait but nice pizza), before heading home… wishing I hadn’t picked a sportive quite so far away!

All in all, it was a fantastic day – and I was very lucky with the weather. Paul at Leadout Cycling organised a great event and I hope to make it back another year. It was also a timely reminder that, even without heading up onto the North Staffordshire Moorlands, there are still plenty of hills around Staffordshire and that my normal routes around South Northants, North Bucks and Beds are relatively flat by comparison…

…as well as that it’s just 4 more weeks until my next sportive – 100 miles from London to Surrey and back again (hopefully not cut short this year)!

Thoughts on the use of Sway as a presentation tool

A couple of weeks ago, I gave a short talk on adopting cloud services at Milton Keynes Geek Night (MKGN). I’ll admit I was a little nervous – the talk was supposed to be 5 minutes (and I had more to say than would ever have fitted – I later learned it’s pretty rare for anyone to stick to their allotted time) and I’m not used to speaking to an audience larger than a meeting room-full (a typical MKGN audience in the current venue is around 100). Just to make things a little harder for myself, I decided to use Microsoft Sway for my visual aids.

For those who are unfamiliar with Sway, I got excited about it when it first previewed in 2014. Since then it’s shipped and is available as part of Office 365 or as a standalone product. It’s a tool for presenting content from a variety of sources in a visually-appealing style that works cross-platform and cross-form factor.

Even though Sway has an app for Windows 10, some of the content (e.g. embedded tweets) relies on having an Internet connection at the time of presenting.  Wi-Fi at conferences is notoriously bad and 3G/4G at the MKGN venue is not much better (although it did hold up for me on the night). So, with that and the 7Ps in mind I had PowerPoint and PDF fallback plans but I persisted with Sway.

I’m still not sure Sway is a presentation tool though…

You see, as I swiped and clicked my way through, the audience saw everything I saw. I prefer the simplicity of a picture, with my notes on my screen – I talk, the audience listens, the image reinforces the view. Sway didn’t work for me like that. Indeed, Sway falls into what Matt Ballantine recently described as the latest whizz-bang tool in a post about a request he was given to knock up a few slides of PowerPoint:

“PowerPoint [… is …] rarely used to perform the task it was designed to do […] The latest whizz-bang tool is the answer! Prezi, Sway or whatever it is that the cool kids are using. Actually, though, the answer probably lies as much in new skills that people need to develop to communicate in a Digital era. Questions like:

  • Who is your audience?
  • What is the message that you are trying to deliver?
  • Where will they be?
  • How will they consume your content?
  • How can you extend the conversation?”

We use Sway at work for weekly updates on what’s been happening in the company – internal communications that used to make use of lengthy HTML emails (I almost never used to read to the end) became more immersive and easier to engage with. And that’s where I think Sway fits – as a tool for communications that are read asynchronously. Not as a tool for presenting a message to an audience in real time.

You can judge for yourself how well Sway works as a presentation tool by taking a look at the Sway I used for my MKGN talk.

Adventures on a Brompton bike: my first London commute

Those who know me well know that I have a collection of bikes in my garage. Fans of the Velominati will be familiar with rule #12, which states:

“Rule #12: The correct number of bikes to own is n+1.

While the minimum number of bikes one should own is three, the correct number is n+1, where n is the number of bikes currently owned. This equation may also be re-written as s-1, where s is the number of bikes owned that would result in separation from your partner.”

So, it was with great delight that I recently persuaded my wife it would be a great idea for me to buy a new bike. Maybe not the super-light road bike that I might like (I need a super-light Mark before that makes sense anyway) but a commuter. A folding bike to take on the train. A Brompton.

My employer doesn’t take part in a Cycle to Work scheme and Bromptons are pretty pricey (so saving the tax would make a big difference) but I did my research and snapped up a second-hand example with “only 100 miles on the clock” on eBay (checking first to see if it was reported as stolen, of course!). So, on Monday, I was very excited to return home from work to find that my “new” bike (bike number four) had arrived.

For those familiar with Brompton specs, it’s an M3L. I’d like an S6L or S6R but this will do very nicely instead. (If you don’t know what that means, there’s a useful configurator on the Brompton website.)

Yesterday was my first trip to London with the Brommie, so how did it go?

Look out!

Well, my hi-vis purchases from Wiggle haven’t arrived yet and it’s a good idea to be brightly coloured. Nipping up the inside of large vehicles is a very bad idea that’s likely to get you killed but, if you’re confident in traffic, the Brompton is responsive and handles remarkably well.

The biggest problem I had was whilst riding off the end of a bus lane, when a motorist decided that was his (perfectly legal) cue to change lanes in front of me but clearly hadn’t seen me coming. My bell is pretty pathetic for warning car drivers (even with open windows) but my shout of “look out!” worked better. As did my brakes, hastily applied as I brought the Brompton to a skid stop a few inches from the door of the car (don’t tell Mrs W…). No harm done so off we rode/drove. I might invest in an air horn though…

London roads

In common with the rest of the UK, London’s roads are poorly surfaced in places and pretty congested at times. But there are plenty of cycle lanes in central London – including the ability to ride through roads that are closed to motorised traffic (sometimes contra-flow). My normal walking route from Euston to Whitehall through Bloomsbury and Seven Dials worked really well but the reverse was less straightforward. I’ve also ordered some free cycle route maps from Transport for London, so I’ll see if they inspire some nifty short-cuts.

I know some people are critical of the system with painted bike lanes being far less satisfactory than dedicated infrastructure but this is Britain and there’s not a lot of space to share between different types of road user! Even so, with bikes becoming more and more common, I’m sure that motorists are more used to cyclists sharing the road (I have some experience on “Boris Bikes” in London too, prior to buying my Brompton bike).

Folding, carrying, etc.

Watch any experienced Brompton bike user and they fold/unfold their bike in seconds. I currently take a bit more time… though by the end of the day I was starting to get the hang of it! There’s advice on the website (as well as in the manual).  I have to admit it’s a bit heavy to lug around (up stairs, etc.) and I felt like I was Ian Fletcher in an episode of W1A as I walked into the lift but that’s OK. And joking about my cycling attire (I was only wearing a helmet but that didn’t stop the lycra jokes) amused my colleagues and customer!

Sweaty

Clothing could be an issue. I was wearing a suit, with a rucksack on my back to hold my laptop etc. and my coat. That turned out to be a bad idea. I was dripping wet when I got to work… so I’ll need a different luggage solution and maybe a change of clothes (or I may need to see if I can get away without the suit, or at least the jacket…)

Suspension

Next up, the suspension. My Brompton arrived with the standard suspension block but Brompton recommend the firm version for those over 80kg or “who cycle more aggressively and are prepared to sacrifice some comfort”! So, at lunchtime I headed over to Brompton Junction to get a replacement suspension block of the firm variety (the store bike mechanic told me that even lighter people need it as the standard is just too soft). I also picked up a pump as it was missing from my bike (some retailers fit one as standard but maybe not all do) and took a look at some luggage. Expensive but nice. After mulling it over all day, I’ve ordered a waxed cotton shoulder bag which should be in my local branch of Evans Cycles (together with the front carrier block) for collection tomorrow…

So was it worth it?

I live 12 miles from the local railway station, which would be a bit far on a Brompton (it takes 45 mins on a road bike) so I’ll still be driving that part of my journey. Once off the train though, using the bike instead of walking cut my London travel down from about 45 minutes each way to around 15. So saving 30 minutes, twice a day (on the days when I’m in town) gives me back an hour in my day (if I avoid the temptation to use it for work…) – together with more exercise. And I can use the bike and take the train to the office in Stafford now instead of a 200-mile round trip (catching up with some work, reading, or even some sleep on the train). Sounds like a result to me.

Have I been pwned?

You’re probably aware that LinkedIn suffered a major security breach, in which something like 164,611,595 sets of user credentials were stolen. Surprisingly, you won’t find anything about this in LinkedIn’s press releases.

In less enlightened times (and before I started using LastPass), I may have re-used passwords. That’s why breaches like the one at LinkedIn are potentially bad. Re-using that identity means someone can potentially log in as me somewhere else – I could be pwned.

Microsoft Regional Director and MVP, Troy Hunt (@troyhunt) has set up an extremely useful site called HaveIBeenPwned. Entering your email address (yes, that means trusting the site) checks it against a number of known lists and yes, it seems mine was compromised in three hacks (at LinkedIn, Adobe and Gawker). In all of those cases, I’ve since changed my passwords and for popular sites – where they offer the option – I’ve started to use second factor authentication solutions (Azure MFA has been on my Office 365 subscription for a long time, I use Google two-step verification too and, since tonight, I’ve added LinkedIn’s two-step verification and Facebook Login Approvals).
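As an aside, if you’re wary of submitting credentials to a third-party site, it’s worth understanding the k-anonymity approach used by the site’s password-checking range API: only the first five characters of the password’s SHA-1 hash are ever sent, and the match is done locally. A sketch (the API URL is the published one; no network call is made here):

```python
import hashlib

def pwned_range_query(password):
    """k-anonymity lookup prep: only the first 5 hex chars of the SHA-1
    hash leave your machine; the service returns every suffix in that
    range and the comparison happens locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    return url, suffix

def is_breached(response_body, suffix):
    # Response lines look like 'SUFFIX:COUNT'; match ours locally
    return any(line.split(":")[0] == suffix for line in response_body.splitlines())

url, suffix = pwned_range_query("password")
print(url)  # the query reveals only a 5-character hash prefix
```

Fetch the URL with any HTTP client and pass the body to is_breached to complete the check.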

So, I guess the two points of this post are:

  1. For heaven’s sake, stop re-using passwords on multiple sites – you can’t rely on the security of others.
  2. Turn on 2FA where it’s available.

Hopefully one day soon, passwords will be consigned to the dustbin of technology past…