Using rsync to keep folders in sync on a Synology Diskstation NAS

Now that I have backups working between my Synology Diskstation NAS and a storage account in Microsoft Azure (with over half a TB of photos backed up in the cloud so far), the next stage is to consolidate some more images into the folder that the backup works from.

I don’t want to remove them from their source (which in this case is the copy of my OneDrive data on my home drive) but I do want to archive all of the iPhone images I have there to the master photos folder so they are included in the backup.

Reading around the Synology forums suggests that this is not as straightforward as one might think. It appears there’s no easy way to synchronise two folders on the same NAS within the DSM software; but then I stumbled across Zarino Zappia (@zarino)’s post about a Synology-flavoured rsync backup script.

By following Zarino’s advice and using ssh to connect to the box as admin, I was able to achieve what I wanted with the following command:

rsync --dry-run --itemize-changes --archive --progress --verbose --inplace --exclude '*@SynoResource' --exclude '@eaDir' --exclude '*.vsmeta' --exclude '.DS_Store' --exclude 'Thumbs.db' /volume1/homes/mark/OneDrive/iPhone\ Photos/ /volume1/photos/Digital\ Photos\ \(Master\)/Mark\'s\ iPhone/

(BTW, right click is the way to paste text to the command line in PuTTY!)

Some people on the Synology forums had suggested that synchronising via another computer on the network would be faster! That sounds strange to me – logically, a copy will always be faster on a single device with no network in between. For reference, it took about 20 minutes to rsync 32GB of images/videos on my DS916+.

Incidentally, the Error 23 in the screenshots was actually down to a typo in my command (a missing space before one of the --exclude options). I re-ran with --dry-run to see which files were not transferred…

The next step will be to script this and get it running as a scheduled task but that can wait for another day…

Microsoft Azure URLs

I’ve been doing a lot of Azure reading recently and it struck me that there are many different URLs in use that would be useful to record somewhere.

John Savill (@NTFAQGuy) has noted the main ones in his Windows IT Pro post on Azure URLs to whitelist but I’ll expand on them here to highlight the purpose of each one (and to add some extras):

* manage.windowsazure.com is still used for the legacy (classic) Azure Service Manager (ASM) portal.

* portal.azure.com is used for the Azure Resource Manager (ARM) portal.


This URL pattern (accountname.service.core.windows.net) is used for access to Azure Storage:

  • File access: accountname.file.core.windows.net
  • Containers in blob storage: accountname.blob.core.windows.net
  • Table storage: accountname.table.core.windows.net
  • Queue storage: accountname.queue.core.windows.net
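Since the storage endpoints all follow the same accountname.service.core.windows.net pattern, the URLs for a given account can be derived mechanically – a trivial sketch (mystorageaccount is a made-up account name):

```shell
# Build the endpoint URL for each Azure Storage service from an account
# name ("mystorageaccount" is a hypothetical example).
account="mystorageaccount"
urls=""
for service in blob file table queue; do
  urls="${urls}https://${account}.${service}.core.windows.net/
"
done
printf '%s' "$urls"
```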


*.cloudapp.net is the domain name used for cloud services.


*.azurewebsites.net is the domain name used for Azure App Service websites. Each site also has a Kudu console at sitename.scm.azurewebsites.net.


*.database.windows.net is the domain name used for Azure SQL Database.


*.trafficmanager.net is the domain name used for Azure Traffic Manager.


Content Delivery Network (CDN) endpoints are available via *.vo.msecnd.net.


*.streaming.mediaservices.windows.net is the domain name used for streaming media services.


*.onmicrosoft.com is the domain name used for the Microsoft Online Services tenant – shared between multiple online services, including Azure but also Office 365, etc.

[Edited 12/9/16 – updated to include streaming media services and tenant URL]

Removing an auto-signature on the MSDN and TechNet forums

Back in 2008, I was awarded Microsoft Most Valuable Professional (MVP) status for Virtual Machine technology. Unfortunately I wasn’t doing enough Hyper-V work (and my employer at the time didn’t understand the value of employing an MVP) so, in 2010, the Product Group had to let me go. Disappointing as that was, I understood and I moved on to do other things.

Every time I post on the MSDN or TechNet forums though, there’s a signature appended to my posts that says “Mark Wilson (MVP Virtual Machine) –”. I’ve been editing manually to remove the MVP text (I don’t want to make false claims about my status) but I couldn’t see how to remove the option completely – it didn’t seem to appear anywhere in my forum profile.

Only after posting on the forums to ask how to prevent this behaviour did I find the answer in a “related thread” that was highlighted:

“Yes, you can change that setting by clicking on Quick Access, and then My Settings. You’ll then see a section where you can add your signature.

For a more detailed guide including screenshots and instructions on how to insert HTML content see this post.”

Thanks to Keith Langmaid for that gem!

My forum preferences have now been duly edited to remove the offending text!

My first few weeks with a Synology Diskstation NAS

Earlier this summer, I bought myself a new NAS. I’d lost faith in my old Netgear ReadyNAS devices a while ago, after a failure took out both halves of a RAID 1 mirror and I lost all the data on one of them. That actually taught me two important lessons:

  1. Data doesn’t exist unless it’s backed up in at least two places.
  2. RAID 1 is not suitable for fault-tolerant backups.

As I wrote a few weeks ago, my new model is to get all of the data into one place, then sync/archive as appropriate to the cloud. Anything on any other PCs, external disks, etc. should be considered transient.

For the device itself, it seems that there are only really two vendors to consider – QNAP or Synology (maybe a Drobo). I chose Synology – and elected to go with a 4-bay model, picking up a Synology Diskstation DS916+ (8GB) and kitting it out with 4 Hitachi (HGST) Deskstar NAS drives.

Unfortunately, I had a little hiccup in that I’d ordered the device pre-configured. The weight of the disks was clearly too much for the plastic drive carriers to cope with but, once again, the retailer sorted things out for me and I soon had a replacement in my possession.

Over the last few weeks, I’ve been building up what I’m doing with the Diskstation: providing home drives for the family; syncing all of my cloud storage; acting as a VPN endpoint; providing DHCP and DNS services; running anti-virus checks; and backing up key files to Microsoft Azure.

This last workload is worthy of discussion, as it took me a couple of weeks to push my data to the cloud. Setup was fairly straightforward, following Paris Polyzos (@ppolyzos)’s advice to backup Synology NAS data in Microsoft Azure Cool Storage but the volume of data and the network it had to traverse was more problematic.

Initially I had issues with timeouts due to a TP-Link HomeplugAV (powerline Ethernet) device between the ISP router and the DNS server that kept failing. I worked around that by moving DNS onto the NAS, and physically locating the NAS next to the router (bypassing the problematic section of network). Then it was just a case of waiting for my abysmal home Internet connection to cope with multi-GB upstream transfers…

I have no doubts that this NAS, albeit over-specified for a family (because I wanted an Intel-based model), is a great device but I did need to work around some issues with vibration noise. It’s also slightly frustrating that there is no integration between the DHCP and DNS services (I’ve been spoiled working with Windows Server…), the Security Advisor reports are a bit dramatic, and some of the Linux commands are missing – but I really haven’t found anything yet that’s a show-stopper.

Now I need to get back to consolidating data onto the device, and moving more of it into the cloud…

Preventing vibration noise on a Synology NAS

My Synology Diskstation NAS (DS916+) has been a great purchase but I have had some issues with noise from vibration. Over the course of a few weeks, complaints from family members meant that I had to move the NAS from my desk, onto the floor, then into the garage (before I brought it into the kitchen to be next to the Internet connection – but that’s another story). You should be able to hear the noise in the video below (though it seems much louder in real life!):

As can be heard, the vibration noise reduces when I put pressure on the chassis. It seems that it’s actually caused by the screw-less drive carriers that Synology use on their NASs.

Thanks to advice from Chipware on Reddit, I was able to add some sticky-backed Velcro (just the fluffy side) between the disk carrier and the disk, and on the outside of the disk carriers. They now better fit the NAS and, crucially, the Velcro serves as a shock absorber, preventing any more vibrations…

And, at just £2 for a metre of sticky-backed Velcro (which I only used a few centimetres of), it was a pretty inexpensive fix.

Chipware says in his post that:

“I definitely think the 4 Velcro pieces connecting the sled to the cage solved the problem. The pieces between drive and sled connection provides negligible dampening.”

I initially only put 4 pieces on the outside of the carrier (2 of them can be seen in the picture) but my experience was that adding 2 more pieces on the disk itself (underneath the carrier) also helped. Of course, your mileage may vary (and any changes you make are at your own risk – I’m not responsible for any problems it may cause).

After making these modifications there’s no more noise, just a relatively quiet fan noise (as to be expected) and the NAS is back on my desk!

A “Snooper’s Charter” for the postal system?

I spotted this on my Facebook feed today, from an old University friend, who now works as a Senior Cyber Security Consultant:

“I will shortly be writing to my MP urging him to push the Cabinet to extend it’s Investigatory Powers Bill to mandate that all mail carriers must open all letters they collect, scan their contents, and store those images in an archive for a given period in case law enforcement agencies needed to review their contents. Furthermore, I think it would be reasonable outlaw glue on envelopes altogether…with a recommendation to allow postcards only.

I urge the rest of the UK to do the same as a matter of priority due to concerns around National Security.”

He always had a wicked sense of humour but for those who think this is just banter, it really is the postal mail equivalent of what the UK Government is proposing for email in the Investigatory Powers Bill (nicknamed “The Snooper’s Charter”). The staggering thing is that the UK public is largely unaware – generally engagement with politics here is low and I’d wager that the combination of politics and technology has a particularly high “snooze factor”.

[Perhaps Parliament needs to be transformed to involve some kind of “bake-off” type element with MPs getting voted out each week based on their performance. The Westminster Factor. Britain’s Got Legal Talent. Would that get the public involved?]

Putting aside low social engagement in politics (or anything that’s not a big competition on TV) this quote highlights how out of touch our legislators are with the realities of digital life – and how ridiculous the new law would be if applied to analogue communications…

Android for under-13s: no Google accounts; no family sharing

We’re entering a new phase in the Wilson family as my eldest son starts secondary school next week and my youngest becomes more and more tech-aware.

The nearly-10 year-old just wants a reasonably-priced, reasonably-specced tablet as my original iPad is no longer suiting his needs (stuck back on iOS 5 and with a pretty low spec by today’s standards) – I’m sure we’ll work something out.

A bigger challenge is a phone for the nearly-12 year-old. We’ve said he can have his own smartphone when his birthday comes and effectively there are 3 (well, 2) platforms to consider:

  • Windows Mobile: limited app availability; inexpensive handsets; uncertain future.
  • Apple iOS: expensive hardware; good app support.
  • Google Android: wide availability of apps and hardware; fragmented OS.

Really, Windows isn’t an option (for consumers – different story in the enterprise); Apple is only viable if he has a hand-me-down device (which is a possibility); but he’s been doing his research, and is looking at price/specs for various Android devices.  The current favourite is an Elephone P9000 – which looks like a decent phone for a reasonable price – as long as I can find a reliable UK supplier (i.e. not grey market).

In the meantime, and to see how he gets on before we commit to a device purchase, I’ve given him an old Samsung Galaxy S3 Mini that I had in a drawer and I put a giffgaff SIM in. Because it’s a Google device, he gets the best experience if he uses a Google account… and that’s where the trouble started.

We went to sign up, added some details, and promptly found that you have to be 13 to open a Google account. And unlike Apple iCloud Family Sharing, which I have set up for the old iPhones that the boys use around the house, the Google equivalent (Google Play Family Library) also needs all of the family members to be at least 13. There simply appears to be no option for younger children to use Google services.

Maybe that’s because Google’s primary business is about selling advertising and advertising to children is questionable from a moral standpoint (though YouTube have come up with a child-friendly product).

I tried signing in as me – which let me download some apps but also meant he had access to my information – like all of my contacts (easily switched off but still undesirable).

Luckily, it seems I created him a GMail account when he was 5 weeks old (prescient, some might say) and I was able to find my way into that and get him going. Sadly, it seems I was not as mentally sharp when his little brother was born…

(As an aside, I originally gave my son a Nokia “feature phone” to use and he looked bemused – he later confessed that was because he didn’t know how to use it!)

Postscript: I’ve since given my youngest son my Tesco Hudl and was able to sign up for a Google account without being asked to provide date of birth details…

Cyclist abuse

Today, the phrase “Jeremy Vine” is trending on Twitter after the BBC presenter published a video of the abuse he allegedly suffered at the hands of a motorist who didn’t like the way he cycled through West London:

To be fair, Mr Vine does appear to have stopped his bike and blocked the road when he could simply have pulled over as the road widened but the tirade of verbal (and it seems physical) abuse poured on him was totally unreasonable. Sadly, this kind of behaviour is not unusual, though most of us are not prominent journalists with a good network of media contacts to help highlight the issue:

(In addition to driving an average of around 25,000 miles a year for the last 27 years) I regularly cycle – road, mountain and commuting – and, whilst it should be noted that I see a fair amount of cyclist-induced stupidity too, Jeremy Vine’s incident is not an isolated one. Just this weekend:

  • I was cycling downhill in the town where we live, following my son at around 28mph (in a 30mph limit), when an impatient Audi driver decided to squeeze into the gap between father and son, and then tailgate my 11-year-old as he rode along. My son pulled over when it was safe to do so but he was scared – and there was no justification for the driver’s actions.
  • Then, whilst out with a small group yesterday morning, the driver of a Nissan Qashqai tore past sounding a long blast on his horn (presumably in protest that two of the three of us were riding side by side – which is perfectly acceptable, especially as this was not a narrow road). That kind of behaviour is pretty normal, as pretty much any road cyclist will attest…
  • Finally, whilst I was turning left, a motorist overtook me on the junction itself, leaving around 18 inches for me to ride in between his car and the kerb, rather than following the Highway Code rule to “give motorcyclists, cyclists and horse riders at least as much room as you would when overtaking a car”. I called out and was actually forced to use his car to steady myself. As he drove off, the usual hand signals were observed, along with some unintelligible expletives (from the driver, not me – I was in shock).

All of this in around 24 hours – and against a landscape where there are far more cyclists on UK roads (so motorists are more aware of us)…

Maybe it was all just a bit of Bank Holiday summer madness…

Not all software consumed remotely is a cloud service

Helping a customer to move away from physical datacentres and into the cloud has been an exciting project to work on but my scope was purely the Microsoft workstream: migrating to Office 365 and a virtual datacentre in Azure. There’s much more to be done to move towards the consumption of software as a service (SaaS) in a disaggregated model – and many more providers to consider.

What’s become evident to me in recent weeks is that lots of software is still consumed in a traditional manner, just as a hosted service. Take, for example, a financial services organisation that was ready to allow my customer access to their “private cloud” over a VPN from the virtual datacentre in Azure – but then we hit a road block for routing the traffic. The Azure virtual datacentre is an extension of the customer’s network – using private IP addresses – but the service provider wanted to work with public IPs, which led to some extra routers being deployed (and some NATting of addresses somewhere along the way). Then along came another provider – with human resources applications accessed over unsecured HTTP (!). Not surprisingly, access across the Internet was not allowed and again we were relying on site-to-site VPNs to create a tunnel, but the private IPs on our side were something the provider couldn’t cope with. More network wizardry was required.

I’m sure there’s a more elegant way to deal with this but my point is this: not all software consumed remotely is a cloud service. It may be licenced per user on a subscription model but if I can’t easily connect to the service from a client application (which will often be a browser) then it’s not really SaaS. And don’t get me started on the abuse of the term “private cloud”.

There’s a diagram I often use when talking to customers about different types of cloud deployments. It’s been around for years (and it’s not mine) but it’s based on the old NIST definitions.

Cloud computing delivery models

One customer highlighted to me recently that there are probably some extra columns between on-premises and IaaS for hosted and co-lo services but neither of these are “cloud”. They are old IT – and not really much more than a different sort of “on-premises”.

Critically, the NIST description of SaaS reads:

“The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.”

The sooner that hosted services are offered in a multi-tenant model that facilitates consumption on demand and broad network access the better. Until then, we’ll be stuck in a world of site-to-site VPNs and NATted IP addresses…

Improving application performance from Azure with some network routing changes

Over the last few months, I’ve been working with a UK Government customer to move them from a legacy managed services contract with a systems integrator to a disaggregated solution built around SaaS services and a virtual datacentre in Azure.  I’d like to write a blog post on that but will have to be careful about confidentiality and it’s probably better that I wait until (hopefully) a risual case study is created.

One of the challenges we came across in recent weeks was application performance to a third-party-hosted solution that is accessed via a site-to-site VPN from the virtual datacentre in Azure.

My understanding is that outside access to Microsoft services hits a local point of presence (using geographically-localised DNS entries) and then is routed across the Microsoft global network to the appropriate datacentre.

The third-party application is hosted in Bedford (UK) and the virtual datacentre is in West Europe (Netherlands), so the data flows should have stayed within Europe. Even so, a traceroute from the third-party provider’s routers to our VPN endpoint suggested several long (~140ms) hops once traffic hit the Microsoft network. These long hops were adding significant latency and reducing application performance.
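A quick way to spot hops like that is to filter traceroute output for round-trip times above a threshold. This is just a rough sketch – the three-column input here is simplified, illustrative data rather than a real trace:

```shell
# Flag hops whose round-trip time (column 3, in ms) exceeds a threshold.
# Input columns: hop number, router address, RTT in milliseconds.
slow=$(awk -v t=100 '$3 + 0 > t { print "hop " $1 " (" $2 "): " $3 " ms" }' <<'EOF'
1 10.0.0.1 2.1
2 51.124.0.1 8.4
3 104.44.7.96 141.7
4 104.44.11.2 139.9
EOF
)
echo "$slow"    # prints only hops 3 and 4 (the two >100ms hops)
```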

I logged a call under the customer’s Azure support contract and after several days of looking into the issue, then identifying a resolution, Microsoft came back and said words to the effect of “it should be fixed now – can you try again?”.  Sure enough, ping times (not the most accurate performance test it should be said) were significantly reduced and a traceroute showed that the last few hops on the route were now down to a few milliseconds (and some changes in the route). And overnight reports that had been taking significantly longer than previously came down to a fraction of the time – a massive improvement in application performance.

I asked Microsoft what had been done and they told me that the upstream provider was an Asian telco (Singtel) and that Microsoft didn’t have direct peering with them in Europe – only in Los Angeles and San Francisco, as well as in Asia.

The Microsoft global network defaults to sending peer routes learned in one location to the rest of the network.  Since the preference of the Singtel routes on the West Coast of the USA was higher than the preference of the Singtel routes learned in Europe, the Microsoft network preferred to carry the traffic to the West Coast of the US.  Because most of Singtel’s customers are based in Asia, it generally makes sense to carry traffic in that direction.

The resolution was to reconfigure the network to stop sending the Singtel routes learned in North America to Europe and to use one of Singtel’s local transit providers in Europe to reach them.

So, if you’re experiencing poor application performance when integrating with services in Azure, the route taken by the network traffic might just be something to consider. Getting changes made in the Microsoft network may not be so easy – but it’s worth a try if something genuinely is awry.