Using Mail Users/Contacts to redirect email in Exchange Online

This content is 7 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Being a geek, I have a few domain names registered, including one that I use for my family’s email. I don’t pay for Exchange Online mailboxes for us all though. Instead, I have a full Office 365 subscription, my wife has Exchange Online and the children (and some other family members) use a variety of free accounts from Apple, Google, Microsoft, etc.

Earlier this evening, my son asked me to switch his family email from iCloud to GMail (he has an Android phone, and was getting annoyed with winmail.dat files in iCloud), so I had to unpick my email redirection method… which seemed like a good time to blog about it…

Obviously, my wife and I have full mailboxes in Exchange Online but other family members are set up as contacts. In the Office 365 Admin Center they show up as unlicensed users but if I drill down into the Exchange Admin Center I have some more control.

Each family member is set up as a contact/Mail User. Each contact has been set up with at least two email addresses:

  • user@myfamilydomain.com (that’s not the real name but it will do for this example);
  • user@externalmailprovider.com (e.g. user@icloud.com, user@gmail.com, user@hotmail.com).

By setting the primary email address (the one prefixed with upper-case SMTP: rather than lower-case smtp:) to user@gmail.com (or wherever), mail is still received at user@myfamilydomain.com but redirected to their “real” email address.

Exchange Online shows them as type Mail User and lists their external email address as the primary.
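
For anyone who prefers a shell to the Exchange Admin Center, the same setup can be scripted with Exchange Online PowerShell. This is only a sketch from memory – the identity and addresses below are placeholders – so check Get-Help Set-MailUser before relying on it:

Set-MailUser -Identity "Family Member" -EmailAddresses "SMTP:user@gmail.com","smtp:user@myfamilydomain.com"

Get-MailUser -Identity "Family Member" | Format-List Name,EmailAddresses,ExternalEmailAddress

The upper-case SMTP: entry becomes the primary (external) address and the lower-case smtp: entry stays as the alias on my domain that actually receives the mail.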

Bids, tenders, requests for information and word counts…

This content is 7 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I won’t go into the details (internal company stuff that shouldn’t be on a blog) but at the moment I’m working on a lot of bids, tenders, requests for quotations, requests for information, etc., etc.

I’ve done this sort of work before and it’s not a great fit for me but sometimes it has to be done. It’s my turn. But I hadn’t realised until recently why it is that I struggle so much…

Over the years, I’ve learned to deal with ambiguity; I’ve learned how to respond without having all the facts. I can write convincing copy (at least I think I can) and I can usually spell (despite a colleague suggesting yesterday that I should check a dictionary because “siloes” didn’t look right to him and maybe it should be “silo’s” – arghhh!).

It was my wife who pointed out to me that the very same attributes and skills that help me as an architect (general pedantry; taking the time to consider the various consequences of choices made; a desire to put in place controls to get things right and to do them well) hinder me in a high-pressure sales environment where I don’t have time to think and where everything is urgent/important and needs to be done NOW (or very soon after now)…

…and relax. Because it’s Friday night. And, in a short while, I will have a beer, or a glass of wine, in my hand.

Anyway, what is the point of this drivel? The ranting ramblings of an Architect? No. Ah, yes, word counts.

Counting words in a document – or in a cell in a spreadsheet…

Lots of bid responses are limited in the number of words that can be accepted. Often, the tool I’m using is Microsoft Word and it’s pretty easy to show the word count for a document or part of a document. Sometimes though, I’m using a different tool to create a document. Like Microsoft Excel.

I was working on a form of response that lists several skills and requires a response of less than a hundred words for each. Sounds easy? Maybe, but thirty 100-word responses are still 3000 words… and only having 100 words to detail experience can be limiting sometimes.

I needed a method to count the number of words in a cell of the spreadsheet and, as usual, I found the answer online:

=IF(LEN(TRIM(A1))=0,0,LEN(TRIM(A1))-LEN(SUBSTITUTE(A1," ",""))+1)

Basically, this counts the spaces between words by comparing the length of the trimmed string with the length of the same string with all the spaces removed, then adds 1 (n words are separated by n-1 spaces), or returns 0 if there is nothing in the cell (the TRIM function strips any leading, trailing or repeated spaces). It’s pretty crude but, assuming no hyphenated words or solidi (oblique slashes) joining words together, it will give a good enough count of the number of words in the cell. Definitely a time-saver for me…
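
Since the responses were spread down a column, a variation on the same formula should give a running total for the whole sheet (assuming the responses live in B2:B31 – adjust the range to suit):

=SUMPRODUCT(LEN(TRIM(B2:B31))-LEN(SUBSTITUTE(B2:B31," ",""))+(TRIM(B2:B31)<>""))

The comparison at the end simply adds one word per non-empty cell, doing the same job as the IF in the single-cell version.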

Bonus tip

Excel does have a spell checker – it’s just not very obvious. Just press F7 (or go to the Review menu, then choose Spelling). This only works in the desktop client – not Excel Online.

Where did my blog go? And why didn’t I have backups?

This content is 7 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Earlier this week I wrote a blog post about SQL Server. I scheduled it to go live the following lunchtime (as I often do) and then checked to see why IFTTT hadn’t created a new link in bit.ly for me to tweet (I do this manually since the demise of TwitterFeed).

To my horror, my blog had no posts. Not one. Nothing. Nada. All gone.

Where did all the blog posts go?

As I’m sure you can imagine, the loss of 13 years’ work invoked mild panic. But never mind, I have regular backups to Dropbox using the WordPress Backup to Dropbox plugin. Don’t I? Oh. It seems the last one ran in March.

History: Backup completed on Saturday March 11, 2017 at 02:24:40.

Ah.

Having done some research, it seems my backups have probably been failing since my hosting provider updated PHP on the server. My bad for not checking them more carefully. I probably hadn’t noticed because the backup was partially running (so I was seeing the odd file written to Dropbox) and I didn’t realise the job was crashing part-way through.

Luckily for me, the story has a happy ending*. My hosting provider (ascomi) kept backups (thank you Simon!). Although they found that a WordPress restore failed (maybe the source of the data loss was a database corruption), they were able to restore from a separate SQL backup. All back in a couple of hours, except for the most recent post, which I can write again one evening.

So, what about fixing those backups, Mark?

Tonight, before writing another post, I decided to have a look at the broken backups.

I’m using WordPress Backup to Dropbox v4.7.1 on WordPress 4.8, both of which are the latest versions at the time of writing. The Backup Monitor log for my last (manual) attempt at backing up read:

22:55:57: A fatal error occured: The backup is having trouble uploading files to Dropbox, it has failed 10 times and is aborting the backup.
22:55:57: Error uploading ‘/home/mwilson/public_html/blog/wp-content/plugins/wordpress-seo/images/banner/video-seo.png’ to Dropbox: unexpected parameter ‘overwrite’
22:55:56: Error uploading ‘/home/mwilson/public_html/blog/wp-content/plugins/wordpress-seo/images/banner/configuration-service.png’ to Dropbox: unexpected parameter ‘overwrite’
22:55:56: Error uploading ‘/home/mwilson/public_html/blog/wp-content/plugins/wordpress-seo/images/banner/news-seo.png’ to Dropbox: unexpected parameter ‘overwrite’
22:55:55: Error uploading ‘/home/mwilson/public_html/blog/wp-content/plugins/wordpress-seo/images/editicon.png’ to Dropbox: unexpected parameter ‘overwrite’
22:55:54: Processed 736 files. Approximately 14% complete.
22:55:54: Error uploading ‘/home/mwilson/public_html/blog/wp-content/plugins/wordpress-seo/images/link-out-icon.svg’ to Dropbox: unexpected parameter ‘overwrite’
22:55:53: Error uploading ‘/home/mwilson/public_html/blog/wp-content/plugins/wordpress-seo/images/extensions-local.png’ to Dropbox: unexpected parameter ‘overwrite’
22:55:53: Error uploading ‘/home/mwilson/public_html/blog/wp-content/plugins/wordpress-seo/images/question-mark.png’ to Dropbox: unexpected parameter ‘overwrite’
22:55:48: Processed 706 files. Approximately 13% complete.
22:55:42: Processed 654 files. Approximately 12% complete.
22:55:36: Processed 541 files. Approximately 10% complete.
22:55:30: Processed 416 files. Approximately 8% complete.
22:55:24: Processed 302 files. Approximately 6% complete.
22:55:18: Processed 170 files. Approximately 3% complete.
22:55:12: Processed 56 files. Approximately 1% complete.
22:55:10: Error uploading ‘/home/mwilson/public_html/blog/wp-content/languages/plugins/redirection-en_GB.mo’ to Dropbox: unexpected parameter ‘overwrite’
22:55:10: Error uploading ‘/home/mwilson/public_html/blog/wp-content/languages/plugins/widget-logic-en_GB.mo’ to Dropbox: unexpected parameter ‘overwrite’
22:55:09: Error uploading ‘/home/mwilson/public_html/blog/wp-content/languages/plugins/widget-logic-en_GB.po’ to Dropbox: unexpected parameter ‘overwrite’
22:55:09: Error uploading ‘/home/mwilson/public_html/blog/wp-content/languages/plugins/redirection-en_GB.po’ to Dropbox: unexpected parameter ‘overwrite’
22:55:06: SQL backup complete. Starting file backup.
22:55:06: Processed table ‘wp_yoast_seo_meta’.
22:55:06: Processed table ‘wp_yoast_seo_links’.
22:55:06: Processed table ‘wp_wpb2d_processed_files’.
22:55:06: Processed table ‘wp_wpb2d_processed_dbtables’.
22:55:06: Processed table ‘wp_wpb2d_premium_extensions’.
22:55:06: Processed table ‘wp_wpb2d_options’.
22:55:06: Processed table ‘wp_wpb2d_excluded_files’.
22:55:06: Processed table ‘wp_users’.
22:55:06: Processed table ‘wp_usermeta’.
22:55:06: Processed table ‘wp_terms’.
22:55:06: Processed table ‘wp_termmeta’.
22:55:06: Processed table ‘wp_term_taxonomy’.
22:55:06: Processed table ‘wp_term_relationships’.
22:55:05: Processed table ‘wp_redirection_logs’.
22:55:05: Processed table ‘wp_redirection_items’.
22:55:05: Processed table ‘wp_redirection_groups’.
22:55:05: Processed table ‘wp_redirection_404’.
22:55:03: Processed table ‘wp_ratings’.
22:55:03: Processed table ‘wp_posts’.
22:54:54: Processed table ‘wp_postmeta’.
22:54:49: Processed table ‘wp_options’.
22:54:48: Processed table ‘wp_links’.
22:54:48: Processed table ‘wp_feedfooter_rss_map’.
22:54:48: Processed table ‘wp_dynamic_widgets’.
22:54:48: Processed table ‘wp_comments’.
22:54:45: Processed table ‘wp_commentmeta’.
22:54:43: Processed table ‘wp_bad_behavior’.
22:54:43: Processed table ‘wp_auth0_user’.
22:54:43: Processed table ‘wp_auth0_log’.
22:54:43: Processed table ‘wp_auth0_error_logs’.
22:54:43: Starting SQL backup.
22:54:41: Your time limit is 90 seconds and your memory limit is 128M
22:54:41: Backup started on Tuesday July 25, 2017.

Hmm, a fatal error caused by an unexpected ‘overwrite’ parameter when uploading files to Dropbox… I can’t be the only one having this issue, surely?

Indeed not, as a quick Google search led me to a WordPress.org support forum post on how to tweak the WordPress Backup to Dropbox plugin for PHP 7. And, after making the following edits, I ran a successful backup:

“All paths are relative to $YOUR_SITE_DIRECTORY/wp-content/plugins/wordpress-backup-to-dropbox.

In file Dropbox/Dropbox/OAuth/Consumer/Curl.php: comment out the line:
$options[CURLOPT_SAFE_UPLOAD] = false;
(this option is no longer valid in PHP 7)

In file Dropbox/Dropbox/OAuth/Consumer/ConsumerAbstract.php: replace the test if (isset($value[0]) && $value[0] === '@') with if ($value instanceof CURLFile)

In file Dropbox/Dropbox/API.php: replace 'file' => '@' . str_replace('\\', '/', $file) . ';filename=' . $filename with 'file' => new CURLFile(str_replace('\\', '/', $file), "application/octet-stream", $filename)”

(Actually, a comment further down the post highlights that there’s a missing comma after $filename in that last edit, so it should be 'file' => new CURLFile(str_replace('\\', '/', $file), "application/octet-stream", $filename),)

So, that’s the backups fixed (thank you @smowton on WordPress.org). I just need to improve my monitoring of them to keep my blog online, and my blood pressure at sensible levels…

 

*I still have some concerns, because the data loss occurred the night after a suspected hacking attempt on my LastPass account, which seems to have been thwarted by second-factor authentication… at least LastPass say it was…

Serverless and the death of DevOps

This content is 7 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A couple of weeks back, I took a trip to London after work to attend the latest CloudCamp meet-up. It’s been a while since I last went to CloudCamp but I was intrigued by the title of the event: “Serverless and the death of DevOps?”. The death of DevOps? Surely not. Most organisations I’m working with are only just getting their heads around what DevOps is. Some are still confusing a cultural change with some tools (hey, we’ll adopt some new tools and rebrand our AppDev function as DevOps). If anything, DevOps is at the top of the hype curve; it can’t possibly be dead!

Well, five minutes into the event, after Simon Wardley (@SWardley)’s introduction, I could see where he was coming from. Mix the following up with some “Wardley Mapping” and you can see that what’s being discussed is not really the death of DevOps (as a concept where development and operations teams work in a more integrated fashion) but it may well be a new cloud computing paradigm, in the form of “serverless” computing (AWS Lambda, Azure Functions, etc.):

  • Back in the beginning of computing, systems were hard-wired (e.g. Colossus).
  • Then, they evolved and we had custom-built computing (e.g. Leo) with the concept of applications and an operating system.
  • This evolved and new products (like the IBM 650) were born with novel architectural practices, based around the concept of compute as a product.
  • These systems had a high mean time to recover (MTTR) so the architecture of the day was designed around N+1, DR tests, scaling up.
  • Evolution continued and novel architectural practices became emerging practice, then good practice. Computing became more resilient.
  • Next came frameworks. We had applications and an emerging coding practice based around these frameworks, running on an operating system using good architectural practice, all built around the concept of compute as a product (a server).
  • All was happy.
  • Then along came the cloud. Compute was no longer a product but a utility. It brought new benefits of efficiency, pooling resources, agility. Computing had new sources of worth.
  • And organisations said “make my legacy cloudy” [actually, this is as far as many have got to…].
  • Some people asked “but shouldn’t architecture evolve too?” And, after the initial cries of “burn him, heretic”, a new novel architectural practice emerged, built around a low MTTR. It took seconds to get a new virtual machine, distributed systems were designed for failure, and chaos monkeys were let loose in the environment to inject failure and prove resilience. We introduced co-evolution (which has been practised in other fields throughout history) and we called it DevOps.
  • This evolved until it became good architectural practice for the utility world and the old practices for a product world became legacy.
  • The legacy world was held back by inertia but the cloud was about user needs, measurement, automation, collaboration and fast feedback.
  • Then a new tribe began to rise up, using commodity operating systems and functions as a framework. This framework is becoming a utility, and it will move from emerging to good practice, then best practice, and “serverless” will be the future.
  • The old world will become legacy. Even the wonderful world of “DevOps”.
  • But, for now, if we say that “DevOps” is legacy, the response will be “burn him, heretic”.

So that’s the rise of serverless and the “death of DevOps”.

[Simon Wardley does a much better job of this… hopefully, there’s a video out there of him explaining the above somewhere…]

Installing youtube-dl on a Mac to watch YouTube videos when working offline

This content is 7 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

My work pattern at the moment means that I’m spending a lot of time travelling on trains up and down the country (in fact, as this post is published, I’ll be somewhere between Bedford and Sheffield). A combination of fatigue and motion sickness means that this isn’t always a good opportunity to work on the train but it is potentially an opportunity to listen to podcasts or, unlike when I’m driving, to watch some videos. Unfortunately, travelling at 125 miles an hour with a varying quality of 4G data signal doesn’t always lend itself well to streaming from YouTube, etc.

That’s where youtube-dl comes in – I can download videos to my MacBook before I leave home, and watch at my leisure on the train. So, how do you get started?

Well, the Mac App Store website helped me. Following advice there, I issued two commands in Terminal: the first to install the Homebrew package manager for MacOS, the second to install youtube-dl:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" < /dev/null 2> /dev/null

brew install youtube-dl

So, with youtube-dl installed, how do I use it? The youtube-dl readme.md file has lots of information but it’s not exactly easy to digest.

I found that:

youtube-dl -F youtubeurl

would give me a list of available video formats for a given URL, and reading about YouTube media types led me to the all-important format number 22: MP4 video at 720p with H.264 encoding and AAC audio. That should play on a wide variety of devices (including QuickTime on my Mac).

Next, to download:

youtube-dl -f 22 https://www.youtube.com/channel/UCz7bkEsygaEKpim0wu_JaUQ

(this URL is the CloudTechTV channel).

That command brought down all of the videos in the channel but I can also download individual episodes, for example:

youtube-dl -f 22 https://www.youtube.com/watch?v=ymKSGTR55LQ

I can do something similar for other YouTube videos/channels (and even for some other video services) and build a library of videos to watch on my journeys, without needing to worry about an Internet connection.
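
One refinement that helps keep that library tidy is youtube-dl’s output template (the fields are documented in the readme – the folder layout below is just an example):

youtube-dl -f 22 -o "%(uploader)s/%(title)s.%(ext)s" https://www.youtube.com/watch?v=ymKSGTR55LQ

That files each download in a folder named after the channel, relative to the current directory.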

“dotnet: command not found” after installing the Microsoft .NET Core SDK on a Mac

This content is 7 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Whilst installing the Microsoft .NET Core SDK on my MacBook last week, I found that the instructions on the Microsoft website were not quite complete.

Microsoft tells us to run a few commands to install OpenSSL:

  1. Install Homebrew (it was already on my system)
  2. Then run:

brew update
brew install openssl
mkdir -p /usr/local/lib
ln -s /usr/local/opt/openssl/lib/libcrypto.1.0.0.dylib /usr/local/lib/
ln -s /usr/local/opt/openssl/lib/libssl.1.0.0.dylib /usr/local/lib/

After this, you should be able to download and install the .NET Core SDK package (version 1.0.4 seems to be the latest version of the SDK at the time of writing, which includes .NET Core 1.0 and 1.1).

Then, in theory, running the dotnet command should be all that’s required but, for me, it resulted in an error:

-bash: dotnet: command not found

The fix, it seems, is to create another symbolic link:

ln -s /usr/local/share/dotnet/dotnet /usr/local/bin/

After that, dotnet ran as expected.
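
A quick sanity check (not part of Microsoft’s instructions, but harmless) confirms both the symbolic link and the installed SDK version:

ls -l /usr/local/bin/dotnet

dotnet --version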

For reference, my MacBook is running MacOS Sierra version 10.12.5 (16F73).

Introduction to Microsoft .NET Standard

This content is 7 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A few months ago, I was (along with about 70 other rMVPs) privileged to be on a Skype call with Scott Hanselman (@shanselman) as he gave an overview of Microsoft .NET Core. Some of what was discussed was confidential but the general overview of how .NET Core fits with the full .NET Framework and with Mono/Xamarin was a great education for a non-developer like me and it seems fine to reproduce it here for the benefit of others.

This is part 2 of a 2-part series. In part 1, I explained .NET versions and the lightweight, cross-platform Microsoft .NET Core SDK. This post moves on to look at standardising Microsoft .NET.

Where does .NET Standard fit in?

Hopefully, in part 1, I showed that .NET Core is a fantastically quick set of cross-platform development tools, but you might also have heard of .NET Standard.

Over the years, the .NET landscape has become pretty complicated (the discussion on versions barely scratches the surface) and we should really think of “.NET” as a big umbrella for marketing purposes with three main instances:

  1. Microsoft .NET Framework is really the .NET (Full) Framework – it runs on Windows clients and servers.
  2. Microsoft .NET Core is for server applications with no UI and is ideally suited to microservices, Docker containers, web apps and more. It runs on Windows, MacOS and multiple Linux distributions.
  3. Mono (led by Xamarin) is an open source cleanroom implementation of .NET – written without access to the code. Its developers looked at the interfaces/shape and made new versions, which means it has the benefit of running anywhere (e.g. “exotic places” like a Sony Playstation, or on tiny devices).

That means .NET code can, theoretically, run anywhere. Except that the versioning gets in the way… and that’s where .NET Standard fits in.

.NET Standard is a target. Not a platform. Not a runtime. It’s an “agreement”.

If that’s difficult to understand, here’s the analogy that Scott Hanselman (@shanselman) used in a recent webcast: Android developers don’t target an Android OS (e.g. v4.3), they target an API level (e.g. 15). If a new API is released that provides new capabilities, a developer can move to the new API level but it might not work on older devices.

Microsoft has something like this in Portable Class Libraries, except they are the lowest common denominator – the centre of a Venn diagram. .NET Standard is about running anywhere so that if a developer targets their application to a standard, they can be sure it will run wherever that standard is supported.

The .NET Platform Support table demonstrates this and indicates the minimum version of the platform that’s needed to implement that .NET Standard.

.NET Platforms Support table (as at July 2017)

For example, if you want to run on .NET Framework 4.5 and .NET Core 1.0, the highest .NET Standard version you can use is .NET Standard 1.1; if you want to run on Windows 8.1 you can’t go above .NET Standard 1.2; etc. The higher the version, the more APIs are available and the lower the version, the more platforms implement the standard. Developers target a framework for specific Windows APIs and target a .NET standard for portability and flexibility.
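
To make that concrete, a class library that targets the standard rather than a specific platform simply declares it in the .csproj file. A minimal sketch (the 1.4 here is only an example – pick the level from the table above that matches the platforms you need):

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard1.4</TargetFramework>
  </PropertyGroup>
</Project>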

Introduction to Microsoft .NET Core

This content is 7 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A few months ago, I was (along with about 70 other rMVPs) privileged to be on a Skype call with Scott Hanselman (@shanselman) as he gave an overview of Microsoft .NET Core. Some of what was discussed was confidential but the general overview of how .NET Core fits with the full .NET Framework and with Mono/Xamarin was a great education for a non-developer like me and it seems fine to reproduce it here for the benefit of others.

This is part 1 of a 2-part series, looking at .NET versions and the Microsoft .NET Core SDK. Tomorrow’s post moves on to look at Microsoft .NET Standard.

.NET versions

The first thing to understand is that .NET versioning is a mess [my words, not Scott’s]:

On a Windows machine with Visual Studio installed, open a command prompt and type clrver -all. Chances are you’ll see versions 2.0 and 4.0. Then cd to C:\Windows\Microsoft.NET\Framework: on my Windows 10/Office 2016 machine I can see versions 1.0.3705, 1.1.4322, 2.0.50727 and 4.0.30319.
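
For reference, those commands are as follows (clrver ships with the .NET Framework tools, so a Visual Studio developer command prompt is the easiest place to run it; the dir at the end just lists the installed framework folders):

clrver -all

cd C:\Windows\Microsoft.NET\Framework

dir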

So where is .NET 3.0? Well, 3.0 and 3.5 were the Windows Presentation Foundation (WPF) and Windows Communication Foundation (WCF), running on v2.0 of the Common Language Runtime (CLR). They are actually part of Windows!

All of this makes developing for the .NET Framework tricky. Side-by-side running for .NET 4.x happens at the CLR level – if your app is developed for 4.6 and others on the same server target 4.0, you may have a hard time convincing IT operations people to update, and chances are you’ll need to drop down to 4.0.

That (together with cross-platform capabilities) is where .NET Core comes in.

Creating simple applications with the .NET Core SDK

.NET Core is in C:\Program Files\DotNet and is implemented as a driver (dotnet.dll). With a few simple commands we can create and run a console application:

dotnet new console

dotnet restore

dotnet run

That’s three steps to Hello World!

It’s just as simple for a web application:

dotnet new web

dotnet restore

dotnet run

Then browse to http://localhost:5000.

One difference in the web app is the presence of the

<Project Sdk="Microsoft.NET.Sdk.Web">

line in the .csproj file. This signifies a “meta package” – a package of packages that avoids explicitly listing multiple package references (for cleaner code).

Unlike in the (full) .NET Framework, we can specify the version of .NET Core to use in the .csproj file, meaning that multiple versions of the SDK can be used with whatever variety of libraries are needed:

<TargetFramework>netcoreapp1.1</TargetFramework>

We can also add references to libraries with dotnet add reference libraryname. Adding a package with dotnet add package packagename not only adds it but restores, downloads and checks compatibility. Meanwhile dotnet new solution creates a solution file – something that would be complex to do manually.
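
For example (the package and project names below are only placeholders, and sln is the short template name for a solution file):

dotnet add package Newtonsoft.Json

dotnet add reference ../MyLibrary/MyLibrary.csproj

dotnet new sln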

Why revert to a CLI?

So we’ve seen that you can be a successful .NET programmer without a visual editor, and that we can build an entire project system from the command line. .NET Core is not just about involving the Linux community with a cross-platform version of .NET, but also about speed: Scott used an analogy of thinking of the CLI as creating a “2D” version to validate before working on the full “3D” GUI version.

Under the covers, the .NET Core SDK uses the NuGet package manager (dotnet add package), MSBuild (dotnet build) and VSTest (dotnet new xunit and dotnet test).
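
Putting those together, a minimal test loop from the command line looks something like this (run inside an empty project folder):

dotnet new xunit

dotnet restore

dotnet test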

Visual Studio gives a better user interface but we can develop the basics very quickly using a command line and then get the best of both worlds – both CLI and GUI. What .NET Core does is lower the barrier to entry – getting up and running writing Microsoft.NET code in just a few minutes, for Windows, MacOS or Linux.

Bose SoundLink Mini II speakers turn off at low volume levels

This content is 7 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Listening to music a couple of nights ago (streamed from Spotify on my MacBook, though I’m not sure how relevant that is), I found that my Bose SoundLink Mini II speakers kept turning off after 5 minutes (running on battery power, connected with a cable). Spotify kept playing but the sound stopped until I turned the speakers off/on again.

I hadn’t seen this issue before – and I was using the same 3.5mm AUX cable setup that I often use with our small TV (to improve its sound quality), so I hit the interwebs to see what I could find…

Some hunting around suggests that the issue may have been the low volume level on my MacBook (I was in the room directly under my youngest son’s bedroom, after bedtime).

“The Speaker does have a power-save mode, but it will generally only enter this when no audio is detected. The most likely explanation here is that the speaker is getting a very weak signal […] and boosting it enormously with its internal amp.

[…]
If you are using a headphone jack or similar […], try increasing the […] output level while turning the speaker’s volume down. This should provide a stronger signal on the AUX port which would prevent the speaker from sleeping automatically.”

Sure enough, increasing the volume on the MacBook to around level 4-5 and decreasing the volume on the speakers seems to stop the power-down. Indeed, to make sure this was the case, I turned the MacBook’s volume back down to 1 and waited for the music to cut out… then, when it did, I just increased the volume to around level 4-5 again and the speakers came alive!

On a related note… I stumbled across these Spotify tips and tricks that might be useful…

Reducing the time taken for a Garmin Edge 25 to find a satellite signal

This content is 7 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Regular readers will know that cycling is one of my hobbies (with my eldest son looking like he may follow in the same direction…).

I have a Garmin Edge 810 cycle computer to use on my big rides but for commutes (e.g. on the Brompton) I use a smaller unit – an Edge 25. The Edge 25 is a cracking little unit, with all the basic functionality I’d expect and Bluetooth connectivity, but one of the issues I’ve found is that it can be slow to pick up a GPS signal.

I think I may have made a breakthrough though, thanks to a comment in this review from Average Joe Cyclist:

“Satellite Acquisition on the Garmin Edge 25
The Garmin Edge 25 can connect to both GPS and GLONASS satellites. As it has more satellites to choose from, it can lock in faster. I know that Garmin Edge bike computers with only GPS can be frustratingly slow to lock in, so this is important. It was a very happy surprise to find GLONASS on such a relatively cheap bike computer as the Garmin Edge 25. This is obviously a huge selling point for this tiny bike computer.

Note: GPS and GLONASS are different kinds of satellite systems – the GPS was developed by the USA, and the GLONASS is Russian.”

Sure enough, I checked my settings and GLONASS was off. So I turned it on and limited testing suggests that it may now be faster to pick up a satellite. Time will tell, as will experience with the second Edge 25 that’s in the post for my son to use…

Some more reading suggests that using GLONASS and GPS together may affect battery life but could also improve accuracy. If satellite lock-in is still slow, then a master reset may be required. To reset the Edge 25:

  1. Power on the device whilst holding the two right-side buttons down.
  2. Release the top button when you hear the first beep.
  3. Release the bottom button when you hear the second beep.

I also upgraded the firmware (unfortunately breaking the rule of only changing one thing at a time when troubleshooting tech…), which got me thinking “what firmware did I have before?”. It seems the way to tell is to view an activity in Garmin Connect, where the details of the device used to upload the data – including the software version – are shown on the right-hand side.