Quantum Computing 101

There’s been a lot of buzz around quantum computing over the last year or so, and there seems little doubt that it will provide the next major step forward in computing power. But it’s still largely theoretical – you can’t buy a quantum computer today. So, what does it really mean… and why should we care?

Today’s computers are binary. The transistors (tiny switches) that are contained in microchips are either off (0) or on (1) – just like a light switch. Quantum computing is based on entirely new principles. And quantum mechanics is difficult to understand – it’s counterintuitive – it’s weird. So let’s look at some of the basic concepts:

Superposition
Superposition is a concept whereby, instead of a state being on or off, it’s on and off. At the same time. And it’s everything in the middle as well. Think of it as a scale from 0 to 1 and all the numbers in-between.

Qubit
A quantum bit (qubit) exploits superposition so that, instead of working through possibilities sequentially, we can compute on many states in parallel.

More qubits are not necessarily better (although there is a qubit race taking place in the media)… the challenge is not about creating more qubits but better qubits, with better error correction.

Error correction
Particles like electrons have a charge and a spin, so they point in a certain direction. Noise from other electrons makes them wiggle, so that the information in one leaks into others – and that makes long calculations difficult. This is one of the reasons that quantum computers run at low temperatures.

Greek dancers hold their neighbour so that they move as one. One approach in quantum computing is to do the same with electrons so that only those at the end have freedom of motion – a concept called electron fractionalisation. This creates a robust building block for a qubit, one that is more like Lego (locking together) than a house of cards (loosely stacked).

Different teams of researchers are using different approaches to solve the error correction problem, so not everyone’s qubits are equal! One approach is to use topological qubits for reliable computation, storage and scaling. Just like Inca quipus (a system of knots and braids used to encode information so it couldn’t be washed away, unlike chalk marks), topological qubits can braid information and create patterns in code.

Exponential scaling
Once the error correction issue is solved, scaling is where the massive power of quantum computing can be unleashed.

A 4 bit classical computer has 16 configurations of 0s and 1s but can only exist in one of these states at any time. A quantum register of 4 qubits can be in all 16 states at the same time and compute on all of them at the same time!

Every n interacting qubits can handle 2^n bits of information in parallel (there’s a quick arithmetic check after the list below), so:

  • 10 qubits = 1024 classical bits (1KiB)
  • 20 qubits = 1MB
  • 30 qubits = 1GB
  • 40 qubits = 1TB
  • etc.
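
That doubling is easy to sanity-check – this trivial PowerShell snippet (purely illustrative) just evaluates 2^n for the qubit counts above:

  # Purely illustrative: evaluate 2^n for a few qubit counts
  foreach ($n in 10, 20, 30, 40) {
      "{0,2} qubits -> {1:N0} simultaneous states" -f $n, [math]::Pow(2, $n)
  }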

This means that the computational power of a quantum computer is potentially huge.

What sort of problems need quantum computing?

We won’t be using quantum computers for general personal computing any time soon – Moore’s Law is doing just fine there – but there are a number of areas where quantum computing is better suited than classical computing approaches.

We can potentially use the massive quantum computing power to solve problems like:

  • Cryptography (making it more secure – a quantum computer could break the RSA 2048 algorithm that underpins much of today’s online commerce in around 100 seconds – so we need new models).
  • Quantum chemistry and materials science (nitrogen fixation, carbon capture, etc.).
  • Machine learning (faster training of models – quantum computing as a “co-processor” for AI).
  • and other intractable problems that are supercompute-constrained (improved medicines, etc.).

A universal programmable quantum computer

Microsoft is trying to create a universal programmable quantum computer – the whole stack – and they’re already pretty advanced.

Quantum computing may sound like the technology of tomorrow but the tools are available to develop and test algorithms today and some sources are reporting that a quantum computing capability in Azure could be just 5 years away.

Booting a Microsoft Surface Pro 3 with a broken screen

A few months ago, the Microsoft Surface Pro 3 that I use for work took a knock at one corner and developed a crack across the screen. I was gutted – I’d really looked after the device and, even though it was approaching three years old (and running like a dog), it was likely I’d be using it for a while longer. I could have swapped it for a conventional Dell laptop but I like to use the Surface Pen when I’m consulting. And now it was broken and beyond economic repair (Microsoft are currently quoting £492+VAT for a screen replacement!).

The screen still functioned as a display but the crack was generating false inputs that made both the Surface Firmware and Windows 10 think that I was touching the screen. That was “fighting” with the trackpad or a mouse, meaning that the device was very difficult to control (almost impossible).

I managed to get it up and running and to log on (just about) so that my support team could remote control the device and disable touch for me. The image below shows the two components that needed to be disabled in Device Manager (Surface Pro Touch Controller Firmware and HID-compliant touch screen):

Windows Device Manager, showing disabled devices to work around issues with a broken screen
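
If you can get to an elevated PowerShell prompt, something like this sketch should achieve the same result as clicking through Device Manager (the friendly names below are those shown in the screenshot – confirm your own with Get-PnpDevice first):

  # List likely touch-related devices to confirm the exact names
  Get-PnpDevice | Where-Object { $_.FriendlyName -match 'touch' }

  # Disable the sources of false input (run PowerShell as Administrator)
  Get-PnpDevice -FriendlyName 'HID-compliant touch screen' | Disable-PnpDevice -Confirm:$false
  Get-PnpDevice -FriendlyName 'Surface Pro Touch Controller Firmware' | Disable-PnpDevice -Confirm:$false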

The biggest problem was booting the device in the first place though – it would load to the Surface splash screen and then stay there. Presumably, the firmware had detected a problem but the hardware hadn’t actually failed, so there was no error message and no successful boot.

Then I found a forum post that gave me the answer:

  1. Hold Power and Volume Up together until the Surface splash screen appears, then let go of the power button.
  2. When presented with the UEFI menu, press ESC to exit.
  3. Press Enter to confirm that you want to quit without saving.
  4. At this point, you’ll see an underscore (_) cursor. Be patient.
  5. After a few seconds, the BitLocker screen will appear, after which the PIN can be entered and the device boots into Windows.

It’s a bit of a faff, but it’s worked for me for the last few weeks. Just before I handed in the broken device (for a replacement with a functioning screen), I recorded this video in my hotel room – it may come in handy for someone…

How to stay current with Windows as a Service and Office 365 ProPlus

For many organisations, particularly those at “enterprise” scale, Windows and Office have tended to be updated infrequently, usually as major projects with associated capital expenditure. Meanwhile, operational IT functions that manage “business as usual” often avoid change because that change brings risks around the introduction of new technology that may have consequential effects. This approach is becoming increasingly untenable in a world of regular updates to software sold on a subscription basis.

This post looks at the impact of regularly updating Windows and Office in an organisation, and at how we need to modify our approach to reflect the world of Windows as a Service and “evergreen” Office 365.

Why do we need to stay current?

A good question. After all, surely if Windows and Office are working as required then there’s no need to change anything, is there? Unfortunately, things aren’t that simple and there are benefits of remaining current for many business stakeholders:

  • For the CIO: improved management, performance, stability and support for the latest hardware.
  • For the CSO: enhanced security against modern threats and zero-day attacks.
  • For end users: access to the latest features and capabilities for better productivity and creativity.

Every Windows release evolves the operating system architecture to better defend against attacks – not just patching! And Windows and Office updates support new ways of working: inking, voice control, improved navigation, etc.

So, updates are good – right?

How often do I need to update?

We’re no longer in a world of 5+5 years (mainstream+extended) support. Microsoft has publicly stated its intention to ship two feature updates to Windows each year (in Spring and Autumn). The latest of these is Windows 10 1803 (also known as Redstone 4), which actually shipped in April. Expect the next one in/around September 2018 (1809). Internally to Microsoft, there are new builds daily; and even publicly there are “Insider” Preview builds for evaluation.

That means that we need to stop thinking about Windows feature updates as projects and start thinking about them as process – i.e. make updating Windows (and Office, and supporting infrastructure) part of the business as usual norm.

OK, but what if I don’t update?

Put simply, if you choose not to stay up-to-date, you’ll build up a problem for later. The point of having predictable releases is that it should help with planning.

But each release is only supported for 18 months. That means that you need to be thinking about getting users on n-2 releases updated before it gets too close to their end of support. Today, that means:

  • Running 1703, take action to update.
  • Running 1709, plan to update.
  • Running 1803, trailblazer!

We’re no longer looking at major updates every 3-5 years; instead an approach of continuous service improvement is required. This lessens the impact of each change.

So that’s Windows, what about Office?

For those using Office 365 ProPlus (i.e. licensing the latest versions of the Office applications through an Office 365 subscription), Windows and Office updates are aligned – not to the day, but to the Spring and Autumn cadence.

So, keep Office updated in line with Windows and you should be in a good place. Build a process that gives confidence and trust to move the two at the same time… the traditional approach of deploying Windows and Office separately often comes down to testing and deployment processes.

What about my deployment tools? Will they support the latest updates?

According to Microsoft, there are more than 100 million devices managed with System Center Configuration Manager (SCCM) and SCCM also needs to be kept up-to-date to support upcoming releases.

SCCM releases are not every 6 months – they come every 4 months or so – and the intention is for each SCCM release to support the next version of Windows/Office ahead of its availability.

Again, start to prepare as early as possible – and think of this as a process, not a project. Deploy first to a limited set of users, then push more broadly.

Why has Microsoft made us work this way?

The world has changed. With Office existing on multiple platforms, and systems under constant threat of attack from those who wish to steal our data (and money), it’s become necessary to move from a major update every 3-5 years to a continuous cycle that executes every few months – providing high levels of stability and access to the latest features/functionality.

Across Windows, Office, Azure and System Center Microsoft is continually improving security, reliability and performance whilst integrating cloud services to add functionality and to simplify the process of staying current.

How can I move from managing updates as a project to making it part of the process?

As mentioned previously, adopting Windows as a Service involves a cultural shift from periodic projects to a regular process.

Organisations need to be continually planning and preparing for the next update using Insider Preview to understand the impact of upcoming changes and the potential provided by new features, including any training needs.

Applications, devices and infrastructure can be tested using targeted pilot deployments and then, once the update is generally available and known to work in the environment, a broader deployment can be instigated.

Aim to deploy to users following the model below for each stage:

  • Plan and prepare: 1%.
  • Targeted deployment: 9%.
  • Broad deployment: 90%.

Remember, this is about feature updates, not a new version of Windows. The underlying architecture will evolve over time but Windows as a Service is about smaller, incremental change rather than the big step changes we’ve seen in the past.

But what about testing applications with each new release of Windows?

Of course, applications need to be tested against new releases – and there will be dependencies on support from other vendors too – but it’s important that the flow of releases should not be held up by application testing. If you test every application before updating Windows, it will be difficult to hit the rollout cadence. Instead, proactively assess which applications are used by the majority of users and address these first. Aim to move 80-90% of users to the latest release(s) and reactively address issues with the remaining apps (maybe using a succession of mini-pilots) but don’t stop the process because there are still a few apps to get ready!

You can also use alternative deployment methods (such as virtualised applications or published applications) to work around compatibility issues.

It’s worth noting that most Windows 7-compatible apps will be compatible with Windows 10: the same Win32 application platform and driver servicing model are used. Some device drivers may not exist for Windows 10, but most do, and the availability of drivers and firmware through Windows Update has improved. BIOS support is getting better too.

In addition, there are around a million applications registered in the Ready For Windows database, which can be used for spot-checking ISVs’ Windows 10 support for each application and its prevalence in the wild.

New cloud-enabled capabilities to guide your Windows 10 deployment

Windows Analytics is a cloud-based set of services that collects information from within Windows and provides actionable information to proactively improve your Windows (and Office) environment.

Using Azure Log Analytics, Windows Analytics can advise on:

  • Readiness (Windows 10 Professional): planning and addressing actions for upgrade from Windows 7 and 8.1 as well as Windows 10 feature updates.
  • Compliance (Windows 10 Professional): for regular (monthly) updates.
  • Device health (Windows 10 Professional and Enterprise): assessing issues across estate (e.g. problematic device drivers).

OK, so I understand why I need to continuously update Windows, but how do I do it?

Microsoft recommends using a system of deployment rings (which might be implemented as groups in SCCM) to roll out to users in the 1% (Insider), 9% (Pilot) and 90% (Broad) deployments mentioned above. This approach allows for a consistent but controllable rollout.

Peer-to-peer download technologies embedded in Windows will minimise network usage, recent versions support express updates (only downloading deltas), and the impact on users can be minimised through scheduling.

When it comes to tools, there are a few options available:

  • Windows Update is the same service used by consumers, downloading updates at the rate governed by Microsoft.
  • Windows Update for Business is a version of Windows Update that allows an organisation to control its release schedule and set up deployment rings without any infrastructure (a configuration sketch follows this list).
  • Windows Server Update Services (WSUS) allows feature updates to be deployed when approved, and BranchCache can be used to minimise network impact.
  • Finally, SCCM can work with WSUS and offers Task Sequences, etc. to provide greater control over deployment.
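
To give a flavour of Windows Update for Business, here’s a rough sketch of setting a feature update deferral directly in the registry – I’m assuming the policy value names documented for recent Windows 10 releases, and in production you’d set these via Group Policy or MDM rather than scripting them:

  # Sketch: defer feature updates by 90 days, e.g. for a broad deployment ring
  # (in production, set via Group Policy/MDM rather than direct registry edits)
  $key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'
  New-Item -Path $key -Force | Out-Null
  Set-ItemProperty -Path $key -Name 'DeferFeatureUpdates' -Value 1 -Type DWord
  Set-ItemProperty -Path $key -Name 'DeferFeatureUpdatesPeriodInDays' -Value 90 -Type DWord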

What about the normal “Patch Tuesday” updates?

Twice-annual feature updates don’t replace the need to patch more regularly and Microsoft continues to release cumulative updates each month to resolve security and quality issues.

In effect, we should receive one feature update and then five quality updates in each cycle.


The contents of this post are based on a webcast delivered by Bruno Nowak (@BrunoNowak), Director of Product Marketing (Microsoft 365) at Microsoft.

Weeknote 16: Anonymous? (Week 17, 2018)

This week has been another one split between two end-user computing projects – one at the strategy/business case stage and another that’s slowly rolling out and proving that the main constraint on any project is the business’s ability to cope with the change.

I can’t say it’s all enjoyable at the moment – indeed, I had to apply a great deal of restraint not to respond to lengthy email threads asking “why aren’t we doing it this way?”… but the inefficiencies of email are another subject, for another day.

So, instead of a recap of the week’s activities, I’ll focus on some experiences I’ve had recently with “anonymous” surveys. I’m generally quite cynical of these because if I have to log on to the platform to provide a response then it’s not truly anonymous – a point I highlighted to my colleagues in HR who ask a weekly “pulse” question. “It’s not on your record”, I was told – yet progress is logged against me (tasks due, tasks completed, etc.) and only accessible when I’m logged in to the HR system. It’s the same for SharePoint surveys – if I need to use my Active Directory credentials, then it’s not anonymous.

I’m approaching my third anniversary at risual and I picked up an idea for soliciting feedback (for my annual review) from colleagues, partners and customers from my colleague James Connolly, who has been using a survey tool for a couple of years now. Rather than use one of the tools on the wider Internet, like SurveyMonkey or Typeform, I decided to try Microsoft Forms – which is a newish Office 365 capability. It was really simple to create a form (and to make it anonymous, once I worked out how) but what I’ve been most impressed with is the reporting, with the ability to export all responses to Excel for analysis, or to view either an aggregated view of responses or the detail of each individual response within Microsoft Forms.

I took pains to make sure that the form was truly anonymous – not requiring logon – though I did invite people to leave their name if they were happy for me to contact them about their responses. Even so, with a sample size of around 50 people invited to complete the form and a 50% response rate, I can take a guess at who some of the responses are from. By the same token, there are others where I wish I knew who wrote the feedback so I could ask them to elaborate some more!

I won’t be doing anything with the results, except saying “this is what my colleagues and customers think of me and this is where I need to improve”, but it does reinforce my thinking that very little in life is truly anonymous.

Next week includes a speaking gig at a Microsoft Modern Workplace popup event (though I’m not entirely comfortable with the demonstrations), more Windows 10 device rollouts and maybe, just maybe, some time to write some blog posts that aren’t just about my week…

Weeknote 9: SharePoint as a CMS, with a little Power BI to help visualise dynamic data (Week 6, 2018)

2018 is flying by but the last couple of weeks have been exciting. After a period of working on short-term engagements (which can be a challenge at times), I’ve landed myself a gig on a decent sized Modern Workplace project that’s going to keep me (and a lot of other people) busy for the next few months. Unfortunately, I can only devote 50% of my time to it for a couple of weeks as I need to clear a few other things out of the way but that will all change soon.

One of those “things” is a project I’ve been working on to provide supplementary information to operators in a part of the critical national infrastructure (I wish I could be less cryptic but I can’t just yet – I hope that maybe one day we can create a case study…). It’s replacing a bespoke system with one built using commercial off-the-shelf (COTS) products, with a little customisation – and it’s been my first “software” project (cf. infrastructure-led engagements).

Basically, we’re using SharePoint as a content management system, receiving both static and dynamic data (the latter via a service bus) that needs to be displayed to operators.

All of the data is stored in SharePoint lists and libraries and then presented to a browser running in kiosk mode. The page layouts then use web parts to either display data natively, or we use Power BI Report Server (this solution runs on-premises) to create visualisations that we embed inside SharePoint.

And, because the service bus isn’t available yet, we had to demo the dynamic data arriving using another tool… in this case, SoapUI populating SharePoint using its REST/OData API.
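
For what it’s worth, PowerShell can do the same job as SoapUI here. This is a rough sketch of creating a SharePoint list item over REST – the site URL, list name and item type are placeholders, and on-premises Windows authentication is assumed:

  # Sketch: create a SharePoint list item via the REST/OData API
  $site = 'https://sharepoint.example.com/sites/ops'   # placeholder site
  $headers = @{ Accept = 'application/json;odata=verbose' }

  # POST requests need a form digest, obtained from the contextinfo endpoint
  $context = Invoke-RestMethod -Uri "$site/_api/contextinfo" -Method Post -Headers $headers -UseDefaultCredentials
  $digest = $context.d.GetContextWebInformation.FormDigestValue

  $body = @{
      '__metadata' = @{ type = 'SP.Data.OperationalDataListItem' }   # hypothetical type name
      Title        = 'Dynamic data test'
  } | ConvertTo-Json

  Invoke-RestMethod -Uri "$site/_api/web/lists/getbytitle('Operational Data')/items" `
      -Method Post -Body $body -ContentType 'application/json;odata=verbose' `
      -Headers ($headers + @{ 'X-RequestDigest' = $digest }) -UseDefaultCredentials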

It’s been an interesting project, not just because I’ve had to step back and focus on just the architecture (leaving others to work on the detail) but because it’s been software-led. I must admit I was nervous hearing status reports from the team about the page layouts they had created, or the web parts they were scripting – thinking “but didn’t you do that last week?” – but, once I saw it come together into something tangible, I was really impressed.

Yesterday was our first opportunity to demonstrate the system to our stakeholders and the initial feedback is positive, so that’s a really big tick in the box. Now we need to document the solution and get it production-ready, before progressing from what’s currently just a framework to something of real value.

Next week will be very different: I’m taking most of half term off work but Monday is the bi-annual risual summit, and I’m responsible for the Technology Track again.

Before then, it’s a weekend of kids football and cycling, plus Six Nations and Winter Olympics on TV. So I’m signing off now to (hopefully) watch Wales beat England at Twickenham!

UK Government Protective Marking and the Microsoft Cloud

I recently heard a Consultant from another Microsoft partner talking about storing “IL3” information in Azure. That rang alarm bells with me, because Impact Levels (ILs) haven’t been a “thing” for UK Government data since April 2014. For the record, here’s the official guidance on the UK Government data security classifications and this video explains why the system was changed:

Meanwhile, this one is a good example of what it means in practice:

So, what does that mean for storing data in Azure, Dynamics 365 and Office 365? Basically, information classified OFFICIAL can be stored in the Microsoft Cloud – for more information, refer to the Microsoft Trust Center. And, because OFFICIAL-SENSITIVE is not another classification (it’s merely highlighting information where additional care may be needed), that’s fine too.

I’ve worked with many UK Government organisations (local/regional, and central) and most are looking to the cloud as a means to reduce costs and improve services. The fact that more than 90% of public data is classified OFFICIAL (indeed, that’s the default for anything in Government) is no reason to avoid using the cloud.

Microsoft SQL Server overview

I wrote this post a few months ago… and it crashed my blog. Gone. Needed to be restored from backup…

…hopefully this time I’ll have more luck!

One of the advantages of being in the MVP Reconnect programme is that I occasionally get invited to webcasts that open my eyes to technology I’ve not had a lot to do with previously. For many years, one of the big holes in my knowledge was around Microsoft SQL Server. That was until I saw Brian Kelley (@kbriankelley)’s “Brief overview of SQL Server”. The content’s not restricted, so I thought I’d republish some of it here for others who are getting their head around the major on-premises components of the Microsoft Data Platform.

SQL Server Editions

There are several editions of SQL Server available and these are the key differences (updated for 2017):

  • Express Edition (previously known as MSDE) is a free version, with some limitations around database size, etc.
  • Standard Edition lacks some enterprise features but has high availability and suits many application workloads.
  • Enterprise Edition is the full-functionality product (but can be expensive).
  • Developer Edition (not licensed for use in production) offers the full feature set but can also run on a client operating system, whereas Enterprise Edition will only run on server operating systems.
  • Web Edition has reduced functionality and is intended for public websites (only available to service providers).
  • Compact Edition is another free version, intended for embedded databases in ASP.NET websites and Windows desktop applications.

Although SQL Server is often thought of as an RDBMS product, it’s really a suite of systems, under the SQL Server name. Usually that means the database engine but there are many parts, each of which has a distinct setup (i.e. you don’t need the database service for SQL Server Analysis Services and vice versa).

SQL Server Analysis Services (SSAS)

SSAS (since 2007) is an online analytical processing (OLAP) tool intended for data warehousing and data mining.

One advantage of OLAP is the ability to run overnight jobs that pre-generate calculations used for roll-ups (totals, averages, etc.). It can provide fast results to business users who would otherwise need complex calculations in a transactional system (e.g. sales data by region, month, quarter, etc. can be computed ahead of time).

SSAS is comparable to IBM Cognos or Oracle Essbase (normally packaged with Hyperion for accounting, etc.).

Some SSAS jargon includes:

  • Star schema/snowflake schema – data warehouse database designs that differ from transactional designs. These can be built in an RDBMS, with SSAS used on top.
  • Cubes
  • Dimensions
  • Tabular model
  • Data analysis expressions (DAX) – the expression language used to work in SSAS.

SQL Server Integration Services (SSIS)

SSIS (since 2005) is heavily used for extract, transform and load (ETL) workloads – i.e. to get data from a source, manipulate it and pass it to a destination. It can be used to build a data warehouse, then data marts or to move data between systems. Basically, it’s a back-end batch processing system that performs the data mining.

SSIS is a replacement for Data Transformation Services (DTS). It’s not limited to SQL Server for source/destination so can talk to Oracle, Excel spreadsheets, other ODBC connections, etc.

The drag-and-drop interface is very powerful, with the full functionality and flexibility of Microsoft .NET behind it.

SSIS is comparable with Informatica (or Clover, etc.).
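
As an aside, packages are usually scheduled via the SQL Server Agent once built, but they can also be run from the command line with the dtexec utility that ships with SSIS – a quick sketch (the path is a placeholder):

  # Sketch: run an SSIS package from PowerShell using dtexec
  & dtexec /File 'C:\Packages\LoadDataWarehouse.dtsx'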

Some SSIS jargon includes:

  • Packages (the unit of processing, containing all of the logic)
  • Tasks (what’s being carried out)
  • Dataflow tasks (how you go from source to destination – there could be multiples)
  • Transformations (manipulating data)
  • Business Intelligence Markup Language (BIML)

SQL Server Reporting Services (SSRS)

SSRS was introduced in 2005 and became so popular that it was ported back to SQL Server 2000!
It is a reporting engine, used to publish reports in-browser. Early versions were built on IIS but, since 2008, SSRS has run directly on http.sys.

SSRS can be integrated with SharePoint (for report security based on SharePoint security), or it can run in native, standalone mode, where a browser is used to look at folders, find reports, and run a report with parameters. It used to print via an ActiveX control but now (since 2016) prints to PDF (or opens in a PDF reader).

There are two ways to build reports: Report Builder (a simpler interface for BA-type power users) or Report Designer (a full product for complex designs). There is also a subscription capability, so users can subscribe to reports.
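
Reports can also be rendered programmatically via SSRS URL access – a rough sketch from PowerShell (server and report paths are placeholders):

  # Sketch: render a report to PDF using SSRS URL access (native mode)
  Invoke-WebRequest -UseDefaultCredentials -OutFile 'C:\Temp\MonthlySales.pdf' `
      -Uri 'http://reports01/ReportServer?/Sales/MonthlySales&rs:Format=PDF'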

SSRS can be compared with SAP BusinessObjects and Tableau.

SSRS jargon includes:

  • Reports
  • Data sources
  • Datasets
  • ReportServer (API to integrate with other products)
  • Native mode vs. integrated mode (SharePoint)

SQL Server Database Engine

The SQL Server database engine is what most people think of when SQL Server is mentioned.
It is traditionally a relational database management system (RDBMS) although it now contains many other database capabilities. It was originally derived from a Sybase product (until SQL Server 6.5).

SQL Server supports both multiple databases per instance (which queries can connect to and join across) and, since SQL Server 2000, multiple instances per server – the first is a default instance, then named instances can be created.

SQL Server uses a SQL language variant called T-SQL to interact with the engine. A GUI is provided in SQL Server Management Studio, but it’s also possible to interact via PowerShell.
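
For example, a quick query from PowerShell might look like this (a sketch, assuming the SqlServer module is installed and substituting your own instance name):

  # Sketch: query the database engine from PowerShell
  Import-Module SqlServer   # Install-Module SqlServer, if not already present
  Invoke-Sqlcmd -ServerInstance 'SQL01\INSTANCE1' -Database master `
      -Query 'SELECT name, database_id FROM sys.databases ORDER BY name;'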

SQL Server also has a scheduler (the SQL Server Agent), which can alert on success/failure and allows the creation of elaborate scheduling routines with notifications and the ability to run code. The database engine is comparable with IBM DB2, Oracle, PostgreSQL, Sybase, MySQL and MariaDB.

SQL Server 2016 features include:

  • High availability options, including Always On failover clusters; Always On availability groups (which are more flexible because they don’t have to replicate and fail over everything); Database mirroring (one database on multiple systems; deprecated now in favour of availability groups); log shipping.
  • Several encryption options, including built-in (certificate, asymmetric keys, symmetric keys); Enterprise Edition also has Transparent Data Encryption (TDE) to encrypt the database at rest and stop copies of the database from being loaded elsewhere; connection encryption (SSL/TLS since 2005); and Always Encrypted, new for 2016 (transparent to the application and to SQL Server), where data is stored in encrypted form within the database. A TDE example follows this list.
  • SQL Server and Windows authentication (server-based logins or Active Directory). Logins can be Windows-only or mixed mode, but not SQL Server-only.
  • Replication options to move data between servers.
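
To illustrate one of those encryption options, here’s a hedged sketch of enabling TDE with Invoke-Sqlcmd – the instance, database, certificate and password names are all placeholders:

  # Sketch: enable Transparent Data Encryption (Enterprise Edition) on a database
  $masterSql = "CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Pl@ceholder-Passw0rd!'; " +
               "CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';"
  Invoke-Sqlcmd -ServerInstance 'SQL01' -Database master -Query $masterSql

  $dbSql = 'CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256 ' +
           'ENCRYPTION BY SERVER CERTIFICATE TdeCert;'
  Invoke-Sqlcmd -ServerInstance 'SQL01' -Database MyDatabase -Query $dbSql
  Invoke-Sqlcmd -ServerInstance 'SQL01' -Query 'ALTER DATABASE MyDatabase SET ENCRYPTION ON;'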

Other security features include audit objects (who did what?); granular security permissions; login auditing (failed logins are written to the SQL Server error log text file and to the application event log); dynamic data masking (showing only part of the data depending on who needs to see it – e.g. storing social security numbers but displaying only a portion; this is obfuscation only, as the data is still held in clear text); and row-level security (to filter rows).

Each new version brings performance enhancements, e.g. columnstore indexes, in-memory OLTP tables, query optimisation.

New Technologies in 2016 include:

  • JSON support: query and return data in JSON format. SOAP and XML have been available since 2005 but are now deprecated in favour of JSON (which is popular for RESTful systems).
  • Master data services.
  • Polybase (not to be confused with a clustering solution – it’s about talking to other data sources, e.g. Hadoop, Cloudera and Azure storage, to be expanded to include Oracle, Teradata, Mongo, Spark and more).
  • R Services/R Server (R within the database and also R Server for data science/big data queries).

2017 builds on 2016 to include:

  • Linux and Docker support. Starting with SQL Server 2017, SQL Server is available for either Windows or Linux systems and it’s available as an installable application or for Docker containers.
  • SQL Server R Services has been renamed SQL Server Machine Learning Services, to reflect support for Python in addition to R.

There are many more features in the Microsoft documentation but these are the most significant updates.

But what about the cloud?

This post provided a quick run-down of some of the major on-premises SQL Server components but, just as with Microsoft’s other products, there are cloud alternatives too. I’m planning a follow-up post to cover these so watch this space!

Some tips from my first few weeks with a GoPro Hero action camera

I’ve been interested in having a play with an action camera for a while now. I figure I can get some fun footage on the bikes, as well as skiing next winter, and I missed having a waterproof camera when I was lake-swimming in Switzerland a few weeks ago!

So, when I saw that a contact who had upgraded to the Hero 5 was selling his GoPro Hero 3 Silver Edition, I jumped at the opportunity.

My camera came to me with quite a few accessories and I picked up some more for not too much money at HobbyKing (shipped from China in 3 weeks – don’t pay GoPro prices for things like a tripod mount or a lens cover!).

Whilst getting used to the camera’s controls (oh yes, and opening the waterproof case for the first time), I came across some useful tips on the ‘net… including loads of videos from a guy called Bryn, whose new user’s guide was useful to make sure I had everything set up as I needed:

Once I had everything set up and a fast 64GB card installed, my first outing with the GoPro was on a bike, helmet-mounted. That was OK, but it’s a bit weird having all that weight on your head, and it’s also not too handy for working out whether the camera is running. Since then, I’ve got a bike mount and the GoPro now sits below the stem, which means that technically it’s upside-down.

No worries – the Internet delivered another video telling me how to set the camera up for upside down recording:

One thing to watch out for is the battery life – don’t expect to be filling your memory card on a single battery – but it should last a while. It’s just that a GoPro isn’t going to work as a DashCam or similar (there are actually some good articles on the ‘net as to why you would probably want to use a specialist dashcam anyway – I have a NextBase 402G for that). Anyway, I don’t want to have to edit hours of footage so knowing I can only record a few minutes at a time is good for me (I have hours of recordings on MiniDV digital tape that have been waiting to be transferred to disk for years!).

I did recently use the GoPro to record some presentations at work: great for a wide-angle view – but it got pretty warm being plugged into a power source the whole time (so again, a proper video camera would be the right thing to use – and don’t think about using a DSLR or a compact camera – I tried that too and they generally switch off after 20-30 minutes to prevent overheating). One thing I found is that each video recorded on the GoPro is chopped into chunks of around 3.55GB (I was recording 1080p). The file naming is worth getting used to.

Each recording uses the same four-digit number (0001, 0002, etc.), but you’ll find that the first file is named GOPR0001.MP4, the next GP010001.MP4, then GP020001.MP4, etc. So, when selecting a group of files that relate to the same recording, look carefully at the index numbers (the date and time stamps should help too).
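
A little PowerShell can help group the chapters of each recording together (the card path is a placeholder; the naming pattern is as described above):

  # Sketch: group chaptered GoPro files by their four-digit recording index
  Get-ChildItem -Path 'E:\DCIM\100GOPRO' -Filter '*.MP4' |
      Group-Object { $_.BaseName.Substring(4) } |
      ForEach-Object { "Recording $($_.Name): $($_.Group.Name -join ', ')" }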

Also, depending on how you import the videos (i.e. copying directly rather than using an application like MacOS Image Capture), you may see some .THM and .LRV files. The GoPro support site explains that these are thumbnail and low-resolution video files respectively.

So, that’s a few things I’ve discovered over the last few weeks and just a little bit of GoPro tinkering. Please leave a comment if you’ve anything more to add!

Seven technology trends to watch 2017-2020

Just over a week ago, risual held its bi-annual summit at the risual HQ in Stafford – the whole company back in the office for a day of learning with a new format: a mini-conference called risual:NXT.

I was given the task of running the technical track – with 6 speakers presenting on a variety of topics covering all of our technical practices: Cloud Infrastructure; Dynamics; Data Platform; Unified Intelligent Communications and Messaging; Business Productivity; and DevOps – but I was also privileged to be asked to present a keynote session on technology trends. Unfortunately, my 35-40 minutes of content had to be squeezed into 22 minutes… so this blog post summarises some of the points I wanted to get across but really didn’t have the time.

1. The cloud was the future once

For all but a very small number of organisations, not using the cloud means falling behind. Customers may argue that they can’t use cloud service because of regulatory or other reasons but that’s rarely the case – even the UK Police have recently been given the green light (the blue light?) to store information in Microsoft’s UK data centres.

Don’t get me wrong – hybrid cloud is more than tactical. It will remain part of the landscape for a while to come… that’s why Microsoft now has Azure Stack to provide a means for customers to run a true private cloud that looks and works like Azure in their own datacentres.

Thankfully, there are fewer and fewer CIOs who don’t see the cloud forming part of their landscape – even if it’s just commodity services like email in Office 365. But we need to think beyond lifting and shifting virtual machines to IaaS and running email in Office 365.

Organisations need to transform their cloud operations because that’s where the benefits are – embrace the productivity tools in Office 365 (no longer just cloud versions of Exchange/Lync/SharePoint but a full collaboration stack) and look to build new solutions around advanced workloads in Azure. Microsoft is way ahead in the PaaS space – machine learning (ML), advanced analytics, the Internet of Things (IoT) – there are so many scenarios for exploiting cloud services that simply wouldn’t be possible on-premises without massive investment.

And for those who still think they can compete with the scale that Microsoft (Amazon and Google) operate at, this video might provide some food for thought…

(and for a similar video from a security perspective…)

2. Data: the fuel of the future

I hate referring to data as “the new oil”. Oil is a finite resource. Data is anything but finite! It is a fuel though…

Data is what provides an economic advantage – there are businesses without data and those with. Data is the business currency of the future. Think about it: Facebook and Google are entirely based on data that’s freely given up by users (remember, if you’re not paying for a service – you are the service). Amazon wouldn’t be where it is without data.

So, thinking about what we do with that data: the first wave of the Internet was about connecting computers, the second was about connecting people, and the third is about connecting devices.

Despite what you might read, IoT is not about connected kettles/fridges. It’s not even really about home automation with smart lightbulbs, thermostats and door locks. It’s about gathering information from billions of sensors out there. Then, we take that data and use it to make intelligent decisions and apply them in the real world. Artificial intelligence and machine learning feed on data – they are yin and yang to each other. We use data to train algorithms, then we use the algorithms to process more data.

The Microsoft Data Platform is about analytics and data driving a new wave of insights and opening up possibilities for new ways of working.

James Watt’s 18th Century steam engine led to an industrial revolution. The intelligent cloud is today’s version – moving us to the intelligence revolution.

3. Blockchain

Bitcoin is just one implementation of something known as blockchain – in this case, as a digital currency.

But Blockchain is not just for monetary transactions – it’s more than that. It can be used for anything transactional. Blockchain is about a distributed ledger. Effectively, it allows parties to trust one another without knowing each other. The ledger is a record of every transaction, signed and tamper-proof.
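
As a toy illustration of that tamper-proofing (and nothing more – real blockchains add distributed consensus, signatures and much else), here’s a minimal hash chain in PowerShell:

  # Toy hash chain: each entry's hash covers its data plus the previous hash,
  # so tampering with any earlier entry invalidates everything that follows
  $sha = [System.Security.Cryptography.SHA256]::Create()
  $previous = ''
  foreach ($tx in 'Alice pays Bob 5', 'Bob pays Carol 2', 'Carol pays Dave 1') {
      $bytes = [System.Text.Encoding]::UTF8.GetBytes($previous + $tx)
      $previous = [System.BitConverter]::ToString($sha.ComputeHash($bytes)) -replace '-'
      '{0}...  <-  {1}' -f $previous.Substring(0, 12), $tx
  }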

The magic of blockchain is that, as the chain gets longer, so do the entropy and the encryption level – effectively, the more the chain is used, the more secure it gets. That means infinite integrity.

(Read more in Jamie Skella’s “A blockchain explanation your parents could understand”.)

Blockchain is seen as strategic by Microsoft and by the UK government. It’s early days, but expect it to appear wherever integrity and data resilience matter – databases, or anything transactional, can be signed with blockchain.

A group of livestock farmers in Arkansas is using blockchain technology so customers can tell where their dinner comes from. They are applying blockchain technology to trace products from ‘farm to fork’ aiming to provide consumers with information about the origin and quality of the meat they buy.

Blockchain is finding new applications in the enterprise and Microsoft has announced the CoCo Framework to improve performance, confidentiality and governance characteristics of enterprise blockchain networks (read more in Simon Bisson’s article for InfoWorld). There’s also Blockchain as a service (in Azure) – and you can find more about Microsoft’s plans by reading up on “Project Bletchley”.

(BTW, Bletchley is a town in Buckinghamshire that’s now absorbed into Milton Keynes. Bletchley Park was the primary location of the UK Government’s wartime code-cracking efforts that are said to have shortened WW2 by around 2 years. Not a bad name for a cryptographic technology, hey?)

4. Into the third dimension

So, we’ve had the ability to “print” in three dimensions for a while, but now 3D is going further: we’re taking physical worlds into the virtual world and augmenting them with information.

Microsoft doesn’t like the term augmented reality (because it’s being used for silly faces on photos) and they have coined the term mixed reality to describe taking untethered computing devices and creating a seamless overlap between physical and virtual worlds.

To make use of this, we need to be able to scan and render 3D images, then move them into a virtual world. 3D is built into the next Windows 10 release (the Fall Creators Update, due on 17 October 2017). This will bring Paint 3D, a 3D gallery and View 3D for our phones – so we can scan any object and import it into a virtual world. Given the adoption rates of new Windows 10 releases, that puts 3D on a market of millions of PCs.

This Christmas will see lots of consumer headsets in the market, and mixed reality will really take off after that. Microsoft is way ahead in the plumbing – all whilst we didn’t notice. They held the HoloLens product back so that it could be big in business (so that it wasn’t a solution without a problem). Now it can be applied to field-worker scenarios and to visualising things before they are built.

To give an example: I recently had a builder quote for a loft extension at home. He described how the stairs would work and sketched a room layout – but what if I could have visualised it in a headset? Then imagine picking the paint, sofas, furniture, wallpaper, etc.

The video below shows how Ford and Microsoft have worked together to use mixed reality to shorten and improve product development:

5. The new dawn of artificial intelligence

All of the legends of AI are set by sci-fi (Metropolis, 2001: A Space Odyssey, Terminator). But AI is not about killing us all! Humans vs. machines? IBM’s Deep Blue beat people at chess, Watson won at Jeopardy!, then Google took on Go. Now AI is heading into the economy and displacing jobs through the automation of business processes and economic activity. Mass unemployment?

Let’s take a more optimistic view! It’s not about sentient/thinking machines or giving human rights to machines. That stuff is interesting but we don’t know where consciousness comes from!

AI is a toolbox of high-value tools and techniques. We can apply these to problems and appreciate the fundamental shift from programming machines to machines that learn.

AI is not about programming logical steps – we can’t do that when we’re recognising images, speech, etc. Instead, our inspiration is biology – neural networks, etc. – and using maths to train complex layers of neural networks has led to deep learning.

Image recognition was “magic” a few years ago but now it’s part of everyday life. Nvidia’s shares are growing massively due to GPU requirements for deep learning and autonomous vehicles. And Microsoft is democratising AI (in its own applications – with an intelligent cloud, intelligent agents and bots).

NVIDIA Corporation stock price growth fuelled by demand for GPUs

So, about those bots…

A bot is a web app with a conversational user interface. We use them because natural language processing (NLP) and AI are here today – and because messaging apps rule the world. With bots, we can use human language as a new user interface; bots are the new apps – our digital assistants.

We can employ bots in several scenarios today – including customer service and productivity – and this video is just one example, with Microsoft Cortana built into a consumer product:

The device is similar to Amazon’s popular Echo smart speaker, and a skills kit is used to teach Cortana about an app: ask “skillname to do something”. The beauty of Cortana is that it’s cross-platform, so the skill can show up wherever Cortana does. More recently, Amazon and Microsoft have announced Cortana-Alexa integration (meanwhile, Siri continues to frustrate…).

AI is about augmentation, not replacement. It’s true that bots may replace humans for many jobs – but new jobs will emerge. And it’s already here. It’s mainstream. We use recommendations for playlists, music, etc. We’re recognising people, emotions, etc. in images. We already use AI every day…

6. From silicon to cells

Every cell has a “programme” – DNA. And researchers have found that they can write code in DNA and control proteins/chemical processes. They can compile code to DNA and execute, creating molecular circuits. Literally programming biology.

This is absolutely amazing. Back when I was an MVP, I got the chance to see Microsoft Research talk about this in Cambridge. It blew my mind. That was in 2010. Now it’s getting closer to reality and Microsoft and the University of Washington have successfully used DNA for storage:

The benefits of DNA are that it’s very dense and it lasts for thousands of years so can always be read. And we’re just storing 0s and 1s – that’s much simpler than what DNA stores in nature.

7. Quantum computing

With massive data storage… the next step is faster computing – and that’s where quantum computing comes in.

I’m a geek and this one is tough to understand… so here’s another video:

Quantum computing is starting to gain momentum. Dominated by maths (quantum mechanics), it requires thinking in equations, not translating into physical things in your head. It has concepts like superposition (multiple states at the same time) and entanglement. Instead of gates being turned on/off it’s about controlling particles with nanotechnology.

A classical bit is simply on or off, and a classical computer works through its states one at a time; one quantum bit (qubit) has multiple states at the same time. That can be used to solve difficult problems (the RSA 2048 challenge problem would take a billion years on a supercomputer but just 100 seconds on a 250-qubit quantum computer) and can be applied to encryption and security, health and pharma, energy, biotech, environment, materials and engineering, AI and ML.

There’s a race for quantum computing hardware taking place and China sees this as a massively strategic direction. Meanwhile, the UK is already an academic centre of excellence – now looking to bring quantum computing to market. We’ll have usable devices in 2-3 years (where “usable” means that they won’t be cracking encryption, but will have initial applications in chemistry and biology).

Microsoft Research is leading a consortium called Station Q and, later this year, Microsoft will release a new quantum computing programming language, along with a quantum computing simulator. With these, developers will be able to both develop and debug quantum programs implementing quantum algorithms.

Predicting the future?

Amazon, Google and Microsoft each invest over $12bn p.a. on R&D. As demonstrated in the video above, their datacentres are not something that many organisations can afford to build but they will drive down the cost of computing. That drives down the cost for the rest of us to rent cloud services, which means more data, more AI – and the cycle continues.

I’ve shared 7 “technology bets” (and there are others that I haven’t covered, like the use of graphene) – my list is very much influenced by my work with Microsoft technologies and services. We can’t always predict the future, but all of these are real… the only bet is how big they are. Some are mainstream, some are up and coming – and some will literally change the world.

Credit: Thanks to Rob Fraser at Microsoft for the initial inspiration – and to Alun Rogers (@AlunRogers) for helping place some of these themes into context.

Short takes: iPhone broadcasting wrong number; fractions in HTML; Word comment authors

Another collection of things I found on the Internet that might or might not be useful for other people.

SMS and phone calls using the wrong number on an iPhone

In common with most people who “work in IT”, I get called upon for family IT support. In truth, I get called upon a lot less since my trainee geek (aged 12¾) deals with most of that for me! Last weekend though, he was stumped by the problems my Mother-in-law was having with her iPhone.

She’d bought a new phone and changed providers, then ported her number to the new provider. Although calls were reaching her with the correct number on her SIM, SMS and outbound calls were using the temporary number allocated prior to porting her “real” number.

I found the solution via the Giffgaff forums – where essie112mm describes a combination of steps, including turning iMessage and FaceTime off and on again. The crucial part for me was Settings, Phone, My Number – where I needed to edit the number to the one that we wanted to use.

Writing fractions in HTML

In the previous section, I wanted to write ¾ using the correct HTML. As it happens, WordPress has taken my HTML entity (&frac34;) and replaced it with a raw ¾ symbol, but I found this article by Charles Iliya Krempeaux (@Riever) useful reading for representing less common fractions in HTML.

Microsoft Word removes the author name from comments

I write a lot of documents in my professional life. I review even more for other people – and I use the reviewing tools in Microsoft Word extensively. One “feature” that was frustrating me though was that, every time I saved a file, my comments changed from “Mark Wilson” to “Author”.

My colleague Simon Bilton (@sabrisual) pointed out the fix to me – buried in Word’s options under Trust Center, Trust Center Settings, Privacy Options, Remove personal information from file properties on save (thanks to Stefan Blom in this TechNet forum post).

Remove personal information from file properties on save

It seems that our admins have set this by Group Policy now so I won’t have the problem any more but it’s a useful one to be aware of…