The Windows Network Connectivity Status Indicator (NCSI)

Last night, whilst working in the Premier Inn close to the office, I noticed the browser going to an interesting URI after I connected to the hotel Wi-Fi.  That URI was http://www.msftconnecttest.com/redirect and a little more research tells me it’s used by Windows 10 to detect whether the PC has an Internet connection or not.

The feature is actually the Network Connectivity Status Indicator (NCSI) and, more accurately, the URIs used are:
  • http://www.msftncsi.com/ncsi.txt (used by Windows Vista, 7, 8 and 8.1; returns the text “Microsoft NCSI”)
  • http://www.msftconnecttest.com/connecttest.txt (used by Windows 10; returns the text “Microsoft Connect Test”)

The URI I saw actually redirects to MSN whereas the ones above return static text to indicate a successful connection.
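The check itself is simple to reproduce. Here’s a minimal Python sketch of the same idea (the probe URI and expected response text are the documented Windows 10 values; a captive portal that intercepts the request would return something else, so a mismatch suggests no real Internet connection):

```python
import urllib.request

PROBE_URL = "http://www.msftconnecttest.com/connecttest.txt"
EXPECTED_BODY = "Microsoft Connect Test"

def default_fetch(url):
    # Plain HTTP GET; a captive portal would intercept this and return
    # its own page (or a redirect) instead of the expected static text.
    with urllib.request.urlopen(url, timeout=5) as response:
        return response.read().decode()

def is_connected(fetch=default_fetch):
    """True if the probe URI returns the expected static text."""
    try:
        return fetch(PROBE_URL) == EXPECTED_BODY
    except OSError:
        return False
```

The `fetch` parameter is just there to make the logic testable without a network connection; Windows itself applies the same principle (known URI, known static response) to decide which icon to show.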

For those who want to know more, there’s a detailed technical reference on TechNet (which dates back to Windows Vista) and an extensive blog post on the Network Connectivity Status Indicator.

Fibre to the community; business hubs; and killing the commute

Our country desperately needs investment in infrastructure yet we can’t afford it, either politically, financially, or environmentally. At the same time, driven by rising house prices and other considerations, people are living ever further from their workplace, with consequential impacts on family life and local communities. So what can we do to redress the balance?

In a word: localisation.

Or, in a few more words: stay at home; cut down travel; and rebuild communities.

For years now, we’ve been hearing (usually from companies selling tools to enable remote working) that teleworking is the future. It is, or at least working remotely for part of the time can be (people still need human contact) but we’re constrained by our communications infrastructure.

Superfast broadband services are typically only available in metropolitan areas, with fibre to the home (FTTH) or even fibre to the cabinet (FTTC) a distant dream for rural communities, even those that are a relatively short distance from major cities.

So why not create business hubs in our small towns and villages – office space for people to work, without having to travel for miles, taking up space on a train or a road, and polluting our environment?

Local councils (for example) can provide infrastructure – such as desks and Internet access (a connection to one central point may be more cost effective than wiring up every home) – and employees from a variety of companies have the benefit of a space to network, to share ideas, to work, without the need to travel long distances or the isolation and poor communications links (or family interruptions) encountered at home.

The location might be a library, a community centre, a coffee shop, the village pub (which desperately needs to diversify in order to survive) – all that’s really needed is a decent Internet connection, some desks, maybe meeting rooms and basic facilities.

Meanwhile, instead of spending our money in the coffee shops of London (or wherever), local businesses stand to benefit from increased trade (fewer commuters means more people in the town). Local Post Offices may become economically viable again, shops get new trade and new businesses spring up to serve the community that was previously commuting to the city.

Cross-pollination in the workplace (conversations at the hub) may lead to new relationships, partnerships with other companies and generally improved collaboration.

Families benefit too – with parents working closer to home, there’s time to see their children (instead of saying goodnight over the phone on a long commute after another late night in the office); and, generally, there’s an improvement in social well-being and community involvement.

The benefits to the community and to society at large are potentially huge, but it needs someone (which is why I suggest local government, although central government support may be required) to kick-start the initiative.

If foundations like Mozilla can create Mozilla Spaces in our cities, why can’t we create spaces in our small towns and villages? Spaces to network. Spaces to work. Spaces to collaborate. Spaces to invigorate. To invigorate individuals and to rebuild our communities.

It all seems so logical, so what have I missed?

Is technology at the heart of business, or is it simply an enabler?

I saw a video from Cisco this morning, and found it quite inspirational. The fact it’s from Cisco isn’t really relevant (indeed, if I showed it without the last few seconds you wouldn’t know) but it’s a great example of how IT is shaping the world that we live in – or, more precisely, how the world is shaping the direction that IT is taking:

In case you can’t see the video above, here are some of the key statistics it contains:

  • Humans created more data in 2009 alone than in all previous years combined.
  • Over the last 15 years, network speeds have increased 18 million times.
  • Information is moving to the cloud; 8/10 IT Managers plan to use cloud computing within the next 3 years.
  • By 2015, tools and automation will eliminate 25% of IT labour hours.
  • We’re using multiple devices: by 2015 there will be nearly one mobile-connected device for every person on earth.
  • 2/3 of employees believe they should be able to access information using company-issued devices at any time, at any location.
  • 60% believe they don’t need to be in an office to be productive.
  • This is creating entirely new forms of collaboration.
  • “The real impact of the information revolution isn’t about information management but on relationships; the ability to allow not dozens, or hundreds, but thousands of people to meaningfully interact” [Dr Michael Schrage, MIT].
  • By 2015 companies will generate 50% of web sales via their social presence and mobile applications.
  • Social business software will become a $5bn business by 2013.
  • Who sits at the centre of all this? Who is managing these exponential shifts? The CIO.

Some impressive numbers here – and we might expect to see many of these figures cited by a company selling social collaboration software and networking equipment but they are a good indication of the way things are heading.  I would place more emphasis on empowered employees and customers redefining IT provisioning (BYO, for example); on everything as a service (XaaS) changing the IT delivery model; on the need for a new architecture to manage the “app Internet”; and on big data – which will be a key theme for the next few years.

Whatever the technologies underpinning the solution – the overall direction is for IT to provide business services that add value and enhance business agility rather than simply being part of “the cost of doing business”.

I think Cisco’s video does a rather good job of illustrating the change that is occurring but the real benefits come when we are able to use technology as an enabler for business services that create new opportunities, rather than responding to existing pressures.

I’d love to hear what our customers, partners and competitors think – is technology at the heart of the digital revolution, or is it simply an enabler for new business services?

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog and was written with assistance from Ian Mitchell.]

As Amazon fuels the fire, where are the networks to deliver our content?

Last week saw Amazon’s announcement of the Kindle Fire – a new tablet computer which marks the bookstore-turned-online-warehouse-turned-cloud-infrastructure-provider’s latest foray into the world of content provision and consumption. It’s not going to be available in the UK and Ireland for a while (some of the supporting services don’t yet exist here) but many technology commentators have drawn comparisons with the Apple iPad – the current tablet market leader (by a country mile). And, whilst there are similarities (both are tablets, both rely on content), they really do compete in different sectors.

Even as an iPad user, I can see the attractiveness of the Kindle Fire: If Amazon is able to execute its strategy (and all signs suggest they are), then they can segment the tablet market leaving Apple at the premium end and shifting huge volumes of low price devices to non-geeks. Note how Amazon has maintained a range of lower-price eInk devices too? That’s all about preserving and growing the existing user base – people who like reading, like the convenience of eBooks but who are not driven by technology.

At this point you’re probably starting to wonder why I’m writing this on a blog from a provider of enterprise IT systems and services. How is this really relevant to the enterprise? Actually, I think it is really relevant. I’ve written about the consumerisation of enterprise IT over and over (on this blog and elsewhere) but, all of a sudden, we’re not talking about a £650 iPad purchase (a major commitment for most people) but a sub-£200 tablet (assuming the Fire makes it to the UK). And that could well mark a tipping point where Android, previously largely confined to smartphones, is used to access other enterprise services.

I can think of at least one former CIO for whom that is a concern: the variety of Android platforms and the potential for malware is a significant security risk. But we can’t stick our heads in the sand, or ban Android devices – we need to find a way forward that is truly device and operating system agnostic (and I think that’s best saved as a topic for another blog post).

What the Apple iPad, Amazon Kindle Fire, and plethora of Google Android-powered devices have in common is their reliance on content. Apple and Amazon both have a content-driven strategy (Google is working on one) but how does that content reach us? Over the Internet.

And therein lies a problem… outside major cities, broadband provision is still best described as patchy. There are efforts to improve this (including, but not exclusively, those in which Fujitsu is taking part) but 3G and 4G mobile networks are also part of the picture.

UK businesses and consumers won’t be able to benefit fully from new cloud-based tools until the UK has a nationwide, reliable, high-speed mobile data network – and a new paper, published today by the Open Digital Policy Organisation, suggests that the UK is at least two years behind major countries in its 4G rollout plans. Aside from the potential cost to businesses of £732m a year, we’re all consumers, downloading content to our Kindles and iPads and watching TV catch-up services like BBC iPlayer and 4oD, as well as video content from YouTube, Vimeo, etc. Add to that the networks of sensors that will drive the future Internet – and then consider that many businesses are starting to question the need for expensive wide area network connections when inexpensive public options are available… I think you get my point…

We live in a content-driven society – more and more so by the day… sadly it seems that the “information superhighway” is suffering from congestion and that may well stifle our progress.

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog.]

Could this be the ultimate unified messaging client?

Much has been made of the slow death of email and the rise of enterprise social software so I was interested to read a recent paper in which Benno Zollner, Fujitsu’s global CIO, commented on the need to balance email usage with other communications mechanisms.

In the paper, Benno posits a view that we’re entering not just a post-PC era but a post-email era where we use a plethora of devices and protocols. This is driven by a convergence of voice and data (not just on smartphones, but on the “desktop” too – Microsoft’s acquisition of Skype shows how seriously they are taking this) but also the enterprise social software that’s extending our traditional collaboration platforms to offer what was once referred to as a “web 2.0” experience, only inside the corporate network.  Actually, I’m slightly uncomfortable with that last sentence – not just because I find the terms “web 2.0” and “enterprise 2.0” cringeworthy but also because the concept of the corporate network is becoming less and less relevant as we transact more and more business in the cloud, using the mobile Internet, Wi-Fi hotspots and home broadband. Even so, it illustrates my point, that social networking is very much a part of the modern business environment, alongside traditional communications mechanisms including the telephone and email.

A few months ago, I wrote about the need to prioritise communications but I can see us taking a step further in the not-too-distant future.  Why do I need an email client (Microsoft Outlook), multiple instant messaging/presence/voice over IP (VoIP) clients (Microsoft Office Communicator/Lync/Skype), a Twitter client (TweetDeck), enterprise social software (Microsoft SharePoint/NewsGator Social Sites/Salesforce Chatter) and a combination of mobile and desk-based phones (don’t forget SMS on that mobile too!)? Plenty has been made of the ability to use VoIP to ring several phones simultaneously, to call the phone that best matches my presence or to divert the call to a unified messaging inbox – but why limit this to telephony?

I can envisage a time when we each have a consolidated communications client – one that recognises who we’re trying to communicate with and picks the appropriate channel to contact them.  If I’m sending a message to my wife and she’s at her desk, then email is fine but if I can tell she’s on the school run then why not route it to her mobile phone by SMS?  Similarly, advanced presence information can be used to route communications over a variety of channels to favour that which each of my contacts tends to use in a given scenario.  Perhaps the software knows that a contact is not available via instant messaging but is signed in to Twitter and can be contacted with a direct message.  Maybe I can receive a précis of an urgent report on my smartphone but the full version is available at my desk. The possibilities are vast but the main point is that the sender shouldn’t need to pick and choose the medium; instead, software can take into account the preferences of the recipient and route the communication accordingly (taking into account that some transport mechanisms may not guarantee delivery). Could this be the ultimate unified messaging client?
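As a purely illustrative sketch (the channel names and presence states below are hypothetical – no such API exists), the heart of such a client might boil down to a per-recipient preference table plus a fallback for transports that can’t guarantee delivery:

```python
# Toy model of presence-aware message routing. The presence states and
# channel names are invented for illustration, not taken from any real API.
PREFERRED_CHANNEL = {
    "at_desk": "email",
    "on_the_move": "sms",
    "signed_in_to_twitter": "twitter_dm",
    "offline": "voicemail",
}

# Transports that, in this toy model, don't guarantee delivery.
BEST_EFFORT = {"sms", "twitter_dm"}

def pick_channel(presence, must_guarantee_delivery=False):
    """Choose a channel from the recipient's presence; default to email."""
    channel = PREFERRED_CHANNEL.get(presence, "email")
    if must_guarantee_delivery and channel in BEST_EFFORT:
        # Fall back to a store-and-forward medium when delivery matters.
        channel = "email"
    return channel
```

The point of the sketch is that the *recipient’s* context drives the choice – the sender just sends, as described above.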

Email isn’t dead – but soon we won’t care whether our messages are sent via SMTP, SIP, SMS or semaphore – just as long as they arrive in a manner that ensures an efficient communication process and lets us focus on the task at hand, rather than spending the day working our way through our inboxes.

This post originally appeared on the Fujitsu UK and Ireland CTO Blog and is based on a concept proposed by Ian Mitchell.

Does Microsoft Kinect herald the start of a sensor revolution?

Last week, Microsoft officially announced a software development kit for the Kinect sensor. Whilst there’s no commercial licensing model yet, it sounds like the start of a journey towards official support of gesture-based interaction with Windows PCs.

There’s little doubt that Kinect, Microsoft’s natural user interface for the Xbox game console, has been phenomenally successful. It’s even been recognised as the fastest-selling consumer device on record by Guinness World Records. I even bought one for my family (and I’m not really a gamer) – but before we go predicting the potential business uses for this technology, it’s probably worth stopping and taking stock. Isn’t this really just another technology fad?

Kinect is not the first new user interface for a computer – I’ve written so much about touch-screen interaction recently that even I’m bored of hearing about tablets! We can also interact with our computers using speech if we choose to – and the keyboard and mouse are still hanging on in there too (in a variety of forms). All of these technologies sound great, but they have to be applied at the right time: my iPad’s touch screen is great for flicking through photos, but an external keyboard is better for composing text; Kinect is a fantastic way to interact with games but, frankly, it’s pretty poor as a navigational tool.

What we’re really seeing here is a proliferation of sensors. Keyboard, mouse, trackpad, microphone, camera(s), GPS, compass, heart monitor – the list goes on. Kinect is really just an advanced, and very consumable, sensor.

Interestingly, sensors typically start out as separate peripherals and, over time, they become embedded into devices. The mouse and keyboard morphed into a trackpad and a (smaller) keyboard. Microphones and speakers were once external but are now built into our personal computers. Our smartphones contain a wealth of sensors including GPS, cameras and more. Will we see Kinect built into PCs? Quite possibly – after all, it’s really a couple of depth sensors and a webcam!

What’s really exciting is not Kinect per se but what it represents: a sensor revolution. Much has been written about the Internet of Things but imagine a dynamic network of sensors where the nodes can automatically handle re-routing of messages based on an awareness of the other nodes. Such networks could be quickly and easily deployed (perhaps even dropped from a plane) and would be highly resilient to accidental or deliberate damage because of their “self-healing” properties. Another example of sensor use could be in an agricultural scenario with sensors automatically monitoring the state of the soil, moisture, etc. and applying nutrients or water. We’re used to hearing about RFID tags in retail and logistics but those really are just the tip of the iceberg.
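To illustrate the “self-healing” idea (a toy sketch, not a real sensor protocol): if each node knows its neighbours, routing around a failed node is just a fresh breadth-first search over the surviving links:

```python
from collections import deque

def route(links, src, dst, failed=frozenset()):
    """Breadth-first search for a path from src to dst, skipping failed nodes."""
    if src in failed or dst in failed:
        return None
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for neighbour in links.get(path[-1], ()):
            if neighbour not in seen and neighbour not in failed:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None  # dst unreachable with the surviving nodes

# A four-node mesh: knocking out node "B" still leaves a route via "C".
mesh = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
```

Real sensor networks do this with distributed routing protocols rather than a global search, but the principle is the same: no single node is essential, so the network survives accidental or deliberate damage.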

Exciting times…

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog and was jointly authored with Ian Mitchell.]

IPv6 switchover – what should CIOs do (should they even care)?

It’s not often that something as mundane as a communications protocol hits the news but last week’s exhaustion of Internet Protocol (IP) addresses has been widely covered by the UK and Irish media. Some are likening the “IPocalypse” to the Year 2000 bug. Others say it’s a non-issue. So what do CIOs need to consider in order to avoid being presented with an unexpected bill for urgent network upgrades?

Focus have produced an infographic which explains the need for an IPv6 migration but, to summarise the main points:

  • The existing Internet address scheme is based on 4 billion internet protocol (IPv4) addresses, allocated in blocks to Regional Internet Registries (RIR) and eventually to individual Internet Service Providers (ISP).
  • A new, and largely incompatible version of the Internet Protocol (IPv6) allows for massive growth in the number of connected devices, with 340 undecillion (2^128) addresses.
  • All of the IPv4 addresses have now been allocated to the RIRs and, at some point in the coming months, the supply of IPv4 addresses will dry up.
  • Even though there are huge numbers of unused addresses, they have already been allocated to companies and academic institutions. Some have returned excess addresses voluntarily; others have not.
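The scale of the two address spaces is easy to verify with Python’s standard ipaddress module – a quick sketch:

```python
import ipaddress

# Total size of each address space, computed from the all-encompassing networks.
ipv4_total = ipaddress.ip_network("0.0.0.0/0").num_addresses   # 2**32 (~4.3 billion)
ipv6_total = ipaddress.ip_network("::/0").num_addresses        # 2**128 (~3.4 x 10**38)

# IPv6 offers 2**96 times as many addresses as IPv4.
print(ipv6_total // ipv4_total)
```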

The important thing to remember is that the non-availability of IPv4 addresses doesn’t mean that the Internet will suddenly stop working. Essentially, new infrastructure will be built on IPv6 and we’re just entering an extended period of transition. Indeed, in Asia (especially Japan and China), IPv6 adoption is much more mature than in Europe and America.

It’s also worth noting that there are a range of technologies that mitigate the requirement for a full migration to IPv6 including Network Address Translation (NAT) and tunnels that allow hybrid networks to be created over the same physical infrastructure. Indeed, modern operating systems enable IPv6 by default so many organisations are already running IPv6 on their networks – but, whilst there are a number of security, performance and scalability improvements in IPv6, there can be negative impacts on security too if implemented badly.
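As a quick way to see whether your own host’s stack has IPv6 enabled (a local check only – it asks the operating system for an IPv6 socket without sending any traffic):

```python
import socket

def host_supports_ipv6():
    """True if this host's OS can create an IPv6 socket."""
    if not socket.has_ipv6:  # Python built without IPv6 support
        return False
    try:
        # Creating (and immediately closing) an AF_INET6 socket sends
        # nothing on the wire; it only exercises the local stack.
        with socket.socket(socket.AF_INET6, socket.SOCK_DGRAM):
            return True
    except OSError:
        return False
```

On most modern operating systems this returns True out of the box, which is exactly why badly planned deployments can have security implications – IPv6 may already be live on the network whether or not anyone is managing it.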

Network providers are actively deploying IPv6 (as are some large organisations) but it’s likely to be another couple of years before many enterprises in the UK and Ireland consider widespread deployment. Ironically, the network side is relatively straightforward and the challenge is with the hardware appliances and applications. The implications of a 100% replacement are massive; however, a hybrid approach is workable and will be the way IPv6 is deployed in the enterprise for many years to come.

So, should CIOs worry about IPv6? Well, once the last IPv4 addresses are allocated, any newly formed organisation, or those that require additional address space, will only be accessible over the new protocol. Even so, it will be a gradual transition and the key to success is planning, even if implementation is deferred for a while:

“The move to IPv6 will take a long time – ten years plus, with hybrid networks being the reality in the interim. We are already seeing large scale adoption across the globe, particularly across Asia. Telecommunication providers have deployed backbones and this adoption is growing; enterprise customers will follow. Enterprises need to carefully consider migrations: not all devices in the network can support IPv6 today; it is not uncommon for developers to have ‘hard-coded’ IPv4 addresses and fields in applications; and there are also security implications with how hybrid networks are deployed, with the potential to bypass security and firewall policies if not deployed correctly.” [John Keegan, Chief Technology Officer, Fujitsu UK and Ireland Network Solutions Division]

As for whether IPv6 is the new Y2K? I guess it is in the sense that it’s something that’s generating a lot of noise and is likely to result in a lot of work for IT departments but, ultimately, it’s unlikely to result in a total infrastructure collapse.

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog and was written with assistance from John Keegan.]