Seven technology trends to watch 2017-2020

Just over a week ago, risual held its bi-annual summit at the risual HQ in Stafford – the whole company back in the office for a day of learning with a new format: a mini-conference called risual:NXT.

I was given the task of running the technical track – with 6 speakers presenting on a variety of topics covering all of our technical practices: Cloud Infrastructure; Dynamics; Data Platform; Unified Intelligent Communications and Messaging; Business Productivity; and DevOps – but I was also privileged to be asked to present a keynote session on technology trends. Unfortunately, my 35-40 minutes of content had to be squeezed into 22 minutes… so this blog post summarises some of the points I wanted to get across but didn’t have time to cover.

1. The cloud was the future once

For all but a very small number of organisations, not using the cloud means falling behind. Customers may argue that they can’t use cloud services because of regulatory or other reasons but that’s rarely the case – even the UK Police have recently been given the green light (the blue light?) to store information in Microsoft’s UK data centres.

Don’t get me wrong – hybrid cloud is more than tactical. It will remain part of the landscape for a while to come… that’s why Microsoft now has Azure Stack to provide a means for customers to run a true private cloud that looks and works like Azure in their own datacentres.

Thankfully, there are fewer and fewer CIOs who don’t see the cloud forming part of their landscape – even if it’s just commodity services like email in Office 365. But we need to think beyond lifting and shifting virtual machines to IaaS and running email in Office 365.

Organisations need to transform their cloud operations because that’s where the benefits are – embrace the productivity tools in Office 365 (no longer just cloud versions of Exchange/Lync/SharePoint but a full collaboration stack) and look to build new solutions around advanced workloads in Azure. Microsoft is way ahead in the PaaS space – machine learning (ML), advanced analytics, the Internet of Things (IoT) – there are so many scenarios for exploiting cloud services that simply wouldn’t be possible on-premises without massive investment.

And for those who still think they can compete with the scale at which Microsoft (and Amazon, and Google) operate, this video might provide some food for thought…

(and for a similar video from a security perspective…)

2. Data: the fuel of the future

I hate referring to data as “the new oil”. Oil is a finite resource. Data is anything but finite! It is a fuel though…

Data is what provides an economic advantage – there are businesses without data and those with. Data is the business currency of the future. Think about it: Facebook and Google are entirely based on data that’s freely given up by users (remember, if you’re not paying for a service – you are the service). Amazon wouldn’t be where it is without data.

So, thinking about what we do with that data: the first wave of the Internet was about connecting computers, the second was about connecting people, and the third is about connecting devices.

Despite what you might read, IoT is not about connected kettles/fridges. It’s not even really about home automation with smart lightbulbs, thermostats and door locks. It’s about gathering information from billions of sensors out there. Then, we take that data and use it to make intelligent decisions and apply them in the real world. Artificial intelligence and machine learning feed on data – they are yin and yang to each other. We use data to train algorithms, then we use the algorithms to process more data.

The Microsoft Data Platform is about analytics and data driving a new wave of insights and opening up possibilities for new ways of working.

James Watt’s 18th Century steam engine led to an industrial revolution. The intelligent cloud is today’s version – moving us to the intelligence revolution.

3. Blockchain

Bitcoin is just one implementation of something known as the Blockchain – in this case, as a digital currency.

But Blockchain is not just for monetary transactions – it’s more than that. It can be used for anything transactional. Blockchain is about a distributed ledger. Effectively, it allows parties to trust one another without knowing each other. The ledger is a record of every transaction, signed and tamper-proof.

The magic of Blockchain is that tampering with any block invalidates every block that follows it – so the longer the chain gets (and the more widely it is replicated), the harder it becomes to rewrite history. That’s not quite “infinite integrity”, but it is tamper-evidence by design.
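To illustrate that tamper-evidence idea, here’s a toy sketch of my own – not how Bitcoin actually works (real blockchains add digital signatures, consensus and proof-of-work, none of which appear here), just the core trick of each block committing to the hash of the one before it:

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents (which include the previous block's hash)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    """Append a block that commits to the hash of the block before it."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})
    return chain

def verify(chain):
    """Valid only if every block still matches its successor's prev_hash."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
assert verify(chain)

# Tampering with an early block breaks every later link in the chain
chain[0]["data"] = "Alice pays Bob 500"
assert not verify(chain)
```

Changing one historic record invalidates the whole tail of the chain – which is why a widely replicated ledger is so hard to rewrite quietly.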

(Read more in Jamie Skella’s “A blockchain explanation your parents could understand”.)

Blockchain is seen as strategic by Microsoft and by the UK government. It’s early days, but expect to see it used wherever people care about integrity and data resilience – databases, or anything transactional, can be signed with a blockchain.

A group of livestock farmers in Arkansas is using blockchain technology so customers can tell where their dinner comes from. They are applying blockchain technology to trace products from ‘farm to fork’ aiming to provide consumers with information about the origin and quality of the meat they buy.

Blockchain is finding new applications in the enterprise and Microsoft has announced the CoCo Framework to improve performance, confidentiality and governance characteristics of enterprise blockchain networks (read more in Simon Bisson’s article for InfoWorld). There’s also Blockchain as a service (in Azure) – and you can find more about Microsoft’s plans by reading up on “Project Bletchley”.

(BTW, Bletchley is a town in Buckinghamshire that’s now absorbed into Milton Keynes. Bletchley Park was the primary location of the UK Government’s wartime code-cracking efforts that are said to have shortened WW2 by around 2 years. Not a bad name for a cryptographic technology, hey?)

4. Into the third dimension

So we’ve had the ability to “print” in three dimensions for a while, but now 3D is going further: we’re taking physical worlds into the virtual world and augmenting them with information.

Microsoft doesn’t like the term augmented reality (because it’s being used for silly faces on photos) and they have coined the term mixed reality to describe taking untethered computing devices and creating a seamless overlap between physical and virtual worlds.

To make use of this, we need to be able to scan and render 3D images, then move them into a virtual world. 3D is built into the next Windows 10 release (the Fall Creators Update, due on 17 October 2017). This will bring Paint 3D, a 3D Gallery and View 3D for our phones – so we can scan any object and import it into a virtual world. Given the adoption rates of new Windows 10 releases, that puts 3D on a market of millions of PCs.

This Christmas will see lots of consumer headsets come to market, and mixed reality will really take off after that. Microsoft is way ahead in the plumbing – all built whilst we weren’t noticing. They held their HoloLens product back so that it would be big in business (so that it wasn’t a solution without a problem). Now it can be applied to field-worker scenarios and to visualising things before they are built.

To give an example: recently, I had a builder quote for a loft extension at home. He described how the stairs would work and sketched a room layout – but what if I could have visualised it in a headset? Then imagine picking the paint, sofas, furniture, wallpaper, etc.

The video below shows how Ford and Microsoft have worked together to use mixed reality to shorten and improve product development:

5. The new dawn of artificial intelligence

All of the legends of AI are set by sci-fi (Metropolis, 2001: A Space Odyssey, Terminator). But AI is not about killing us all! The popular narrative is humans vs. machines: IBM’s Deep Blue beating people at chess, Watson winning at Jeopardy!, then Google’s DeepMind taking on Go. AI heading into the economy and displacing jobs, automating business processes and economic activity – mass unemployment?

Let’s take a more optimistic view! It’s not about sentient/thinking machines or giving human rights to machines. That stuff is interesting but we don’t know where consciousness comes from!

AI is a toolbox of high-value tools and techniques. We can apply these to problems and appreciate the fundamental shift from programming machines to machines that learn.

AI is not about programming logical steps – we can’t write explicit rules for tasks like recognising images or speech. Instead, our inspiration is biology: neural networks – and using maths to train many layers of them is what led to deep learning.
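To make that shift from “programming machines” to “machines that learn” concrete, here’s a minimal sketch of my own (not any particular Microsoft toolkit): a tiny two-layer neural network that learns the XOR function purely from examples – nothing in the code says what XOR is, the weights are learned by gradient descent:

```python
import numpy as np

# Four training examples of XOR: inputs and the target outputs
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

W1 = rng.normal(0.0, 1.0, (2, 8))   # input -> hidden weights
W2 = rng.normal(0.0, 1.0, (8, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    hidden = sigmoid(X @ W1)        # forward pass through layer 1
    out = sigmoid(hidden @ W2)      # forward pass through layer 2
    # Backpropagate the squared error through both layers
    grad_out = (out - y) * out * (1 - out)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hidden

predictions = sigmoid(sigmoid(X @ W1) @ W2)
```

After training, `predictions` sits close to 0, 1, 1, 0 – the network has learned the mapping rather than being programmed with it. Deep learning is this same idea with many more layers, far more data and far more compute.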

Image recognition was “magic” a few years ago but now it’s part of everyday life. Nvidia’s shares are growing massively due to GPU requirements for deep learning and autonomous vehicles. And Microsoft is democratising AI (in its own applications – with an intelligent cloud, intelligent agents and bots).

NVIDIA Corporation stock price growth fuelled by demand for GPUs

So, about those bots…

A bot is a web app with a conversational user interface. We use them because natural language processing (NLP) and AI are here today – and because messaging apps rule the world. With bots, we can use human language as a new user interface; bots are the new apps – our digital assistants.

We can employ bots in several scenarios today – including customer service and productivity – and this video is just one example, with Microsoft Cortana built into a consumer product:

The device is similar to Amazon’s popular Echo smart speaker, and a skills kit is used to teach Cortana about an app – you simply ask “[skill name], do something”. The beauty of Cortana is that it’s cross-platform, so the skill can show up wherever Cortana does. More recently, Amazon and Microsoft have announced Cortana-Alexa integration (meanwhile, Siri continues to frustrate…)

AI is about augmentation, not replacement. It’s true that bots may replace humans for many jobs – but new jobs will emerge. And it’s already here. It’s mainstream. We use recommendations for playlists, music, etc. We’re recognising people, emotions, etc. in images. We already use AI every day…

6. From silicon to cells

Every cell has a “programme” – DNA. And researchers have found that they can write code in DNA and control proteins/chemical processes. They can compile code to DNA and execute, creating molecular circuits. Literally programming biology.

This is absolutely amazing. Back when I was an MVP, I got the chance to see Microsoft Research talk about this in Cambridge. It blew my mind. That was in 2010. Now it’s getting closer to reality and Microsoft and the University of Washington have successfully used DNA for storage:

The benefits of DNA are that it’s very dense and that it lasts for thousands of years, so it can always be read. And we’re just storing 0s and 1s – that’s much simpler than what DNA stores in nature.
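As a simple illustration of that density, binary data can be mapped onto the four DNA bases at two bits per nucleotide. This is a naive, hypothetical encoding of my own for illustration only – the real Microsoft/University of Washington work adds addressing and error correction and avoids troublesome base sequences:

```python
# Two bits per nucleotide: 00->A, 01->C, 10->G, 11->T (an arbitrary mapping)
TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
FROM_BASE = {v: k for k, v in TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a strand of A/C/G/T characters."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a strand."""
    bits = "".join(FROM_BASE[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hi")
print(strand)           # CGGACGGC
print(decode(strand))   # b'hi'
```

Each byte becomes just four bases – and in physical DNA those bases occupy a fraction of a nanometre each, which is where the extraordinary storage density comes from.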

7. Quantum computing

With massive data storage… the next step is faster computing – that’s where Quantum computing comes in.

I’m a geek and this one is tough to understand… so here’s another video:

Quantum computing is starting to gain momentum. Dominated by maths (quantum mechanics), it requires thinking in equations, not translating into physical things in your head. It has concepts like superposition (multiple states at the same time) and entanglement. Instead of gates being turned on/off it’s about controlling particles with nanotechnology.

A classical bit is either on or off at any instant, but one quantum bit (a qubit) can hold multiple states at the same time – and n qubits can represent 2^n states simultaneously. That makes quantum computers suited to certain very hard problems (the RSA 2048 challenge problem would take a billion years on a supercomputer but just 100 seconds on a 250-bit quantum computer). This can be applied to encryption and security, health and pharma, energy, biotech, environment, materials and engineering, AI and ML.
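To see where that 2^n comes from (and why simulating qubits classically gets hard so fast), here’s a toy state-vector sketch of my own – the same idea, in miniature, as a quantum computing simulator. Putting each qubit through a Hadamard gate leaves n qubits in an equal superposition described by 2^n amplitudes:

```python
import numpy as np

# The Hadamard gate takes |0> to an equal superposition of |0> and |1>
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

def uniform_superposition(n):
    """State of n qubits after a Hadamard on each, starting from |00...0>."""
    state = np.array([1.0])
    ket0 = np.array([1.0, 0.0])
    for _ in range(n):
        # Each extra qubit doubles the number of amplitudes to track
        state = np.kron(state, H @ ket0)
    return state

state = uniform_superposition(10)
print(len(state))                    # 1024 amplitudes from just 10 qubits
probs = state ** 2
print(np.isclose(probs.sum(), 1.0))  # True: the probabilities sum to 1
```

Ten qubits already need 1,024 amplitudes; fifty would need over a quadrillion – which is exactly why simulators run out of steam and real quantum hardware matters.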

There’s a race for quantum computing hardware taking place and China sees this as a massively strategic direction. Meanwhile, the UK is already an academic centre of excellence – now looking to bring quantum computing to market. We’ll have usable devices in 2-3 years (where “usable” means that they won’t be cracking encryption, but will have initial applications in chemistry and biology).

Microsoft Research is leading a consortium called Station Q and, later this year, Microsoft will release a new quantum computing programming language, along with a quantum computing simulator. With these, developers will be able to both develop and debug quantum programs implementing quantum algorithms.

Predicting the future?

Amazon, Google and Microsoft each invest over $12bn a year in R&D. As demonstrated in the video above, their datacentres are not something that many organisations can afford to build, but they will drive down the cost of computing. That drives down the cost for the rest of us to rent cloud services, which means more data, more AI – and the cycle continues.

I’ve shared 7 “technology bets” – and there are others I haven’t covered, like the use of graphene – and my list is very much influenced by my work with Microsoft technologies and services. We can’t always predict the future, but all of these are real… the only bet is how big they will be. Some are mainstream, some are up and coming – and some will literally change the world.

Credit: Thanks to Rob Fraser at Microsoft for the initial inspiration – and to Alun Rogers (@AlunRogers) for helping place some of these themes into context.

The annotated world – the future of geospatial technology? (@EdParsons at #DigitalSurrey)

Tonight’s Digital Surrey was, as usual, a huge success with a great speaker (Google’s @EdParsons) in a fantastic venue (Farnham Castle).  Ed spoke about the future of geospatial data – about annotating our world to enhance the value that we can bring from mapping tools today but, before he spoke of the future, he took a look at how we got to where we are.

What is geospatial information? And how did we get to where we are today?

Geospatial information is very visual, which makes it powerful for telling stories, and one of the most famous and powerful images is that of the Earth viewed from space – the “blue marble”. This emotive image has been used many times but has only been personally witnessed by around 20 people, starting with the Apollo 8 crew, 250,000 miles from home, looking at their own planet. We see this image with tools like Google Earth, which allows us to explore the planet and look at humankind’s activities. Indeed, about 1 billion people use Google Maps/Google Earth every week – that’s about a third of the Internet population, roughly equivalent to Facebook and Twitter combined [just imagine how successful Google would be if they were all Google+ users…]. Using that metric, we can say that geospatial data is now pervasive – a huge shift over the last 10 years as it has become more accessible (although much of the technology has been around longer).

The annotated world is about going beyond the image and pulling out otherwise-invisible information so, in a digital sense, it’s now possible to have a map of 1:1 scale or even beyond. For example, in Google Maps we can look at Street View and even see annotations on buildings. This can be augmented with further information (e.g. restrictions on the directions in which we can drive, details about local businesses) to provide actionable insight. Google also harvests information from the web to create place pages (something that could be considered ethically dubious, as it draws people away from the websites of the businesses involved) but it can also provide additional information from image recognition – for example, identifying the locations of public wastebins or adding details of parking restrictions (literally from text recognition on road signs). The key to the annotated world is collating and presenting information in a way that’s straightforward and easy to use.

Using other tools in the ecosystem, mobile applications can be used to easily review a business and post it via Google+ (so that it appears on the place page); or Google MapMaker may be used by local experts to add content to the map (subject to moderation – and the service is not currently available in the UK…).

So, that’s where we are today… we’re getting more and more content online, but what about the next 10 years?

A virtual (annotated) world

Google and others are building a virtual world in three dimensions. In the past, Google Earth pulled data from many sets (e.g. building models, terrain data, etc.) but future 3D images will be based on photographs (just as, apparently, Nokia has done for a while). We’ll also see 3D data being used to navigate inside buildings as well as outside. In one example, Google is working with John Lewis – who have recently installed Wi-Fi in their stores – to determine a user’s location and combine this with maps to navigate the store. The system is accurate to about 2-3 metres [and sounds similar to Tesco’s “in-store sat-nav” trial] and apparently it’s also available in London railway stations, the British Museum, etc.

Father Ted would not have got lost in the lingerie department if he had Google's mapping in @! says @ #DigitalSurrey
@markwilsonit
Mark Wilson

Ed made the point that the future is not driven by paper-based cartography, although there were plenty of issues taken with this in the Q&A later, highlighting that we still use ancient maps today, and that our digital archives are not likely to last that long.

Moving on, Ed highlighted that Google now generates map tiles on the fly (it used to take 6 weeks to rebuild the map) and new presentation technologies allow for client-side rendering of buildings – for example, St Paul’s Cathedral in London. With services such as Google Now (on Android), contextual information may be provided, driven by location and personal context.

With Google’s Project Glass, that becomes even more immersive with augmented reality driven by the annotated world:

Although someone also mentioned to me the parody which also raises some good points:

Seriously, Project Glass makes Apple’s Siri look way behind the curve – and for those who consider the glasses to be a little uncool, I would expect them to become much more “normal” over time – built into a normal pair of shades, or even into prescription glasses… certainly no sillier than those Bluetooth earpieces that we used to use!

Of course, there are privacy implications to overcome but, consider what people share today on Facebook (or wherever) – people will share information when they see value in it.

Big data, crowdsourcing 2.0 and linked data

At this point, Ed’s presentation moved on to talk about big data. I’ve spent most of this week co-writing a book on this topic (I’ll post a link when it’s published) and nearly flipped when I heard the normal big data marketing rhetoric (the 3 Vs) being churned out. Putting aside the hype, Google should know quite a bit about big data (Google’s search engine is a great example and the company has done a lot of work in this area) and the annotated world has to address many of the big data challenges including:

  • Data integration.
  • Data transformation.
  • Near-real-time analysis using rules to process data and take appropriate action (complex event processing).
  • Semantic analysis.
  • Historical analysis.
  • Search.
  • Data storage.
  • Visualisation.
  • Data access interfaces.

Moving back to Ed’s talk, what he refers to as “Crowdsourcing 2.0” is certainly an interesting concept. Citing Vint Cerf (Internet pioneer and Google employee), Ed said that there are an estimated 35bn devices connected to the Internet – and our smartphones are great examples, crammed full of sensors. These sensors can be used to provide real-time information for the annotated world: average journey times based on GPS data, for example; or even weather data if future smartphones were to contain a barometer.

Linked data is another topic worthy of note which, at its most fundamental level, is about making the web more interconnected. There’s been a lot of work done on ontologies, categorising content, etc. [Plug: I co-wrote a white paper on the topic earlier this year] but Google, Yahoo, Microsoft and others are supporting schema.org, a collection of microdata tags that websites can use to mark up content in a way that’s recognised by the major search providers. For example, a tag like <span itemprop="addressCountry">Spain</span> might be used to indicate that Spain is a country, with further tags to show that Barcelona is a city and that the Nou Camp is a place to visit.

Ed’s final thoughts

Summing up, Ed reiterated that paper maps are dead and that they will be replaced with more personalised information (of which, location is a component that provides content). However, if we want the advantages of this, we need to share information – with those organisations that we trust and where we know what will happen with that info.

Mark’s final thoughts

The annotated world is exciting and has stacks of potential if we can overcome one critical stumbling block that Ed highlighted (and I tweeted):

In order to create a more useful, personal, contextual web, organisations need to gain our trust to share our information #DigitalSurrey
@markwilsonit
Mark Wilson

Unfortunately, there are many who will not trust Google – and I find it interesting that Google is an advocate of consuming open data to add value to its products but I see very little being put back in terms of data sets for others to use. Google’s argument is that it spent a lot of money gathering and processing that data; however it could also be argued that Google gets a lot for free and maybe there is a greater benefit to society in freely sharing that information in a non-proprietary format (rather than relying on the use of Google tools). There are also ethical concerns with Google’s gathering of Wi-Fi data, scraping website content and other such issues but I expect to see a “happy medium” found, somewhere between “Don’t Be Evil” and “But we are a business after all”…

Thanks as always to everyone involved in arranging and hosting tonight’s event – and to Ed Parsons for an enlightening talk!

Last Orders at The Fantastic Tavern (#TFTLondon)

About a year ago, I wrote about a fantastic concept called The Fantastic Tavern (TFT), started by Matt Bagwell (@mattbagwell) of EMC Consulting (ex-Conchango – where I also have some history). Since then I’ve been to a few more TFTs (and written about them here) and they’ve got bigger, and bigger. What was a few people in a pub is now a major logistical challenge and Matt’s decided to call it a day. But boy, did it go out with a bang!

Last night’s TFT was at Ravensbourne (@RavensbourneUK) – a fantastic mixture of education and business innovation hub on London’s Greenwich peninsula. I was blown away by what Chris Thompson and the team at Ravensbourne have achieved, so I’ll write about that another day. Suffice to say, I wish my university had worked like that…

Last night’s topic was 2012 trends. Personally, I thought the Top Gear-style cool wall (“sooo last year, tepid, cool, sub-zero”) was way off the mark (in terms of placing the trends) but that doesn’t really matter – there were some great pitches from the Ravensbourne students and other invited speakers – more than I can do justice to in a single blog post so I’ll come back and edit this later as the presentations go online (assuming that they will!)

The evening was introduced by Mike Short, VP of Innovation and R&D at O2/Telefónica, who also sits on the board of governors at Ravensbourne and so is intimately involved in taking an institution with its roots in Bromley College of Art (of David Bowie fame) from Chislehurst to providing art, design, fashion, Internet and multimedia education on the Greenwich Peninsula, next to the most visited entertainment venue in the world (The O2 – or North Greenwich Arena). Mike spoke about a new business incubator project that O2 is bringing to London in the next 3 months, as O2 looks at taking the world’s 6bn mobile subscriptions (not just phones, but broadband, payment systems, etc.) and connecting education, healthcare, transport and more. In an industry that’s barely 25 years old, by the end of the year there will be more devices than people (the UK passed this point in 2006) and the market is expected to grow to more than 20bn connections by 2020.

Matt then spoke about the omni-channel world in which we live (beyond multi-channel) – simultaneously interacting on all channels and fuelling a desire “to do things faster”.

Moving on to the 2012 trends, we saw:

  • A. Craddock talking about smart tags – RFID and NFC tokens that can interact with our mobile devices and change their behaviour (e.g. switching to/from silent mode). These can simplify our daily routines – enabling/disabling functionality, sharing information, making payments, etc. – but we also need to consider privacy (location tracking, etc. – opt in/out), openness (which may be a benefit for some), ecology (printable tags using biodegradable materials) and device functionality (i.e. will they work with all phones – or just a subset of smartphones?).
  • Riccie Audrie-Janus (@_riccie) talking about how, in order to make good use of technology, we need to look at the people element first.  I was unconvinced – successful technology implementation is about people, process and technology and I don’t think it matters that kids don’t understand the significance of a floppy disk icon when saving a document – but she had some interesting points to make about our need to adapt to ever-more-rapidly developing technology as we progress towards an ever-more complex world where computing and biology combine.
  • @asenasen speaking about using DIY healthcare to help focus resources and address issues of population growth, economics and cost. Technology can’t replace surgeons but it can help people make better healthcare decisions with examples including: WebMD for self-diagnosis; PatientsLikeMe providing a social network; apps to interact with our environment and translate into health benefits (e.g. Daily Burn); peripheral devices like FitBit [Nike+, etc.] that interact with apps and present challenges. It’s not just in the consumer space either with Airstrip Technologies creating apps for healthcare professionals. Meanwhile, in the developing world SMS can be used (ChildCount), whilst in Japan new toilets are being developed that can, erhum, analyse our “output”.  Technology has the potential to transform personal health and enable the smart distribution of healthcare.
  • Matt Fox (@mattrfox) talked about 2012 becoming the year of the artist-entrepreneur, citing Louis CK as an example, talking about dangerous legislation like SOPA, Y Combinator’s plans to “Kill Hollywood”, Megabox (foiled by the MegaUpload takedown) and Pirate Bay’s evolution of file sharing to include rapid-prototype designs. Matt’s final point was that industry is curtailing innovation – and we need to innovate past this problem.
  • Chris Hall (@chrisrhall) spoke about “grannies being the future” – using examples of early retirement leaving pensioners with money and an opportunity to become entrepreneurs (given a life expectancy of 81 years for a man in the UK, and citing Trevor Baylis as an example). I think he hit on something here – we need to embrace experience to create new opportunities for the young, but I’m not sure how many more people will enjoy early retirement, or that there will be much money sloshing around from property as we increasingly find it necessary to have 35-year and even multi-generation mortgages.
  • James Greenaway (@jvgreenaway) talked about social accreditation – taking qualifications online, alongside our social personas. We gain achievements on our games consoles, in casual games (Farmville), on social media (Foursquare), through crowdsourcing (Stack Overflow), etc. – so why not integrate that with education (P2PU, eHow and iTunes U) and open all of our achievements to the web? James showed more examples to help with reputation management (spider graphs showing what we’re good at [maybe combined with a future of results-oriented working?]) and sees a real future in new ways of assessing and proving skills becoming accepted.
  • Ashley Pollak from ETIO spoke about the return of craft, as we turn off and tune out. Having only listened to Radio 4’s adaptation of Susan Maushart’s Winter of Our Disconnect the same day, I could relate to the need to step back from the always connected world and find a more relevant, less consuming experience. And as I struggle to balance work and this blog post this morning I see advantages in reducing the frequency of social media conversations but increasing the quality!
  • Ravensbourne’s Chris Thompson spoke about virtual innovation – how Cisco is creating a British Innovation Gateway to connect incubators and research centres of excellence – and how incubation projects can now be based in the cloud and are no longer predicated on where a university is located, but where ideas start and end.
  • The next pitch was about new perspectives – as traditional photography dies (er… not on my watch) in favour of new visual experiences. More than just 3D: plenoptic (or light field) cameras, time-of-flight cameras, depth sensors, LIDAR and 3D scanning and printing. There are certainly some exciting things happening (like Tesco Augmented Reality) – and the London 2012 Olympics will be filmed in 3D and presented in an interactive 360° format.
  • Augment and Mix was a quick talk about how RSA Animate talks use a technique called scribing to take content that is great, but maybe not that well presented, and make it entertaining by re-interpreting/illustrating. Scribing may be “sooo last year” but there are other examples too – such as “Shakespeare in 90 seconds” and “Potted Potter”.
  • Lee Morgenroth’s (@leemailme‘s) pitch was for Leemail – a system that allows private addresses to be used for web sign-ups (one per site) and then turned on/off at will. My more-technically minded friends say “I’ve been doing that for years with different aliases” – personally I just use a single address and a decent spam filter (actually, not quite as good since switching from GMail to Office 365) – but I think Lee may be on to something for non-geeks… let’s see!
  • Finally, we saw a film from LS:N profiling some key trends from the last 10 years, as predicted and in reality (actually, I missed most of that for a tour of Ravensbourne!)

There were some amazing talks and some great ideas – I certainly took a lot away from last night in terms of inspiration so thank you to all the speakers. Thanks also to Matt, Michelle (@michelleflynn) and everyone else involved in making last night’s TFT (and all the previous events) happen. It’s been a blast – and I look forward to seeing what happens next…

[I rushed this post out this morning but fully intend to come back and add more links, videos, presentations, etc. later – so please check back next week!]

Virtual Worlds (@stroker at #DigitalSurrey)

Last night saw another Digital Surrey event which, I’ve come to find, means another great speaker on a topic of interest to the digerati in and around Farnham/Guildford (although I also noticed that I’m not the only “foreigner” at Digital Surrey with two of the attendees travelling from Brighton and Cirencester).

This time the speaker was Lewis Richards, Technical Portfolio Director in the Office of Innovation at CSC, and the venue was CSC’s European Innovation Centre.  Lewis spoke with passion and humour about the development of virtual worlds, from as far back as the 18th century. As a non-gamer (I do have an Xbox, but I’m not heavily into games), I have to admit quite a lot of it was all new to me, but fascinating nevertheless, and I’ve drawn out some of the key points in this post (my edits in []) – or you can navigate the Prezi yourself – which Lewis has very kindly made public!

  • The concept of immersion (in a virtual world) has existed for more than 200 years:
  • In 1909, E.M. Forster wrote The Machine Stops, a short story in which everybody is connected, with a universal book for reference purposes, and communications concepts not unlike today’s video conferencing – this was over a hundred years ago!
  • In [1957], Morton Heilig invented the Sensorama machine which allowed viewers to enter a world of virtual reality with a combination of film and mechanics for a 3D, stereo experience with seat vibration, wind in the hair and smell to complete the illusion.
  • The first head-up displays and virtual reality headsets were patented in the 1960s (and are not really much more usable today).
  • In 1969, ARPANET was created – the foundation of today’s Internet and the world wide web [in 1990].
  • In [1974], the roleplay game Dungeons and Dragons was created (in book form), teaching people to empathise with virtual characters (roleplay is central to the concept of virtual worlds); the holodeck (Rec Room) was first referenced in a Star Trek cartoon in 1974; and, back in 1973, Myron Krueger had coined the term artificial reality [Krueger created a number of virtual worlds in his work (Glowflow, Metaplay, Psychic Space, Videoplace)].
  • Lewis also showed a video of a “B-Spline Control” which is not unlike the multitouch [and Kinect tracking] functionality we take for granted today – indeed, pretty much all of the developments from the last 40-50 years have been iterative improvements – we’ve seen no real paradigm shifts.
  • 1980s developments included:
    • William Gibson coined the term cyberspace in his short stories (featured in Omni magazine).
    • Disney’s Tron, a film which still presents a level of immersion to which we aspire today.
    • The Quantum Link network service, featuring the first multiplayer network game (i.e. not just one player against a computer).
  • In the 1990s, we saw:
    • Sir Tim Berners-Lee‘s World Wide Web [possibly the biggest step forward in online information sharing, bringing to life E.M. Forster’s universal book].
    • The first use of the term avatar for a digital manifestation of oneself (in Neal Stephenson’s Snow Crash).
    • Virtual reality suits
    • Sandboxed virtual worlds (AlphaWorld)
    • Strange Days, with the SQUID (Superconducting Quantum Interference Device) receptor – still looking for immersion – getting inside the device – and The Matrix was still about “jacking in” to the network.
    • Virtual cocoons (miniaturised, electronic, versions of the Sensorama – but still too intrusive for mass market adoption)
  • The new millennium brought Second Life (where, for a while, almost every large corporation had an island) and World of Warcraft (WoW) – a behemoth in terms of revenue generation – but virtual worlds have not really moved forward. Social networking blinded us and took the mass market along a different path for collaboration; meanwhile kids do still play games and virtual reality is occurring – it’s just not in the mainstream.
  • Lewis highlighted how CSC uses virtual worlds for collaboration and as training aids; how WoW encouraged teamwork and leadership; and how content with physical value may be created inside virtual worlds (leading to virtual crime).
  • Whilst virtual reality is not really any closer as a consumer concept than it was in the 1950s, there are some real-world uses (such as virtual reality immersion being used to take away feelings of pain whilst burns victims receive treatment).
  • Arguably, virtual reality has become, just, “reality” – everywhere we go we can communicate and have access to our “stuff” – we don’t have to go to a virtual world, but Lewis asks if we will ever give up the dream of immersion – of “jacking in” [to the matrix].
  • What is happening is augmented reality – using our phones/tablets, etc. to interact between physical and virtual worlds. Lewis also showed some amazing concepts from Microsoft Research, like OmniTouch, using a short-range depth camera and a pico projector to turn everyday objects into a surface to interact with; and Holodesk for direct 3D interactions.
  • Lewis explained that virtual worlds are really a tool – the innovation is in the technology and the practical uses are around virtual prototyping, remote collaboration, etc. [like all innovations, it’s up to us to find a problem, to which we can apply a solution and derive value – perhaps virtual worlds have tended to be a technology looking for a problem?]
  • Lewis showed us CSC’s Teleplace, a virtual world where colleagues can collaborate (e.g. for virtual bid rooms and presentations), saving a small fortune in travel and conference call costs but, just to finish up with a powerful demo, he asked one of the audience for a postcode, took the Google Streetview URL and pasted it into a tool called Blue Mars Lite – at which point his avatar could be seen running around inside Streetview. Wow indeed! That’s one virtual world in which I have to play!

After hours at UK TechDays

Over the last few years, I’ve attended (and blogged in detail about) a couple of “after hours” events at Microsoft – looking at some of the consumer-related things that we might do with our computers outside of work (first in May 2007 and then in November 2008).

Tonight I was at another one – an evening event to complement the UK TechDays events taking place this week in West London cinemas – and, unlike previous after hours sessions, this one didn’t even try to push Microsoft products at us (previous events felt a bit like Windows, Xbox and Live promotions at times) – it just demonstrated a whole load of cool stuff that people might want to take a look at.

I have to admit I nearly didn’t attend – the daytime UK TechDays events have been a little patchy in terms of content quality and I’m feeling slightly burned out after what has been a busy week with two Windows Server User Group evening events on top of UK TechDays and the normal work e-mail triage activities. I’m glad I made it though and the following list is just a few of the things we saw Marc Holmes, Paul Foster and Jamie Burgess present tonight:

  • A discussion of some of the home network functionality that the guys are using for media, home automation etc. – predictably a huge amount of Microsoft media items (Media Center PCs, Windows Home Server, Xbox 360, etc.) but also the use of X10, Z-Wave or RFXcom devices for sending control signals around (over power lines or RF) for home automation purposes, as well as Ethernet over power line for streaming from Media Center PCs. Other technologies discussed included: Logitech’s DiNovo Edge keyboard and Harmony One universal remote control; SiliconDust HD HomeRun for sharing DVB-T TV signals across Ethernet to PCs; using xPL to control home automation equipment.
  • Lego Mindstorms NXT for building-block robotics, including the First Lego League – to inspire young people to get involved with science and technology in a positive way.
  • Kodu Game Lab – a visual programming language made specifically for creating games that is designed to be accessible for children and enjoyable for anyone.
  • Developing XNA games with XNA Game Studio and Visual Studio, then deploying them to Xbox or even running them in the Windows Phone emulator! Other related topics included the use of the Freescale Flexis JM Badge board to integrate an accelerometer with an XNA game and GoblinXNA for augmented reality/3D games development. There’s also a UK XNA user group.
  • A look at how research projects (from Microsoft Research) move into Labs and eventually become products after developers have optimised and integrated them. Microsoft spent $9.5bn on research and development in 2009 and some of the research activities that have now come to life include Photosynth (which became a Windows client application and is now included within Silverlight) and the Seadragon technologies, which also became a part of Silverlight (Deep Zoom) and are featured in the Hard Rock Cafe Memorabilia site. A stunning example is Blaise Aguera y Arcas’ TED 2010 talk on the work that Microsoft is doing to integrate augmented reality maps in Bing – drawing on the Seadragon technologies to provide fluidity whilst navigating maps in 3D – but that environment can be used as a canvas for other things, like streetside photos (far more detailed than Google Streetview). In his talk (which is worth watching and embedded below), Blaise navigates off the street and actually inside Seattle’s Pike Place market before showing how the Microsoft imagery can be integrated with Flickr images (possibly historical images for “time travel”) and even broadcasting live video. In addition to this telepresence (looking from the outside in), points of interest can be used to look out when on the ground and get details of what’s around – even looking up to the sky and seeing integration with the Microsoft Research WorldWide Telescope.
  • Finally, Paul spoke about his creation of a multitouch (Surface) table for less than £100 (using CCTV infrared cameras, a webcam with the IR filter removed and NUI software – it’s now possible to do the same with Windows 7) and a borrowed projector before discussing his own attempts at virtual reality in his paddock at home.
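
For anyone curious about the xPL protocol mentioned above: it’s a simple plain-text messaging scheme broadcast over UDP (port 3865) for talking to home automation devices. As a rough sketch of what an xPL command looks like on the wire – the source and device names here are made up for illustration, and I haven’t tested this against a real xPL hub:

```python
import socket

def build_xpl_message(source, target, schema, body):
    """Build an xPL command message in its plain-text, newline-delimited format:
    a header block (message type, hop count, source, target) followed by a
    schema block (e.g. x10.basic) containing the command body."""
    lines = ["xpl-cmnd", "{", "hop=1", f"source={source}", f"target={target}", "}",
             schema, "{"]
    lines += [f"{key}={value}" for key, value in body.items()]
    lines += ["}"]
    return "\n".join(lines) + "\n"

def send_xpl(message, broadcast_addr="255.255.255.255", port=3865):
    """Broadcast the message on the local network (xPL listens on UDP 3865)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(message.encode("ascii"), (broadcast_addr, port))

if __name__ == "__main__":
    # Hypothetical device: turn on X10 lamp module A1.
    msg = build_xpl_message("mwilson-demo.lamp1", "*", "x10.basic",
                            {"command": "on", "device": "A1"})
    print(msg)
```

In practice an xPL network needs a hub running on each machine to relay messages to local applications, so treat this as a flavour of the wire format rather than a working setup.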

Whilst I’m unlikely to get stuck into all of these projects, there is plenty of geek scope here – I may have a play with home automation and it’s good to know some of the possibilities for getting my kids involved with creating their own games, robots, etc. As for Blaise Aguera y Arcas’ TED 2010 talk, it was fantastic to see how Microsoft still innovates (I only wish that all of the Bing features were available globally… here in the UK we don’t have all of the functionality that’s available stateside).