The annotated world – the future of geospatial technology? (@EdParsons at #DigitalSurrey)

This content is 12 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Tonight’s Digital Surrey was, as usual, a huge success, with a great speaker (Google’s @EdParsons) in a fantastic venue (Farnham Castle). Ed spoke about the future of geospatial data – about annotating our world to enhance the value we get from today’s mapping tools – but, before he spoke of the future, he took a look at how we got to where we are.

What is geospatial information? And how did we get to where we are today?

Geospatial information is very visual, which makes it powerful for telling stories, and one of the most famous and powerful images is that of the Earth viewed from space – the “blue marble”. This emotive image has been used many times but has only been personally witnessed by around 20 people, starting with the Apollo 8 crew, 250,000 miles from home, looking at their own planet. We see this image with tools like Google Earth, which allows us to explore the planet and look at humankind’s activities. Indeed, about 1 billion people use Google Maps/Google Earth every week – that’s about a third of the Internet population, roughly equivalent to Facebook and Twitter combined [just imagine how successful Google would be if they were all Google+ users…]. By that metric, geospatial data is now pervasive – a huge shift over the last 10 years as it has become more accessible (although much of the technology has been around longer).

The annotated world is about going beyond the image and pulling out otherwise invisible information so, in a digital sense, it’s now possible to have a map at 1:1 scale or even beyond. For example, in Google Maps we can look at Street View and even see annotations of buildings. This can be augmented with further information (e.g. restrictions on the directions in which we can drive, details about local businesses) to provide actionable insight. Google also harvests information from the web to create place pages (something that could be considered ethically dubious, as it draws people away from the websites of the businesses involved) but it can also provide additional information from image recognition – for example, identifying the locations of public waste bins or adding details of parking restrictions (literally from text recognition on road signs). The key to the annotated world is collating and presenting information in a way that’s straightforward and easy to use.

Using other tools in the ecosystem, it’s easy to review a business from a mobile application and post the review via Google+ (so that it appears on the place page); or Google MapMaker may be used by local experts to add content to the map (subject to moderation – and the service is not currently available in the UK…).

So, that’s where we are today… we’re getting more and more content online, but what about the next 10 years?

A virtual (annotated) world

Google and others are building a virtual world in three dimensions. In the past, Google Earth pulled data from many sets (e.g. building models, terrain data, etc.) but future 3D imagery will be based on photographs (just as, apparently, Nokia has done for a while). We’ll also see 3D data being used to navigate inside buildings as well as outside. In one example, Google is working with John Lewis, who have recently installed Wi-Fi in their stores, to use this to determine a user’s location and combine it with maps to navigate the store. The system is accurate to about 2-3 metres [and sounds similar to Tesco’s “in store sat-nav” trial] and apparently it’s also available in London railway stations, the British Museum, etc.

“Father Ted would not have got lost in the lingerie department if he had Google’s mapping in @! says @ #DigitalSurrey” – Mark Wilson (@markwilsonit)

Ed made the point that the future is not driven by paper-based cartography, although this was challenged in the Q&A later, with attendees highlighting that we still use ancient maps today and that our digital archives are unlikely to last that long.

Moving on, Ed highlighted that Google now generates map tiles on the fly (it used to take six weeks to rebuild the map) and new presentation technologies allow for client-side rendering of buildings – for example, St Paul’s Cathedral in London. With services such as Google Now (on Android), contextual information may be provided, driven by location and personality.

With Google’s Project Glass, that becomes even more immersive with augmented reality driven by the annotated world:

[youtube=http://www.youtube.com/watch?v=9c6W4CCU9M4]

Someone also mentioned to me the parody, which raises some good points:

[youtube=http://www.youtube.com/watch?v=t3TAOYXT840]

Seriously, Project Glass makes Apple’s Siri look way behind the curve – and, for those who consider the glasses to be a little uncool, I would expect them to become much more “normal” over time – built into a normal pair of shades, or even into prescription glasses… certainly no more silly than those Bluetooth earpieces that we used to use!

Of course, there are privacy implications to overcome but consider what people share today on Facebook (or wherever) – people will share information when they see value in it.

Big data, crowdsourcing 2.0 and linked data

At this point, Ed’s presentation moved on to talk about big data. I’ve spent most of this week co-writing a book on this topic (I’ll post a link when it’s published) and nearly flipped when I heard the usual big data marketing rhetoric (the 3 Vs: volume, velocity and variety) being churned out. Putting aside the hype, Google should know quite a bit about big data (Google’s search engine is a great example and the company has done a lot of work in this area) and the annotated world has to address many of the big data challenges, including:

  • Data integration.
  • Data transformation.
  • Near-real-time analysis using rules to process data and take appropriate action (complex event processing).
  • Semantic analysis.
  • Historical analysis.
  • Search.
  • Data storage.
  • Visualisation.
  • Data access interfaces.

Moving back to Ed’s talk, what he refers to as “Crowdsourcing 2.0” is certainly an interesting concept. Citing Vint Cerf (Internet pioneer and Google employee), Ed said that there are an estimated 35bn devices connected to the Internet – and our smartphones are great examples, crammed full of sensors. These sensors can be used to provide real-time information for the annotated world: average journey times based on GPS data, for example; or even weather data if future smartphones were to contain a barometer.

Linked data is another topic worthy of note which, at its most fundamental level, is about making the web more interconnected. A lot of work has been done on ontologies, categorising content, etc. [Plug: I co-wrote a white paper on the topic earlier this year] but Google, Yahoo, Microsoft and others are supporting schema.org as a collection of microformats – tags that websites can use to mark up content in a way that’s recognised by major search providers. For example, a tag like <span itemprop="addressCountry">Spain</span> might be used to indicate that Spain is a country, with further tags to show that Barcelona is a city and that the Nou Camp is a place to visit (see the sketch below).
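
To illustrate how those tags nest, here’s a hypothetical fragment using schema.org’s published vocabulary (Place, PostalAddress, addressLocality and addressCountry are real schema.org terms, but the markup itself is my own sketch, not an example from Ed’s talk):

    <!-- Hypothetical schema.org microdata: a place, in a city, in a country -->
    <div itemscope itemtype="http://schema.org/Place">
      <span itemprop="name">Nou Camp</span>
      <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
        <span itemprop="addressLocality">Barcelona</span>,
        <span itemprop="addressCountry">Spain</span>
      </div>
    </div>

Marked up like this, a search provider can parse not just that “Spain” appears on the page, but that it’s the country part of an address attached to a named place.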

Ed’s final thoughts

Summing up, Ed reiterated that paper maps are dead and that they will be replaced with more personalised information (with location as one of the components driving that content). However, if we want the advantages of this, we need to share information – with organisations that we trust, where we know what will happen with that information.

Mark’s final thoughts

The annotated world is exciting and has stacks of potential, if we can overcome one critical stumbling block that Ed highlighted (and I tweeted):

“In order to create a more useful, personal, contextual web, organisations need to gain our trust to share our information #DigitalSurrey” – Mark Wilson (@markwilsonit)

Unfortunately, there are many who will not trust Google – and I find it interesting that Google is an advocate of consuming open data to add value to its products, yet I see very little being put back in terms of data sets for others to use. Google’s argument is that it spent a lot of money gathering and processing that data; however, it could also be argued that Google gets a lot for free, and maybe there is a greater benefit to society in freely sharing that information in a non-proprietary format (rather than relying on the use of Google tools). There are also ethical concerns with Google’s gathering of Wi-Fi data, scraping of website content and other such issues, but I expect to see a “happy medium” found, somewhere between “Don’t Be Evil” and “But we are a business after all”…

Thanks as always to everyone involved in arranging and hosting tonight’s event – and to Ed Parsons for an enlightening talk!

Virtual worlds in 2022 (Dr Richard Bartle at #digitalsurrey)

This content is 12 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

It’s been a few months since I’ve been along to a Digital Surrey event but last night I went to see Dr Richard Bartle (the massively multiplayer online gaming pioneer whose work on personality types was mentioned last year in the first Digital Surrey event I attended, Michael Wu’s talk on the science of gamification) speak about the future of virtual worlds.

In contrast to Lewis Richards’ Virtual Worlds talk at CSC last year, Richard Bartle’s talk focused on three possible courses of development for the massively multiplayer online gaming (MMO) industry (slides are available). He started out by commenting that, had he been asked the same question in 1992, he’d have thought we would be further ahead than we are by now…

Three views of the future

In the first view of virtual worlds in 2022, Richard looked at the legal issues that threaten online gaming, including:

  • Applying reasonable laws wrongly – for example, a well-meaning judge applying the same rules in World of Warcraft as in Second Life – and “it’s just a game” is no longer a way in which to avoid the real world.
  • Unfair contracts – with End User Licensing Agreements (EULAs) found to be unfair and ownership over virtual goods bringing property laws into play (Linden Labs, 2007).
  • Intellectual property laws – ownership prevents the destruction or alteration of virtual property; it’s impossible to stop people from selling “stuff” (even if it ruins the game); an inability to deny access by banning things that have unintended consequences (leaving gaming open to compensation claims); and implications of publishing works of art (licensing, records of origin, etc.) – what happens if the rights to an object upon which others are built is suddenly removed?
  • Gaming laws – even free-to-play games have value in their objects (as proven in Dutch law in 2012) and, if everything has a value, gaming is essentially about chance and cash rewards – i.e. gambling! Some parts of the world (e.g. the USA) have fierce laws on gambling…
  • Money laundering – with a scenario something like: 1) Steal real-world money; 2) hand money to a front; 3) front buys virtual currency; 4) pass virtual currency to game characters; 5) sell virtual goods (legitimately); 6) clean money!
  • Taxation laws – if virtual money has real world value, then it becomes taxable (both income and sales).
  • Patents – it’s possible to patent obvious “inventions” for very little outlay but it costs a lot to get a patent revoked – this stifles innovation.

For these reasons, Richard Bartle says he sees a bleak future when he goes to legal or policy conferences – just as programmers see bugs in code, lawyers see bugs in laws – and accountants see bugs everywhere (it’s their job to highlight problems).

In the second view, the repeated incursions of reality into virtual worlds gradually break down the distinction between real and virtual – and virtual worlds are no longer imaginary places of freedom and adjuncts to reality. New MMOs open up and recruit players from existing MMOs – but these are the disloyal players – or they get MMO “newbies”. With too much reality, MMOs become unsustainable as fantasy: existing players’ expectations are lowered, whilst new players didn’t have high expectations to start with… Meanwhile there’s the question of monetisation – with 95% of casual gamers being funded by the 5% who pay – those who pay have the ability to do so (i.e. are richer in real life) and their ability to become more successful in the game removes any sense of fair play – is it still a game if one can buy success? Reaching out to children becomes attractive – both as a source of new gamers and also because micropayments make it easier to take money from children, as the credits are paid for by the parents. And, as non-gamers use “gamification” in marketing and “edutainment” as a teaching aid, attempts to combine fun games with “un-fun” education lead to nothing more than un-fun games. In effect, the sanctity of game spaces as retreats from reality disappears…

In Richard Bartle’s third view of the future, MMO designers find themselves able to influence politics. In 2010, the median age of the UK population was 40 (41 for women, 38 for men), so half the population were born in 1970 or later and grew up with access to computers. These people play games, don’t feel addicted to them, and resent politicians implying that gamers are psychopaths. Consequently, politicians representing games as anti-social find themselves unpopular. Gaming flourishes, with new casual games, new players, and simplified creation of virtual worlds.

When Richard speaks to designers and developers he sees passion, imagination, and freedom of spirit because MMOs give something that you can’t get elsewhere – the ability to be yourself. If that goes away, they will simply create new virtual worlds.

What is most likely?

As for which future view is most likely, Richard’s whole presentation was linked through three films (High Noon, The Misfits, and Dirty Harry) – all of which featured actors who were also in The Good, the Bad and the Ugly. So he summed up the likely courses using that film:

  • The Good: virtual worlds provide a place for humans to be humans.
  • The Bad: virtual worlds are stifled with real-world laws and policy.
  • The Ugly: virtual worlds become mundane.

He considers that MMOs provide too much of what people want for them not to be successful and that, even if they were legislated into obscurity, it would only be a temporary state and they would return. I guess we’ll see when we look back from 2022…

My view

As a non-gamer (perhaps more accurately a casual gamer – I play the odd game on a tablet or a smartphone, and I do have an Xbox 360 – upon which my sons and I play Lego Pirates of the Caribbean and Kinect Sports, so I guess they are virtual worlds?), I found a lot of Richard’s views on “reality” rather difficult to grasp – and I got the impression that I wasn’t alone. Even so, the vision of the real world tainting the virtual world was fascinating – and perhaps, I fear, a little too real (it’s not just online gaming that is impacted by well-meaning but ultimately flawed real world decisions).

Speaking with one of the other attendees at the event, who mentioned someone had been questioning the link between the Internet and the real world, I guess my inability to understand the mindset of an MMO gamer is not so far removed from those who can’t see why I would want to live my life on social media…

Credits

Thanks again to the Digital Surrey team for staging another worthwhile event, sponsored by Martin Stillman from the Venture Strategy Partnership and hosted by Cameron Wilson from Surrey Enterprise.

[Update 27 March 2012: added link to Richard’s blog post and presentation materials]

12 tips for digital marketers (@allisterf at #digitalsurrey)

This content is 12 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

In yesterday’s post about marketing in a digital world, I mentioned Allister Frost’s 12 tips for marketers but didn’t go into the details. You can find them at the back of his deck on SlideShare but I took some notes too so I’ve added them here:

  1. Invest in social leadership and social players – it may be you, or it may be somebody else who sets the strategic direction, but find people with energy and enthusiasm to make it happen. Do not confuse the two roles: if you’re the social leader, don’t play, as you’ll lose sight of the strategy.
  2. Invest in tools and expertise – ask tough questions of vendors selling tools.
  3. Develop your social recommendation optimisation (SRO) strategy – optimise everything so it becomes recommended through social channels. Not to be confused with social media optimisation (SMO), which is short-sighted (too focused on channels).
  4. Listen, then engage – don’t assume you know the answers – understand the channel first.
  5. Answer the social telephone – if a phone was ringing, you would pick it up so treat social in the same way to avoid losing opportunities.
  6. Moderate wisely – if you don’t, your brand can become associated with spam.
  7. Create social objects – and think about how they get to customers. They may be a video, a white paper, or something else…
  8. Make it better when shared – thank, reward and encourage.
  9. Handpick your interfaces – go and find the channels where your audience is.
  10. Be remarkable – do things that people remark upon.
  11. Show some personality – there is a scale between appearing as a juvenile delinquent and a company robot; you can move along it – just don’t stay at the extremes.
  12. Fail fast, learn faster – continuously pilot-test (again and again…)

Marketing in a digital world (@allisterf at #digitalsurrey)

This content is 12 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Last Thursday was Digital Surrey night and this month’s speaker was Allister Frost, Head of Digital Marketing Strategy at Microsoft.  Allister gave an engaging talk on “doing marketing in a digital world” and, whilst there might have been a couple of things I wasn’t entirely convinced of, I’m not a marketing professional (even if I spend a good chunk of my day in what could be described as marketing) so I’ll defer to those with more experience.

Allister has kindly shared his slides, along with some supporting materials – which makes my task of blogging about the evening a lot easier, but I decided to have a play with Storify for this one:

I’m in two minds about this approach to curating the information from the evening… it took just as long as writing a blog post and all of Google’s (sorry, Bing’s) link love goes to another site… but it was worth a try (and it’s definitely a great tool when most of the content is already spread around the web). If you have any content from the evening that I missed, please get in touch and I’ll add it to the story.

[Update 22 December 2017: Storify is closing down. I exported the content in HTML and JSON format but many of the links are now dead (many years have passed) so there’s little value in recreating this post]

Virtual Worlds (@stroker at #DigitalSurrey)

This content is 12 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Last night saw another Digital Surrey event which, I’ve come to find, means another great speaker on a topic of interest to the digerati in and around Farnham/Guildford (although I also noticed that I’m not the only “foreigner” at Digital Surrey with two of the attendees travelling from Brighton and Cirencester).

This time the speaker was Lewis Richards, Technical Portfolio Director in the Office of Innovation at CSC, and the venue was CSC’s European Innovation Centre.  Lewis spoke with passion and humour about the development of virtual worlds, from as far back as the 18th century. As a non-gamer (I do have an Xbox, but I’m not heavily into games), I have to admit quite a lot of it was all new to me, but fascinating nevertheless, and I’ve drawn out some of the key points in this post (my edits in []) – or you can navigate the Prezi yourself – which Lewis has very kindly made public!

  • The concept of immersion (in a virtual world) has existed for more than 200 years:
  • In 1909, E.M. Forster wrote The Machine Stops, a short story in which everybody is connected, with a universal book for reference purposes, and communications concepts not unlike today’s video conferencing – this was over a hundred years ago!
  • In [1957], Morton Heilig invented the Sensorama machine which allowed viewers to enter a world of virtual reality with a combination of film and mechanics for a 3D, stereo experience with seat vibration, wind in the hair and smell to complete the illusion.
  • The first heads-up displays and virtual reality headsets were patented in the 1960s (and are not really much more usable today).
  • In 1969, ARPANET was created – the foundation of today’s Internet and the world wide web [in 1990].
  • In [1974], the roleplay game Dungeons and Dragons was created (in book form), teaching people to empathise with virtual characters (roleplay is central to the concept of virtual worlds); the holodeck (Rec Room) was first referenced in a Star Trek cartoon in 1974; and, back in 1973, Myron Krueger had coined the term artificial reality [Krueger created a number of virtual worlds in his work (glowflow, metaplay, physic space, videoplace)].
  • Lewis also showed a video of a “B-Spline Control” which is not unlike the multitouch [and Kinect tracking] functionality we take for granted today – indeed, pretty much all of the developments from the last 40-50 years have been iterative improvements – we’ve seen no real paradigm shifts.
  • 1980s developments included:
    • William Gibson coined the term cyberspace in his short stories (featured in Omni magazine).
    • Disney’s Tron, a film which still presents a level of immersion to which we aspire today.
    • The Quantum Link network service, featuring the first multiplayer network game (i.e. not just one player against a computer).
  • In the 1990s, we saw:
    • Sir Tim Berners-Lee’s World Wide Web [possibly the biggest step forward in online information sharing, bringing to life E.M. Forster’s universal book].
    • The first use of the term avatar for a digital manifestation of oneself (in Neal Stephenson’s Snow Crash).
    • Virtual reality suits.
    • Sandboxed virtual worlds (AlphaWorld).
    • Strange Days, with the SQUID (Super-conducting Quantum Interference Device) receptor – still looking for immersion, getting inside the device – and The Matrix was still about “jacking in” to the network.
    • Virtual cocoons (miniaturised, electronic versions of the Sensorama – but still too intrusive for mass-market adoption).
  • The new millennium brought Second Life (where, for a while, almost every large corporation had an island) and World of Warcraft (WoW) – a behemoth in terms of revenue generation – but virtual worlds have not really moved forward. Social networking blinded us and took the mass market along a different path for collaboration; meanwhile, kids do still play games and virtual reality is occurring – it’s just not in the mainstream.
  • Lewis highlighted how CSC uses virtual worlds for collaboration; how they can also be used as training aids; and how WoW encouraged team working and leadership, and how content may be created inside virtual worlds with physical value (leading to virtual crime).
  • Whilst virtual reality is not really any closer as a consumer concept than it was in 1957, there are some real-world uses (such as virtual reality immersion being used to take away feelings of pain whilst burns victims receive treatment).
  • Arguably, virtual reality has become, just, “reality” – everywhere we go we can communicate and have access to our “stuff” – we don’t have to go to a virtual world, but Lewis asks if we will ever give up the dream of immersion – of “jacking in” [to the matrix].
  • What is happening is augmented reality – using our phone/tablets, etc. to interact between physical and virtual worlds. Lewis also showed some amazing concepts from Microsoft Research, like OmniTouch, using a short-range depth camera and a pico projector to turn everyday objects into a surface to interact with; and Holodesk for direct 3D interactions.
  • Lewis explained that virtual worlds are really a tool – the innovation is in the technology and the practical uses are around virtual prototyping, remote collaboration, etc. [like all innovations, it’s up to us to find a problem, to which we can apply a solution and derive value – perhaps virtual worlds have tended to be a technology looking for a problem?]
  • Lewis showed us CSC’s Teleplace, a virtual world where colleagues can collaborate (e.g. for virtual bid rooms and presentations), saving a small fortune in travel and conference call costs but, just to finish up with a powerful demo, he asked one of the audience for a postcode, took the Google Street View URL and pasted it into a tool called Blue Mars Lite – at which point his avatar could be seen running around inside Street View. Wow indeed! That’s one virtual world in which I have to play!

The future of mobile telecommunications (@jonin60seconds at #digitalsurrey)

This content is 13 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I’ve written before about Digital Surrey but I don’t think I’ve ever said what it is so, as Abigail Harrison (@abigailh) explained in her introduction to tonight’s event, Digital Surrey is a free community; a network to meet up, learn, share and to generate opportunities, whether that’s from the information found, someone you met, or something else.

What never ceases to amaze me is how the Digital Surrey organisers come up with a constant stream of great speakers and tonight was no exception, with PayPal UK’s Jon Bishop (@jonin60seconds) entertaining us all as he talked about the future (in fact, the now) of mobile communications – generating plenty of discussion in the process.

I’ve tried to capture the key points from Jon’s talk in this post, together with a few comments of my own in [ ]. Once the video/slides are available, I’ll come back and add some links to them too:

  • If you want to visit the future of mobile today, go to Africa:
    • South Africa has 6m Internet users, but only 750,000 are on fixed-line connections and 57% never use the “desktop” Internet.
    • Africa also has the most engaged social network in the world: MXit was launched in 2003 and has 22bn messages sent each month (compared with 8bn on Twitter), with 45 hours of average monthly usage (compared with 15 on Facebook).
    • Imagine paying for concert tickets, food, or a taxi in the UK using SMS – no chance! But it’s happening in Kenya using a system called M-Pesa (Swahili for mobile money) and Vodafone subsidiary Safaricom is now the biggest bank in east Africa. Half of the population uses it, with no need for a bank account (only 20% of people have a bank account, whereas 60% have access to a mobile phone).
    • This sort of activity is a necessity in Africa because of underinvested wired networks, expensive broadband, a rural population, large informal sector (traders, etc.), low bank account penetration and high mobile penetration. On top of this, using a mobile is safer than carrying cash around!
    • Not only is Africa transforming mobile communications, but mobile is transforming Africa. According to Nokia, each 10% increase in mobile penetration corresponds to a 0.8% increase in GDP.
  • Attempting to push aside some misconceptions about mobile, Jon highlighted that:
    • Blackberry has younger users than the Apple iPhone (which is too expensive and doesn’t have Blackberry Messenger functionality).
    • Google Android beats iOS as the top-selling mobile platform worldwide, although the Apple iPad is currently unrivalled in the tablet marketplace. Whilst [Apple] iOS and [Google] Android are the major players today, keep an eye on Amazon, Samsung, HTC, and [Microsoft/Nokia] Windows Phone.
    • Are mobile carriers profit-generating machines? Apparently not on current models: the cost per GB transferred is going down whilst the volume of data used is increasing, so the time will come when profitability dissipates. As those lines converge, mobile operators have a problem.
    • Is it true that mobile ads don’t work? Actually they do! They account for $3.3bn in advertising spend, with $1bn going to Google; 56% of executives click on mobile ads and they can deliver better value than web ads.
  • Using the example of his friend “Shane” (a 15-year-old), Jon highlighted that none of Shane’s friends have iPhones – almost all use Blackberrys. Some have Android as a second phone (for games like Farmville) but, without BBM (Blackberry Messenger), today’s teenagers are out of the loop. They don’t talk any more: the only person who calls Shane is his mum; and he only makes a call “when I’m in shit mate, innit” (i.e. in trouble). This prompts some questions as to what happens when young people enter the business world – if they’ll talk in front of their friends but not their teachers, what about their colleagues? Teenagers wake up with a phone in their hand [so does this 39-year-old]; they use them to organise everything (via BBM [, Facebook, etc.]). And they like phones with buttons (to type quickly).
  • Jon moved on to look at how online and mobile communications have changed communities:
    • We still have geographic communities, but people also come together to share a common interest, whether that’s the Ford RS Owners Club, knitting, or whatever [Digital people in Surrey?]. The speed and level of interaction is fuelled by mobile, on-the-go access although, of course, this depends on the capability of the platform and access to wireless communications.
    • Jon suggested that last month’s riots in London and elsewhere were interest-based, not geographic as, in a few hours, they jumped from north to south London before spreading more widely across the city. Perhaps the common interest was that they don’t like the Police, or perhaps it was just that they wanted free stuff from JD Sports! After the first few hours, media coverage fuelled the interest, so it’s difficult to say whether the mobile networks had any influence.
    • We’ve had social networks for years but now they are mobile. What does this mean for policing? Well, one senior Police Officer in the audience suggested that it means they need to be quicker!
  • Another area changed by mobile is photography. The social web has made the picture the beginning of the journey, not the end.
    • We used to take a roll of film, develop it, and put the pictures in an album, or a shoe box.
    • With digital we took the pictures, used the memory card to get them onto a computer and then share them/interact with them.
    • Mobile means we can skip the computer and gain instant gratification.
    • 150m images were shared on Instagram in a year [it may make bad pictures look deliberate, but Andy Piper commented it’s the 5th most valuable startup on the planet right now – and it’s not even on Android yet] and the Hudson River plane crash is often cited as an example of where news broke first on social networks.
    • Now we also know who took the picture, where it was taken (GPS), using what camera and what settings (EXIF), who’s in it (tagging), as well as the time and date.
  • Mobiles have also changed how we interact with the world, using a multitude of data points:
    • Compare jogging in the 80s with jogging in 2011. Whereas once we may have had a Sony Walkman and a Casio watch with an illuminated screen to track time, today we can track our steps, average speed, route, effort (calories burned), top speed, distance and time. We have become mobile data points. Phones can track sleep patterns too [and Runkeeper has established a health graph of interconnected services].
  • Mobile is not the future, it is now. Not quite as now as in Africa but it is mainstream:
    • According to Morgan Stanley, last year there were 670m 3G subscriptions (up 37% year on year).
    • More mobile devices are connected to Wi-Fi than PCs.
    • Smartphones will outsell PCs in 2012 – we are approaching an inflection point.
    • There will be more mobile Internet than wired by 2015.
    • And if you want a signal as to the way in which things are heading, the world’s biggest PC maker may stop selling PCs.
    • When we look at mobile payments PayPal is processing $10m in transactions each day, with $3bn from mobile this year (globally) – and they expect that to double next year. Customers spent $1bn at Amazon using their mobiles in the past year (and that’s pre-Kindle Fire).
  • The question of “what is mobile?” generated some discussion. Jon suggested it’s phones and tablets, not laptops, and that the operating system is a good indicator. Other suggestions were that it should be anything with a SIM; that it’s to do with where you use it, not what it is; or that it’s about whether the device fits in a pocket.
  • Moving on to QR codes, Jon highlighted shopping in the Korean subway and interaction with posters [I saw an example where rail passengers could scan for the appropriate timetable for their route]. Whilst it’s true that the application is a barrier to entry, 14m Americans scanned a barcode in July 2011.
  • Mobile is changing B2C marketing, with multi-tasking users using a smartphone whilst they are doing something else (72%), including listening to music (44%), watching TV (33%), using the Internet (29%) or playing games (27%). After searching for a business on the phone, 77% call or visit and 44% purchase in store or online. So what does this mean?
    • We are always on, always accessible.
    • We multi-task – so include a mobile call to action in advertisements.
    • We’re in a hurry – so sort out the mobile flow (ensure websites are optimised for mobile devices – 76% are not)!
    • We are ready to take action – so make it easy for us to do so!
    • Think about entry (search results, barcode, email banner link); landing (results, mobile pages and search ads) and call to action (call, click, download, access map or directions).
  • On the B2B front, executives use up to 4 devices (laptop, company Blackberry, iPad, personal smartphone):
    • 55% use their mobile as a primary device with 80% preferring to access work email on the go.
    • We are ready to take action, to download and run B2B apps, click mobile ads, and to make purchases.
    • 200m people watch mobile videos on YouTube each day and 75% of executives watch work-related videos and share them.
  • So, how is the world of mobile changing? Whilst we can’t predict 10 years ahead, Jon highlighted some technologies that are already here but not mainstream:
    • Near Field Communications (NFC): for data sharing, payments, device pairing, and advertising. Providing a communications platform with a simple tap, NFC is not as clumsy as a QR code and the chips are cheap, so we’re likely to see NFC widely adopted. PayPal have a video demonstrating the use of NFC for sending money.
    • HTML 5 arguably makes the web what it should be:
      • Application quality improves in a browser.
      • No more plugins (Flash, Silverlight, etc.).
      • More freedom and independence for publishers/developers.
      • 30% faster than Flash.
      • Improved profitability for publishers.
      • Easier compatibility, so quicker releases.
    • The cloud [which shouldn’t be marketed to consumers – it’s a B2B concept!] is changing our views on ownership of media [e.g. Spotify]. Storage is less important, connectivity more. Ordinary items can connect using the same technology.
    • Whilst it’s a bit early to be making predictions about its success, or otherwise, the arrival of the Amazon Kindle Fire looks like it will bring the first real competitor to Apple’s iPad ecosystem and Apple, Amazon and Google all have investments made (with more to make, potentially) in their content networks.

As ever, thanks to all at Digital Surrey for allowing me to attend – it always seems cheeky for a guy from the Buckinghamshire/Northamptonshire borders to visit a networking group for business people in Surrey but you make me so welcome! Thanks also to Jon, for letting me blog about his presentation contents. The next couple of events sound great too – watch the Digital Surrey website for details.

[Updated 3 October 2011:  to include Jon’s slides and link to his post]
[Updated 5 December 2011: to include the video of Jon’s talk]

From snapshots to social media – the changing picture of photography (@davidfrohlich at #digitalsurrey)

This content is 13 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

My visits to Surrey seem to be getting more frequent… earlier tonight I was in Reigate, at Canon’s UK headquarters for another great Digital Surrey talk.

The guest speaker was Professor David Frohlich (@davidfrohlich) from the University of Surrey Digital World Research Centre, who spoke about the changing picture of photography and the relationship between snapshots and social media, three eras of domestic photography, the birth and death of the album and lessons for social media innovation.

I often comment that I have little time for photography these days and all I do is “take snapshots of the kids” but my wife disagrees – she’s far less critical of my work and says I take some good pictures. It was interesting to see a definition of a snapshot though, with its origins in 1860s hunting and “shooting from the hip” (without careful aim!). Later it became “an amateur photograph” so I guess yes, I do mainly take snapshots of the kids!

Professor Frohlich spoke of three values of snapshots (from research by Richard Chalfen in 1987 and Christopher Musello in 1979):

  • Identity.
  • Memory (triggers – not necessarily of when the photograph was taken but of events around that time).
  • Communication.

He then looked at a definition of social media (i.e. a medium for social interaction) and suggested that photographs were an early form of social media (since integrated into newer forms)!

Another element to consider is that of innovation and, using Philip Anderson and Michael L. Tushman’s 1990 theory as an example, he described how old technological paths hit disruption, there’s then an era of fermentation (i.e. discontinuous development) before a dominant design appears and things stabilise again. In Geoff Mulgan’s 2007 Process of Social Innovation, it’s simply described as new ideas that work, or changing practice (i.e. everyday behaviour).

This led to the discussion of three eras of domestic photography. Following the invention of photography (1830-1840) we saw:

  1. The portrait path [plate images] (1839-1888) including cartes-de-visite (1854-1870)
  2. The Kodak path [roll film] (1888-1990), from the Kodak No. 1 camera in 1888, through the first Polaroid camera (1947) and colour film cartridges (1963), disrupted by the birth of electronic still video photography (1980-1990)
  3. The digital path (from 1990)

What we find is that the three values of snapshots overlay this perfectly (although the digital era also has elements of identity, it is mainly about communication).

Whilst the inventors of the photograph are known (Fox Talbot’s Calotype/Talbotype and Daguerre’s Daguerreotype were both patented in 1839), it’s less well known who invented the album.

Professor Frohlich explained that the album came into being after people swapped cartes-de-visite (just like today’s photographic business cards!), which became popular around 1850 as a standard portrait sized at 2.5″ x 4″. These cards could be of individuals, or even famous people (Abraham Lincoln, or Queen Victoria) and, in 1854, Disderi’s camera allowed mass production of images with several on a single sheet of paper. By 1860, albums had been created to store these cards – a development from an earlier pastime of collecting autographs – and these albums were effectively filled with images of family, people who visited and famous people – just as Facebook is today!

The Kodak era commenced after George Eastman’s patent was awarded on 4 September 1888 for a personalised camera which was more accessible, less complex than portrait cameras, and marketed to women around the concept of the Kodak family album. Filled with images of “high days and holidays” – achievements, celebrations and vacations – these were the albums that most of us know (and some of us still maintain) and the concept lasted for the next century (arguably it’s still in existence today, although increasingly marginalised).

Whilst there were some threats (like Polaroid images) they never quite changed the dominant path of photography. Later, as people became more affluent, there were more prints and people built up private archives with many albums and loose photographs (stored in cupboards – just as many of my family’s are in our loft!).

As photography met ICT infrastructure, the things that we could do with photography expanded, but things also became more complex, with a mesh involving PCs, printers and digital cameras. Whilst some manufacturers cut out the requirement for a computer (with cameras communicating directly with printers), there were two inventions that really changed things: the camera phone and the Internet:

  • Camera phones were already communications-centric (from the phone element), creating a new type of content that was more about communication than storing memories. In 2002, Turo-Kimmo Lehtonen, Ilpo Koskinen and Esko Kurvinen studied the use of mobile digital pictures, not as images for an album but as images to say “look where I am”. Whilst technologies such as MMS were not used as much as companies like Nokia expected [largely due to transmission costs imposed by networks], we did see an explosion in online sharing of images.
  • Now we have semi-public sharing, with our friends on Facebook (Google+, etc.) and even wider distribution on Flickr. In addition, photographs have become multimedia objects – Professor Frohlich experimented with adding several types of audio to still images in 2004 as digital storytelling.

By 2008, Abigail Durrant was researching photographic displays and intergenerational relationships at home. She looked at a variety of display devices but, critically, found that there was a requirement for some kind of agreement as to what could be displayed where (some kind of meta rules for display).

Looking to the future, there are many developments taking place that move beyond the album and on to the archive. Nowadays we have home media collections – could we end up browsing beautiful ePaper books that access our libraries? Could we even see the day when photographic images have a “birthday” and prompt us to remember things (e.g. do you remember when this image was taken, 3 years ago today?)

Professor Frohlich finished up with some lessons for social media innovation:

  • Innovation results from the interaction of four factors: practice; technology; business; and design.
  • Business positioning and social shaping are as important to innovation as technology and its design.
  • Social media evolve over long periods of time (so don’t give up if something doesn’t happen quickly).
  • Features change faster than practices and values (social networking is a partial return to identity – e.g. tagging oneself – and not just about communications).
  • Some ideas come around again (like the stereograph developing into 3D cinema).
  • Infrastructure and standards are increasingly key to success (for example, a standard image size).

I do admit to being in admiration of the Digital Surrey team for organising these events – in my three visits I’ve seen some great speakers. Hopefully, I’ve covered the main points from this event but Andy Piper (@andypiper) sums it up for me in a single tweet:

 

Social Media, the BBC and Jon Jacob (@thoroughlygood at #digitalsurrey)

This content is 13 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Last month I travelled down to Farnham to see Michael Wu’s talk on the Science of Gamification at Digital Surrey. Despite a hellish journey home*, I enjoyed the evening and met some great people, so I decided to come back again last night for this month’s talk. I may feel like an interloper from “analogue North Bucks” – and it would be fair to ask why I’m at an event for networking amongst the Surrey digerati – but my first two experiences of Digital Surrey have been great, so it looks like I could become a regular, if they’ll have me!

Last night’s talk was from Jon Jacob (@thoroughlygood), a BBC writer, journalist and producer – who was at pains to point out he was speaking on behalf of himself, and not the BBC. Actually, Jon has a post about his own performance, which is worth a read.

I took a lot of notes in his talk, which included his reading test on LBC whilst being constantly heckled by Sandi Toksvig, but I think it was best summarised with these points:

  • Jon has used and shamelessly exploited social media to build a “brand” and pursue a career.
  • Social media is at risk of being taken over by dangerous forces who don’t understand it. Many of us like using it, or tolerate it, but more and more people are using social media, including groups that don’t “get it”. Early adopters need to keep an eye out for:
    • Protest-driven people who know technology, bring together armies of geeks and put together massive project management teams to deliver projects on time and on budget.
    • People with a little bit of information – they learn how to use Twitter on a Tuesday afternoon and set up as “social media experts” on Wednesday.
  • Social media is a conversation to tap into for stories and sources. More fundamentally, it’s a transaction between the author and their own audience. If we post something on Facebook, implicitly we want attention: if we deny it, we’re liars! It’s the same for Twitter – it’s about the actor and the audience – not about how large the audience is…
  • If we listen to a radio programme and don’t like it then we won’t listen again… it’s the same for TV… if it’s a bit tired we’ll go elsewhere. If that’s how it works for radio and TV, surely it must be the same for social media?
  • It doesn’t matter how many followers you have, the focus is about copy/editorial, not the medium.
  • The secret to engaging copy is that the personality flows through. Be the same person on the medium and in person. Tap into joy rather than avoid it. Exploit everything about yourself in a good way and turn it into something (on a personal level or a corporate level).
  • Social media is nothing more than a distribution method, just as TV and radio are.
  • The thing that excites Jon is coming up with ideas and doing things. Maybe people have ideas and feel a bit frightened. Maybe they have ideas and “marketing” didn’t like something. Clearly there are certain laws to follow but it’s actually quite difficult to be that naughty. It’s hard to bring down governments!
  • We need to tap into people with ideas. Don’t just ask them to write a blog post, but inspire them, create a delicate ecosystem, get people enthused. That can’t be bottled or put in a book, but we’re missing a trick if we’re selling something and have teams of copywriters – maybe we need to break out of our boundaries and do something different.

By the way, I found Jon’s talk to be completely engaging (thoroughly good, one might say). I saw some negative comments and sure, maybe he went off in a few seemingly random directions, but at all times I was completely switched on to what he was saying. There aren’t too many presentations where I can say that!

*OK, so “hellish” is a slight exaggeration but the Highways Agency did close 5 out of the 6 lanes on the M1 northbound where the M25 filters in, at around 10pm, to lean a ladder up against an overhead gantry. I’m sure the resulting queues were just for their own amusement.

The science of gamification (@mich8elwu at #digitalsurrey)

This content is 13 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Gamification.

Gam-if-ic-a-tion.

Based on the number of analysts and “people who work in digital” I’ve seen commenting on the topic this year, “gamification” has to be the buzzword of 2011. So when I saw that Digital Surrey (#digitalsurrey) were running an event on “The Science of Gamification”, I was very interested to make the journey down to Farnham and see what it’s all about.

The speaker was Michael Wu (@mich8elwu) and I was pleased to see that the emphasis was very much on science, rather than the “marketing fluff” that is threatening to hijack the term.  Michael’s slides are embedded below but I’ll elaborate on some of his points in this post.

  • Starting off with the terminology, Michael talked about how people love to play games and hate to work – but by taking gameplay elements and applying them to work, education, or exercise, we can make them more rewarding.
  • Game mechanics are a system of principles/mechanisms/rules that govern a reward system with a predictable outcome.
  • The trouble is that people adapt, and game mechanics become less effective – so we look to game dynamics – the temporal evolution and patterns of both the game and the players that make a gamified activity more enjoyable.
  • These game dynamics are created by joining game mechanics (combining and cascading).
  • Game theory is a branch of mathematics and is nothing to do with gamification!
  • The Fogg Behaviour Model looks at those factors that influence human behaviour:
    • Motivation – what we want to do.
    • Ability – what we can do.
    • Trigger – what we’re told to do.
  • When all three of these converge, we have action – the key is to increase the motivation and ability, then trigger at an appropriate point (there’s a minimal code sketch of this convergence at the end of this list). There are many trajectories to reach the trigger (some have motivation but need to develop ability – more often we have some ability but need to develop motivation – but there is always an activation threshold over which we must be driven before the trigger takes effect).
  • Abraham Maslow’s Hierarchy of Needs is an often-quoted piece of research and Michael Wu draws comparisons between Maslow’s deficiency needs (physical, safety, social/belonging and esteem) and game mechanics/dynamics. At the top of the hierarchy is self-actualisation, with many meta-motivators for people to act.
  • Dan Pink’s Drive discusses intrinsic motivators of autonomy, mastery and purpose leading to better performance and personal satisfaction.  The RSA video featuring Dan Pink talking about what motivates us wasn’t used in Michael Wu’s talk, but it’s worthy of inclusion here anyway:

  • In their research, John Watson and B.F. Skinner looked at how humans learn and are conditioned. A point system can act as a motivator but the points themselves are not inherently rewarding – their proper use (a reward schedule) is critical.
  • Reward schedules include fixed interval; fixed ratio; variable interval; and variable ratio – each can be applied differently for different types of behaviour (i.e. driving activity towards a deadline; training; reinforcing established behaviours; and maintaining behaviour). There’s a tiny sketch of the ratio schedules below, after the speed-limit lottery example.
  • Mihaly Csikszentmihalyi is famous for his theories on flow: an optimal state of intrinsic motivation where one forgets about their physical feelings (e.g. hunger), the passage of time, and ego; balancing skills with the complexity of a challenge.
  • People love control, hate boredom, are aroused by new challenges but get anxious if a task is too difficult (or too easy) and work is necessary to balance challenges with skills to achieve a state of flow. In reality, this is a tricky balance.
  • Having looked at motivation, Michael Wu spoke of the two perspectives of ability: the user perspective of ability (reality) and the task perspective of simplicity (perceptual).
  • To push a “user” beyond their activation threshold there is a hard way (increase ability by motivating them to train and practice) or an easy way (increase the task’s perceived simplicity or the user’s perceived ability).
  • Simplicity relies on resources and simple tasks cannot use resources that we don’t have.  Simplicity is a measure of access to three categories of resource at the time when a task is to be performed: effort (physical or mental); scarce resources (time, money, authority/permission, attention) and adaptability (capacity to break norms such as personal routines, social, behavioural or cultural norms).
  • Simplicity is dependent upon the access that individuals have to resources, as well as time and context – i.e. resources can become inaccessible (e.g. if someone is busy doing something else). Resources are traded off to achieve simplicity (motivation and ability can also be traded).
  • A task is perceived to be simple if it can be completed with fewer resources than we expect (i.e. we expect it to be harder) and some game mechanics are designed to simplify tasks.
  • Triggers are prompts that tell a user to carry out the target behaviour right now. The user must be aware of the trigger and understand what it means. Triggers are necessary because we may not be aware of our abilities, may be hesitant (questioning motivation) or may be distracted (engaged in another activity).
  • Different types of trigger may be used, depending on behaviour. For example, a spark trigger is built in to the motivational mechanism; a facilitator highlights simplicity or progress; and signals are used as reminders when there is sufficient motivation and no requirement to simplify a task.
  • Triggers are all about timing, and Richard Bartle’s personality types show which are the most effective triggers. Killers are highly competitive and need to be challenged; socialisers are triggered by seeing something that their friends are doing; achievers may be sparked by a status increase; and explorers are triggered by calls on their unique skills, without any time pressure. Examples of poorly timed triggers include pop-up adverts and spam email.
  • So gamification is about design to drive action: providing feedback (positive, or less effectively, negative); increasing true or perceived ability; and placing triggers in the behavioural trajectory of motivated players where they feel able to react.
  • If the desired behaviour is not performed, we need to check: are they triggered? Do they have the ability (is the action simple enough)? Are they motivated?
  • There is a moral hazard to avoid though – what happens if points (rather than desired behaviour) become the motivator and then the points/perks are withdrawn?  A case study of this is Gap’s attempt to gamify store check-ins on Facebook Places with a free jeans giveaway. Once the reward had gone, people stopped checking in.
  • More effective was a Fun Theory experiment to reduce road speeds by associating it with a lottery (in conjunction with Volkswagen). People driving below the speed limit were photographed and entered into a lottery to win money from those who were caught speeding (and fined).
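
That lottery is, in effect, the variable-ratio schedule from the list above applied to driving. As a purely illustrative sketch (mine, not Michael Wu’s – the probabilities and counts are invented), the difference between the ratio schedules comes down to what decides the payout:

    import random

    # Illustrative sketch of two reward schedules (invented numbers).
    # Variable ratio: each action pays out with some probability, so the
    # number of actions between rewards varies - the pattern that best
    # maintains behaviour (and the one behind lotteries and slot machines).
    def variable_ratio_reward(p_reward=0.2):
        return random.random() < p_reward

    # Fixed ratio: every nth action pays out - useful for training, but
    # behaviour tends to pause right after each reward.
    def fixed_ratio_reward(action_count, n=5):
        return action_count > 0 and action_count % n == 0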

  • Michael Wu warns that gamification shouldn’t be taken too literally though: in another example, a company tried incentivising sales executives to record leads with an iPad/iPhone golf game. They thought it would be fun, and therefore motivational, but it actually reduced the ability to perform (playing a game to record a lead) and there was no true convergence of the three factors needed to influence behaviour.
  • In summary:
    • Gamification is about driving players above the activation threshold by motivating them (with positive feedback), increasing their ability (or perceived ability) and then applying the proper trigger at the right time.
    • The temporal convergence of motivation, ability and trigger is why gamification is able to manipulate human behaviour.
    • There are moral hazards to avoid (good games must adapt and evolve with players to bring them into a state of flow).
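
To make that temporal convergence concrete, here’s a minimal sketch of the Fogg model’s logic (my own illustration, not from Michael’s slides; the 0-1 scores and the threshold value are invented – the model itself only says that motivation, ability and a well-timed trigger must converge):

    # Illustrative sketch of the Fogg Behaviour Model (B = MAT).
    # The scores and threshold are invented for illustration only.
    def behaviour_occurs(motivation, ability, trigger_fired,
                         activation_threshold=0.5):
        if not trigger_fired:
            return False  # without a well-timed trigger, nothing happens
        # Motivation and ability trade off: high ability can offset low
        # motivation (and vice versa), but together they must clear the
        # activation threshold.
        return motivation * ability >= activation_threshold

In these terms, game mechanics are ways of nudging motivation or (perceived) ability upwards before the trigger fires.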

I really enjoyed my evening at Digital Surrey – I met some great people and Michael Wu’s talk was fascinating. And then, just to prove that this really is a hot topic, The Fantastic Tavern (#tftlondon) announced today that their next meeting will also be taking a look at gamification…

Further reading/information

[Update 23:12 – added further links in the text]