Category: Technology

  • The science of gamification (@mich8elwu at #digitalsurrey)

    Gamification.

    Gam-if-ic-a-tion.

    Based on the number of analysts and “people who work in digital” I’ve seen commenting on the topic this year, “gamification” has to be the buzzword of 2011. So when I saw that Digital Surrey (#digitalsurrey) were running an event on “The Science of Gamification”, I was very interested to make the journey down to Farnham and see what it’s all about.

    The speaker was Michael Wu (@mich8elwu) and I was pleased to see that the emphasis was very much on science, rather than the “marketing fluff” that is threatening to hijack the term.  Michael’s slides are embedded below but I’ll elaborate on some of his points in this post.

    • Starting off with the terminology, Michael talked about how people love to play games and hate to work – but by taking gameplay elements and applying them to work, education, or exercise, we can make them more rewarding.
    • Game mechanics are about a system of principles/mechanisms/rules that govern a system of reward with a predictable outcome.
    • The trouble is that people adapt, and game mechanics become less effective – so we look to game dynamics – the temporal evolution and patterns of both the game and the players that make a gamified activity more enjoyable.
    • These game dynamics are created by joining game mechanics (combining and cascading them).
    • Game theory is a branch of mathematics and is nothing to do with gamification!
    • The Fogg Behaviour Model looks at those factors that influence human behaviour:
      • Motivation – what we want to do.
      • Ability – what we can do.
      • Trigger – what we’re told to do.
    • When all three of these converge, we have action – the key is to increase motivation and ability, then trigger at an appropriate point. There are many trajectories to reach the trigger (some people have the motivation but need to develop ability; more often we have some ability but need to develop motivation) but there is always an activation threshold over which we must be driven before the trigger takes effect.
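    The convergence described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the multiplicative combination of motivation and ability, and the threshold value, are my own assumptions for the sake of the example, not part of Fogg’s model.

```python
def behaviour_occurs(motivation, ability, trigger_fired,
                     activation_threshold=1.0):
    # Sketch of the Fogg Behaviour Model: action happens only when
    # motivation and ability together clear the activation threshold
    # at the same moment a trigger fires. The multiplicative combination
    # and the threshold value are illustrative assumptions.
    return trigger_fired and (motivation * ability) >= activation_threshold

# High motivation but low ability: the trigger fires too early.
assert not behaviour_occurs(motivation=2.0, ability=0.3, trigger_fired=True)

# Both factors above the threshold, but no trigger: still no action.
assert not behaviour_occurs(motivation=2.0, ability=0.8, trigger_fired=False)

# Temporal convergence of all three factors: the behaviour occurs.
assert behaviour_occurs(motivation=2.0, ability=0.8, trigger_fired=True)
```

    The point of the sketch is that no single factor is sufficient: raising motivation or ability alone does nothing until the trigger arrives, and a trigger is wasted on someone below the threshold.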
    • Abraham Maslow’s Hierarchy of Needs is an often-quoted piece of research and Michael Wu draws comparisons between Maslow’s deficiency needs (physical, safety, social/belonging and esteem) and game mechanics/dynamics. At the top of the hierarchy is self-actualisation, with many meta-motivators for people to act.
    • Dan Pink’s Drive discusses intrinsic motivators of autonomy, mastery and purpose leading to better performance and personal satisfaction.  The RSA video featuring Dan Pink talking about what motivates us wasn’t used in Michael Wu’s talk, but it’s worthy of inclusion here anyway:

    • In their research, John Watson and BF Skinner looked at how humans learn and are conditioned.  A point system can act as a motivator but the points themselves are not inherently rewarding – their proper use (a reward schedule) is critical.
    • Reward schedules include fixed interval; fixed ratio; variable interval; and variable ratio – each can be applied to a different type of behaviour (i.e. driving activity towards a deadline; training; reinforcing established behaviours; and maintaining behaviour, respectively).
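    The four schedules can be sketched as simple predicates that decide whether a given response earns a reward. The interval lengths and ratios below are arbitrary assumptions, chosen purely to make the shape of each schedule concrete:

```python
import random

def fixed_interval(seconds_since_last_reward, interval=60):
    # Reward the first response once a fixed time has elapsed
    # (drives activity towards a deadline).
    return seconds_since_last_reward >= interval

def fixed_ratio(responses_since_last_reward, ratio=10):
    # Reward every nth response (useful when training a behaviour).
    return responses_since_last_reward >= ratio

def variable_interval(seconds_since_last_reward, mean_interval=60, rng=random):
    # Reward after an unpredictable period around a mean
    # (reinforces established behaviours).
    return seconds_since_last_reward >= rng.uniform(0, 2 * mean_interval)

def variable_ratio(mean_ratio=10, rng=random):
    # Reward an unpredictable proportion of responses - the slot-machine
    # schedule, and the one most resistant to extinction
    # (maintains behaviour).
    return rng.random() < 1 / mean_ratio
```

    The key distinction is predictability: the fixed schedules let players learn exactly when the next reward is due, while the variable ones keep them responding because the next attempt might always pay off.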
    • Mihaly Csikszentmihalyi is famous for his theories on flow: an optimal state of intrinsic motivation where one forgets about their physical feelings (e.g. hunger), the passage of time, and ego; balancing skills with the complexity of a challenge.
    • People love control and hate boredom; we are aroused by new challenges but become anxious if a task is too difficult and bored if it is too easy – work is necessary to balance challenges with skills to achieve a state of flow. In reality, this is a tricky balance.
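    One way to picture the flow channel is as a band around the line where challenge matches skill. A hypothetical sketch (the band width, via the `tolerance` parameter, is an illustrative assumption, not a value from Csikszentmihalyi’s work):

```python
def flow_state(skill, challenge, tolerance=0.25):
    # Classify where a player sits relative to the flow channel,
    # modelled (purely illustratively) as the band where the challenge
    # stays within `tolerance` of the player's skill level.
    if challenge > skill * (1 + tolerance):
        return "anxiety"   # task too difficult for current skill
    if challenge < skill * (1 - tolerance):
        return "boredom"   # task too easy: no arousal
    return "flow"          # challenge roughly matches skill

assert flow_state(skill=1.0, challenge=2.0) == "anxiety"
assert flow_state(skill=1.0, challenge=0.5) == "boredom"
assert flow_state(skill=1.0, challenge=1.1) == "flow"
```

    Because skill grows as players practise, a game that keeps `challenge` static will slide everyone towards boredom – which is why good game dynamics ramp the difficulty over time.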
    • Having looked at motivation, Michael Wu spoke of the two perspectives of ability: the user perspective of ability (reality) and the task perspective of simplicity (perceptual).
    • To push a “user” beyond their activation threshold there is a hard way (increase ability by motivating them to train and practice) or an easy way (increase the task’s perceived simplicity or the user’s perceived ability).
    • Simplicity relies on resources and simple tasks cannot use resources that we don’t have.  Simplicity is a measure of access to three categories of resource at the time when a task is to be performed: effort (physical or mental); scarce resources (time, money, authority/permission, attention) and adaptability (capacity to break norms such as personal routines, social, behavioural or cultural norms).
    • Simplicity is dependent upon the access that individuals have to resources as well as time and context – i.e. resources can become inaccessible (e.g. if someone is busy doing something else). Resources are traded off to achieve simplicity (motivation and ability can also be traded).
    • A task is perceived to be simple if it can be completed with fewer resources than we expect (i.e. we expect it to be harder) – and some game mechanics are designed to simplify tasks.
    • Triggers are prompts that tell a user to carry out the target behaviour right now. The user must be aware of the trigger and understand what it means. Triggers are necessary because we may not be aware of our abilities, may be hesitant (questioning motivation) or may be distracted (engaged in another activity).
    • Different types of trigger may be used depending on behaviour. For example, a spark trigger is built in to the motivational mechanism; a facilitator highlights simplicity or progress; and signals are used as reminders when there is sufficient motivation and no requirement to simplify a task.
    • Triggers are all about timing, and Richard Bartle’s personality types show which are the most effective triggers. Killers are highly competitive and need to be challenged; socialisers are triggered by seeing something that their friends are doing; achievers may be sparked by a status increase; and explorers are triggered by calls on their unique skills, without any time pressure. Examples of poorly timed triggers include pop-up adverts and spam email.
    • So gamification is about design to drive action: providing feedback (positive, or less effectively, negative); increasing true or perceived ability; and placing triggers in the behavioural trajectory of motivated players where they feel able to react.
    • If the desired behaviour is not performed, we need to check: are they triggered? Do they have the ability (is the action simple enough)? Are they motivated?
    • There is a moral hazard to avoid though – what happens if points (rather than desired behaviour) become the motivator and then the points/perks are withdrawn?  A case study of this is Gap’s attempt to gamify store check-ins on Facebook Places with a free jeans giveaway. Once the reward had gone, people stopped checking in.
    • More effective was a Fun Theory experiment to reduce road speeds by associating it with a lottery (in conjunction with Volkswagen). People driving below the speed limit were photographed and entered into a lottery to win money from those who were caught speeding (and fined).

    • Michael Wu warns that gamification shouldn’t be taken too literally though: in another example, a company tried incentivising sales executives to record leads with an iPad/iPhone golf game. They thought it would be fun, and therefore motivational, but it actually reduced the ability to perform (playing a game in order to record a lead) and there was no true convergence of the three factors needed to influence behaviour.
    • In summary:
      • Gamification is about driving players above the activation threshold by motivating them (with positive feedback), increasing their ability (or perceived ability) and then applying the proper trigger at the right time.
      • The temporal convergence of motivation, ability and trigger is why gamification is able to manipulate human behaviour.
      • There are moral hazards to avoid (good games must adapt and evolve with players to bring them into a state of flow).

    I really enjoyed my evening at Digital Surrey – I met some great people and Michael Wu’s talk was fascinating. And then, just to prove that this really is a hot topic, The Fantastic Tavern (#tftlondon) announced today that their next meeting will also be taking a look at gamification.


  • Some thoughts on modern communications and avoiding the “time-Hoover”

    Last week I was reading an article by Shelley Portet looking at how poor productivity and rudeness are influenced by technology interruptions at work. As someone who’s just managed to escape from email jail yet again (actually, I’m on parole – my inbox may finally be at zero but I still have hundreds of items requiring action) I have to say that, despite all the best intentions, experience shows that I’m a repeat offender, a habitual email mis-manager – and email is just the tip of the proverbial iceberg.

    Nowadays email is just one of many forms of communication: there’s instant messaging; “web 2.0” features on intranet sites (blogs, wikis, discussion forums); our internal social networking platform; business and personal use of external social networks (Twitter, LinkedIn, Slideshare, YouTube, Facebook… the list goes on) – so how can we prepare our knowledge workers for dealing with this barrage of interruptions?

    There are various schools of thought on email management and I find that Merlin Mann’s Inbox Zero principles work well (see this video from Merlin Mann’s Google Tech Talk using these slides on action-based email – or flick through the Michael Reynolds version for the highlights), as long as one always manages to process to zero (that’s the tricky part that lands me back in email jail).

    The trouble is that Inbox Zero only deals with the manifestation of the problem, not the root cause: people. Why do we send these messages? And how do we act on them?

    A couple of colleagues have suggested recently that the trouble with email is that people confuse sending an email with completing an action – as if, somehow, the departure of the message from someone’s outbox on its way to someone else’s inbox implies a transfer of responsibility. Except that it doesn’t – there are many demands on my colleagues’ time and it’s up to me to ensure that we’re all working towards a common goal. I can help by making my expectations clear; I can think carefully before carbon copying or replying to all; I can make sure I’m brief and to the point (but not ambiguous) – but those are all items of email etiquette. They don’t actually help to reduce the volume of messages sent and received. Incidentally, I’m using email as an example here – many of the issues are common to any communications medium (from handwritten letters and typed memos through to social networking) but, ultimately, I’m trying to do one of three things:

    • Inform someone that something has happened, will soon happen, or otherwise communicate something on a need to know basis.
    • Request that someone takes an action on something.
    • Confirm something that has been agreed via another medium (perhaps a telephone call), often for audit purposes.

    I propose two courses of action, both of which involve setting the expectations of others:

    1. The first is to stop thinking of every message as requiring a response. Within my team at work, we have some unwritten rules: gratefulness is implied within the team (we don’t fill each other’s inboxes with messages that say “thank you”); carbon copy means “for information”; and single-line e-mails can be dealt with in the subject heading.
    2. The second can be applied far more widely: the concept of “service level agreements” for corporate communications. I don’t mean literally, of course, but regaining productivity has to be about controlling the interruptions. I suggest closing Outlook. Think of it as an email/calendar client – not the place in which to spend one’s day – and the “toast” that pops up each time a message arrives is a distraction. Even having the application open is a distraction. Dip in 3 times a day, 5 times a day, every hour, or however often is appropriate – but emails should not require nor expect an immediate response. Then there’s instant messaging: the name “instant” suggests the response time, but presence is a valuable indicator – if my presence is “busy”, then I probably am. Try to contact me if you like but don’t be surprised if I ignore it until a better time. Finally, social networking, which is both a great aid to influencing others and to keeping abreast of developments but can also be what my wife would call a “time-Hoover” – don’t even think that you can read every message; just dip in from time to time, join the conversation, then leave again.

    Ultimately, neither of these proposals will be successful without cultural change. This issue is not unique to any one company but the only way I can think of to change the actions and/or thoughts of others is to lead by example… starting today, I think I might give them a try.

    [This post originally appeared on the Fujitsu UK and Ireland CTO Blog.]

  • An alternative enterprise desktop

    Earlier this week, I participated in an online event that looked at the use of Linux (specifically Ubuntu) as an alternative to Microsoft Windows on the enterprise desktop.

    It seems that every year is touted as the year of Linux on the desktop – so why hasn’t it happened yet? Or maybe 2011 really is the year of Linux on the desktop and we’ll all be using Google Chrome OS soon. Somehow I don’t think so.

    You see, the trouble with any of the “operating system wars” arguments is that they miss the point entirely. There is a trilogy of people, process and technology at stake – and the operating system is just one small part of one of those elements. It’s the same when people start to compare desktop delivery methods – thick, thin, virtualised, whatever – it’s how you manage the desktop that counts.

    From an end user perspective, many users don’t really care whether their PC runs Windows, Linux, or whatever-the-next-great-thing-is. What they require (and what the business requires – because salaries are expensive) is a system that is “usable”. Usability is in itself a subjective term, but it generally includes a large degree of familiarity – familiarity with the systems that they use outside work. Just look at the resistance to major user interface changes like the Microsoft Office “ribbon” – now think what happens when you change everything that users know about using a PC. End users also want something that works with everything else they use (i.e. an integrated experience, rather than jumping between disparate systems). And, for those who are motivated by the technology, they don’t want to feel that there is a two-tier system whereby some people get a fully-featured desktop experience and others get an old, cascaded PC, with a “light” operating system on it.

    From an IT management standpoint, we want to reduce costs. Not just hardware and software costs but the costs of support (people, process and technology). A “free” desktop operating system is just a very small part of the mix; supporting old hardware gets expensive; and the people costs associated with major infrastructure deployments (whether that’s a virtual desktop or a change of operating system) can be huge. Then there’s application compatibility – probably the most significant headache in any transformation. Yes, there is room for a solution that is “fit for purpose” and that may not be the same solution for everyone – but it does still need to be manageable – and it needs to meet all of the organisation’s requirements from a governance, risk and compliance perspective.

    Even so, the days of allocating a Windows PC to everyone in an effort to standardise every single desktop device are starting to draw to a close. IT consumerisation is bringing new pressures to the enterprise – not just new device classes but also a proliferation of operating system environments. Cloud services (for example consuming software as a service) are a potential enabler – helping to get over the hurdles of application compatibility by boiling everything down to the lowest common denominator (a browser). The cloud is undoubtedly here to stay and will certainly evolve but even SaaS is not as simple as it sounds, with multiple browser choices, extensions, plug-ins, etc. It seems that, time and time again, it’s the same old legacy applications (generally specified by business IT functions, not corporate IT) that make life difficult and prevent the CIO from achieving the utopia that they seek.

    2011 won’t be the year of Linux on the desktop – but it might just be the year when we stopped worrying about standardisation so much; the year when we accepted that one size might not fit all; and the year when we finally started to think about applications and data, rather than devices and operating systems.

    [This post originally appeared on the Fujitsu UK and Ireland CTO Blog.]

  • Changing the primary email address for Office 365 users

    In my recent post about configuring DNS for Office 365, I mentioned that Microsoft creates mailboxes in the form of user@subdomain.onmicrosoft.com.  I outlined the steps for adding so-called “vanity” domains, after which additional (proxy) email addresses can be specified but any outbound messages will still be sent from the onmicrosoft.com address (at least, that’s what’s used in the beta – I guess that may change later in the product’s lifecycle).

    It is possible to change the primary address for a user (e.g. I send mail using an address on the markwilson.co.uk domain) but it does require the use of PowerShell.  Time to roll up your sleeves and prepare to go geek!

    Connecting to Office 365 from Windows PowerShell

    I was using a Windows 7 PC so I didn’t need to update any components (nor do Windows Server 2008 R2 users); however Windows XP SP3, Server 2003 SP2, Server 2008 SP1 or SP2 and Vista SP1 users will need to make sure they have the appropriate versions of Windows PowerShell and Windows Remote Management installed.

    Once PowerShell v2 and WinRM 2.0 are installed, the steps to connect to Office 365 are as follows:

    Prompt for logon credentials and supply the appropriate username and password:

    $LiveCred = Get-Credential

    Create a new session:

    $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $LiveCred -Authentication Basic -AllowRedirection

    Import the session to the current PowerShell console:

    Import-PSSession $Session

    At this point, the session import failed for me because script execution was disabled on my machine. That was corrected using Set-ExecutionPolicy -ExecutionPolicy Unrestricted (although that’s not a great idea – a less permissive policy such as RemoteSigned would be a better choice) – I also had to run PowerShell as an administrator to successfully apply that command.

    Once scripts were enabled, I was able to import the session.

    List the current mailbox addresses

    It’s possible that a mailbox may have a number of proxy addresses already assigned, so this is the code that I used to list them:

    $Mailbox = Get-Mailbox -Identity Mark-Wilson
    $Mailbox.EmailAddresses

    If you want to format the list of mailboxes as a single comma-separated line, then this might help:

    ForEach ($i in $Mailbox.EmailAddresses) {Write-Host $i -NoNewline "`b, "}

    (the `b is a backspace escape character.)

    Set the primary email address

    The primary email address is shown using an upper case SMTP: prefix whereas proxy addresses use a lower case smtp: prefix.

    To change the primary email address, it’s necessary to reset all addresses on the mailbox with the Set-Mailbox cmdlet.  This is where some copying/pasting of the output from the previous command may help:

    Set-Mailbox Mark-Wilson -EmailAddresses SMTP:mark@markwilson.co.uk,smtp:mark-wilson@markwilson.onmicrosoft.com,smtp:mark@markwilson.it

    Disconnect the session from Office 365

    Once all changes have been made, it’s good practice to break down the session again:

    Remove-PSSession $Session

  • Configuring DNS for Exchange Online in Office 365

    Readers who follow me on Twitter (@markwilsonit) may have noticed that I was in a mild state of panic last night when I managed to destroy the DNS for markwilson.co.uk.  They might also have seen this website disappear for a few hours until I managed to get things up and running again. So, what was I doing?

    I’ve been using Google Apps for a couple of years now but I’ve never really liked it – Docs lacks functionality that I have become used to in Microsoft Office and Mail, though powerful, has a pretty poor user interface (a subjective view of course – I know some people love it). When Microsoft announced Office 365 I was keen to get on the beta, and I was fortunate enough to be accepted early in the programme. Unfortunately, at that time, the small business (P1) plan didn’t allow the use of “vanity domains” (what exactly is vain about having your own domain name? I call it professionalism!) so I waited until I was accepted onto the enterprise (E3) beta. Then I realised that moving my mail to another platform was not a trivial exercise and, by the time I got around to it, several weeks had gone by – and it is now possible to have vanity domains on a small business plan!

    Anyway, I digress: migrating to Office 365, how was it? Well, first up, I should highlight that the DNS issues I had were nothing to do with Microsoft – and, without those issues, everything would have been pretty simple actually.

    Microsoft provides a portal to administer Office 365 accounts and this also allows access to the Exchange Online, Lync Online and SharePoint Online components.  In that regard, it’s not dissimilar to Google Apps – just a lot more pleasant to use. So far, I’ve concentrated on the Exchange Online and Outlook Web App components – I’ll probably blog about some of the other Office 365 components as I start to use them.

    The e-mail address that Microsoft gave me for my initial mailbox is in the form of user@subdomain.onmicrosoft.com. That’s not much use to me, so I needed to add a domain to the account which involves adding the domain, verifying it (by placing a CNAME record in the DNS for the appropriate domain – using a code provided by Microsoft, resolving to ps.microsoftonline.com.) and then, once verified, configuring the appropriate DNS records. In my case that’s:

    markwilson.co.uk. 3600 IN MX 0 markwilson-co-uk.mail.eo.outlook.com.
    autodiscover 3600 IN CNAME autodiscover.outlook.com.
    markwilson.co.uk. 3600 IN TXT “v=spf1 include:outlook.com ~all”

    These are for Exchange – there are some additional records for Lync but they show how external domain names are represented inside Office 365.

    [Update 17 June 2011: The DNS entries for Lync are shown below]

    _sip._tls.markwilson.co.uk. 3600 IN SRV 100 1 443 sipdir.online.lync.com.
    _sipfederationtls._tcp.markwilson.co.uk. 3600 IN SRV 100 1 5061 sipfed.online.lync.com.

    The . on the end of the names and the quotes on the TXT record are important – without the . the name resolution will not work correctly and I think it was a lack of " " that messed up my DNS when I added the record using the cPanel WebHost Manager (WHM), although I haven’t confirmed that.
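    To illustrate the point about the trailing dot: a name without it is treated as relative to the zone origin, so the origin gets appended (the records below are illustrative, using the autodiscover entry from earlier):

```
; With the trailing dot, the target is an absolute name - resolves as intended:
autodiscover 3600 IN CNAME autodiscover.outlook.com.

; Without it, the zone origin is appended, producing the broken name
; autodiscover.outlook.com.markwilson.co.uk:
autodiscover 3600 IN CNAME autodiscover.outlook.com
```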

    With the domain configured, additional email addresses may be added to user accounts and, once DNS propagation has taken place, mail should start to flow.

    Before I sign off, there are a few pieces of advice to highlight:

    • After I got everything working on the Office 365 Enterprise (E3) plan, I realised that I’d be better off using the Small Business (P1) plan. This wasn’t a simple subscription choice (I hope it will be in the final product – at the time of writing Office 365 is still in beta) and it involved me removing my “vanity” domains from all user objects, distribution groups, contacts and aliases, then removing the domain from Office 365, and finally going through the process of adding it using a different Microsoft Online account.
    • Before making DNS changes, it’s worthwhile tuning DNS settings to reduce the time to live (TTL) to speed up the DNS propagation process by reducing the time that records are stored in others’ DNS caches.
    • Microsoft TechNet has some useful advice for checking DNS MX record configurations with nslookup.exe but Simon Bisson pointed me in the direction of the Microsoft Exchange Remote Connectivity Analyzer, which is a great resource for checking Exchange ActiveSync, Exchange Web Services and Office Outlook connectivity as well as inbound and outbound SMTP email.
    • Microsoft seems to have decided that, whilst enterprises can host their DNS externally, small businesses need to host their DNS on Microsoft’s name servers (and use a rather basic web interface to manage it).  I’m hoping that decision will change (and I’m led to believe that it’s still possible to host the DNS elsewhere, as long as the appropriate entries are added, although that is an unsupported scenario) – I’m trying that approach with another domain that I own and I may return to the topic in a future blog post.
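    The TTL advice above can be illustrated with a before/after pair of MX records (the mail host shown is a hypothetical placeholder, not my real configuration):

```
; Before: other resolvers may cache this answer for up to an hour.
markwilson.co.uk. 3600 IN MX 0 mail.example.net.

; A day or so before the cutover, lower the TTL so stale answers expire
; quickly; once the change has propagated, the TTL can be raised again.
markwilson.co.uk. 300 IN MX 0 mail.example.net.
```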

    Now I have my new mailbox up and running, I just need to work out how to shift 3GB of email from Google Apps to Exchange Online!

  • Changing text colours and fonts in a PowerPoint theme

    I spent a good chunk of yesterday afternoon working on a presentation that I need to deliver tomorrow.  We have a corporate presentation template but it only really covers the basics (background, standard fonts, etc.).  I wanted to change the colour of hyperlinks (because the default blue/purple, depending on whether the link has been clicked or not, did not fit well with the other colours on my slides) but I couldn’t work out how until I stumbled across a blog post from Springhouse Education and Consulting Services.

    It seems that the secret sauce is:

    1. Go to the Design tab
    2. In the top right of the Themes grouping, click on the Colors down arrow
    3. Select Create New Theme Colors located at the bottom of the list
    4. Select the desired color for the Hyperlink as well as the Followed Hyperlink

    [Springhouse Education and Consulting Services]

    Whilst I was messing around with font colours, I also added a new font to the theme (we have a corporate typeface that’s used in our external communications but not normally used internally). To do that:

    1. Go to the Design tab
    2. In the top right of the Themes grouping, click on the Fonts dropdown
    3. Select Create New Theme Fonts located at the bottom of the list
    4. Select the desired fonts for the Headings as well as the Body

    These instructions are for PowerPoint 2007 but I’m sure the process is similar in PowerPoint 2010.

  • Does Microsoft Kinect herald the start of a sensor revolution?

    Last week, Microsoft officially announced a software development kit for the Kinect sensor. Whilst there’s no commercial licensing model yet, that sounds like it’s the start of a journey to official support of gesture-based interaction with Windows PCs.

    There’s little doubt that Kinect, Microsoft’s natural user interface for the Xbox game console, has been phenomenally successful. It’s even been recognised as the fastest-selling consumer device on record by Guinness World Records. I even bought one for my family (and I’m not really a gamer) – but before we go predicting the potential business uses for this technology, it’s probably worth stopping and taking stock. Isn’t this really just another technology fad?

    Kinect is not the first new user interface for a computer – I’ve written so much about touch-screen interaction recently that even I’m bored of hearing about tablets! We can also interact with our computers using speech if we choose to – and the keyboard and mouse are still hanging on in there too (in a variety of forms). All of these technologies sound great, but they have to be applied at the right time: my iPad’s touch screen is great for flicking through photos, but an external keyboard is better for composing text; Kinect is a fantastic way to interact with games but, frankly, it’s pretty poor as a navigational tool.

    What we’re really seeing here is a proliferation of sensors: keyboard, mouse, trackpad, microphone, camera(s), GPS, compass, heart monitor – the list goes on. Kinect is really just an advanced, and very consumable, sensor.

    Interestingly, sensors typically start out as separate peripherals and, over time, become embedded into devices. The mouse and keyboard morphed into a trackpad and a (smaller) keyboard. Microphones and speakers were once external but are now built in to our personal computers. Our smartphones contain a wealth of sensors including GPS, cameras and more. Will we see Kinect built into PCs? Quite possibly – after all, it’s really a couple of depth sensors and a webcam!

    What’s really exciting is not Kinect per se but what it represents: a sensor revolution. Much has been written about the Internet of Things but imagine a dynamic network of sensors where the nodes can automatically handle re-routing of messages based on an awareness of the other nodes. Such networks could be quickly and easily deployed (perhaps even dropped from a plane) and would be highly resilient to accidental or deliberate damage because of their “self-healing” properties. Another example of sensor use could be in an agricultural scenario with sensors automatically monitoring the state of the soil, moisture, etc. and applying nutrients or water. We’re used to hearing about RFID tags in retail and logistics but those really are just the tip of the iceberg.

    Exciting times…

    [This post originally appeared on the Fujitsu UK and Ireland CTO Blog and was jointly authored with Ian Mitchell.]

  • Azure Connect – the missing link between on-premise and cloud

    Azure Connect offers a way to connect on-premise infrastructure with Windows Azure but it’s lacking functionality that may hinder adoption.

    While Microsoft is one of the most dominant players in client-server computing, until recently, its position in the cloud seemed uncertain. More recently, we’ve seen Microsoft lay out its stall with both Software as a Service (SaaS) products including Office 365 and Platform as a Service (PaaS) offerings such as Windows Azure joining its traditional portfolio of on-premise products for consumers, small businesses and enterprise customers alike.

    Whereas Amazon’s Elastic Compute Cloud (EC2) and Simple Storage Service (S3) offer virtualised Infrastructure as a Service (IaaS) and Salesforce.com is about consumption of Software as a Service (SaaS), Windows Azure fits somewhere in between. Azure offers compute and storage services, so that an organisation can take an existing application, wrap a service model around it and specify how many instances to run, how to persist data, etc.

    Microsoft also provides middleware to support claims based authentication and an application fabric that allows simplified connectivity between web services endpoints, negotiating firewalls using outbound connections and standard Internet protocols. In addition, there is a relational database component (SQL Azure), which exposes relational database services for cloud consumption, in addition to the standard Azure table storage.

    It all sounds great – but so far everything I’ve discussed runs on a public cloud service and not all applications can be moved in their entirety to the cloud.

    Sometimes it makes sense to move compute operations to the cloud and keep the data on-premise (more on that in a moment). Sometimes, it’s appropriate to build a data hub, with multiple business partners connecting to a data source in the cloud but with application components in a variety of locations.

    For European CIOs, information security, in particular data residency, is a real issue. I should highlight that I’m not a legal expert, but CIO Magazine recently reported how the Patriot Act potentially gives the United States authorities access to data hosted with US-based service providers – and selecting a European data centre won’t help.  That might make CIOs nervous about placing certain types of data in the cloud although they might consider a hybrid cloud solution.

    Azure already provides federated security, application layer connectivity (via AppFabric) and some options for SQL Azure data synchronisation (currently limited to synchronisation between Microsoft data centres, expanding later this year to include synchronisation with on-premise SQL Server) but the missing component has been the ability to connect Windows Azure with on-premise infrastructure and applications. Windows Azure Connect provides this missing piece of the jigsaw.

    Azure Connect is a new component for Windows Azure that provides secure network communications between compute instances in Azure and servers on premise (ie behind the corporate firewall). Using standard IP protocols (both TCP and UDP) it’s possible to take a web front end to the cloud and leave the SQL Server data on site, communicating over a virtual private network, secured with IPSec. In another scenario, a compute instance can be joined to an on-premise Active Directory domain so a cloud-based application can take advantage of single sign-on functionality. IT departments can also use Azure Connect for remote administration and troubleshooting of cloud-based compute instances.

    Currently in pre-release form, Microsoft is planning to make Azure Connect available during the first half of 2011. Whilst setup is relatively simple and requires no coding, Azure Connect is reliant on an agent running on the connected infrastructure (ie on each server that connects to Azure resources) in order to establish IPSec connectivity (a future version of Azure Connect will be able to take advantage of other VPN solutions). Once the agent is installed, the server automatically registers itself with the Azure Connect relay in the cloud and network policies are defined to manage connectivity. All that an administrator has to do is to enable Windows Azure roles for external connectivity via the service model; enable local computers to initiate an IPSec connection by installing the Azure Connect agent; define network policies and, in some circumstances, define appropriate outbound firewall rules on servers.

    The emphasis on simplicity is definitely an advantage, as many Azure operations seem to require developer knowledge whereas this is clearly targeted at Windows administrators. Along with automatic IPSec provisioning (so there’s no need for certificate servers), Azure Connect makes use of DNS so that there is no requirement to change application code (the same server names can be used when roles move between the on-premise infrastructure and Azure).

    For some organisations though, the presence of the Azure Connect agent may be seen as a security issue – after all, how many database servers are even Internet-connected? That’s not insurmountable but it’s not the only issue with Azure Connect.

    For example, connected servers need to run Windows Vista, 7, Server 2008, or Server 2008 R2 [a previous version of this story erroneously suggested that only Windows Server 2008 R2 was supported] and many organisations will be running their applications on older operating system releases. This means that there may be server upgrade costs to consider when integrating with the cloud – and it certainly rules out any heterogeneous environments.

    There’s also an issue with storage. Windows Azure’s basic compute and storage services can make use of table-based storage. Whilst SQL Azure is available for applications that require a relational database, not all applications have this requirement – and SQL Azure presents additional licensing costs as well as imposing additional architectural complexity. A significant number of cloud-based applications make use of table storage, or a combination of table storage and SQL Server – for those applications, creating a hybrid model for customers that rely on on-premise data storage may not be possible.

    For many enterprises, Azure Connect will be a useful tool in moving applications (or parts of applications) to the cloud. If Microsoft can overcome the product’s limitations, it could represent a huge step forward for the company’s cloud services in that it provides a real option for developing hybrid cloud solutions on the Microsoft stack – but there is still some way to go.

    [This post was originally written as an article for Cloud Pro.]

  • First signs of a tablet strategy at Microsoft

    I’ve been pretty critical of Microsoft’s tablet strategy. As recently as last October they didn’t appear to have one and Steve Ballmer publicly ridiculed customers using a competitor’s devices. Whenever I mentioned this, the ‘softies would switch into sales mode and say something like “oh but we’re the software company, we don’t make devices” to which I’d point out that they do have a mobile operating system (Windows Phone 7), and an application store, but that they don’t allow OEMs to use it on a tablet form factor.

    But it seems that things are changing in Redmond. Or in Reading at least.

    Ballmer got a kicking from the board (deservedly so) for his inability to develop Microsoft’s share of the mobile market and it seems that Redmond is open to ideas from elsewhere in the company to develop a compelling Windows-based tablet offering. A few days ago, I got the chance to sit down with one of the Slate vTeam in the UK subsidiary to discuss Microsoft’s tablet (they prefer “slate”) strategy and it seems that there is some progress being made.

    Whilst Windows 8 (or Windows vNext as Microsoft prefer to refer to it) was not up for discussion, Microsoft’s Jamie Burgess was happy to discuss the work that Microsoft is doing around slates that run Windows 7.  Ballmer alluded to work with OEMs in his “big buttons” speech and there are a number of devices hitting the market now which attempt to overcome the limitations of Microsoft’s platform. The biggest limitation is the poor touch interface provided by the operating system itself (with issues that are far more fundamental than just “big buttons”).  There seems little doubt that the next version of Windows will have better slate support but we won’t see that until at least 2012 – and what about the current crop of Windows 7-based devices?

    [At this point I need to declare a potential conflict of interest – I work for Fujitsu, although this is my personal blog and nothing written here should be interpreted as representing the views of my employer. For what it’s worth, I have been just as critical of Windows slates when talking to Fujitsu Product Managers but, based on a recent demonstration of a pre-production model, I do actually believe that they have done a good job with the Stylistic Q550, especially when considering the current state of Windows Touch]

    Need to do “something”

    Microsoft has realised that doing nothing about slates does not win market share – in fact it loses mind share – every iPad sold helps Apple to grow because people start using iTunes, then they buy into other parts of the Apple ecosystem (maybe a Mac), etc.

    Noting that every enterprise user is also a consumer, Microsoft believes enterprise slates will sneak back into the home, rather than consumer devices becoming commonplace in the enterprise. That sounds like marketing spin to me, but they do have a point that there is a big difference between a CIO who wants to use his iPad at work and that same CIO saying that he wants 50,000 of those devices deployed across the organisation.

    Maybe it was because I was talking to the UK subsidiary, rather than “Corp” but Microsoft actually seems to acknowledge that Apple is currently leading on tablet adoption. Given their current position in the market, Microsoft’s strategy is to leverage its strength from the PC marketplace – the partner ecosystem. Jamie Burgess told me how they are working to bring together Independent Software Vendors (ISVs), System Integrators (SIs) and device manufacturers (OEMs) to create “great applications” running on “great devices” and deployed by “great partners”, comparing this with the relatively low enterprise maturity of Apple and their resellers.

    Addressing enterprise readiness

    I could write a whole post on the issues that Google has (even if they don’t yet know it) with Android: device proliferation is a wonderful thing, until you have to code for the lowest common denominator (just ask Microsoft, with Windows Mobile – and, to some extent with Windows too!) and Google is now under attack for its lack of openness in an open-source product. But the big issue for the enterprise is security – and I have to agree with Microsoft on this: neither Apple nor Google seem to have got that covered. Here are some examples:

    • Encryption is only as strong as its weakest link – root access to devices (such as jailbroken iPhones) is pretty significant (6 minutes to break into an encrypted device) and Apple has shown time and time again that it is unable to address this, whilst Google sees this level of access to Android devices as part of its success model.
    • And what if I lose my mobile device? USB attached drives provide a great analogy in that encryption (e.g. Microsoft BitLocker) is a great insurance policy – you don’t think you really need it until a device goes missing and you realise that no-one can get into it anyway – then you breathe a big sigh of relief.

    After security we need to think about management and support:

    • Android 3 and iOS have limited support for device lockdown, whilst a Windows device has thousands of group policy settings. Sure, group policy is a nightmare in itself, but it is at least proven in the enterprise.
    • Then there’s remote support – I can take screenshots on my iPad, but I can’t capture video and send it to a support technician to show them an application issue that they are having trouble replicating – Windows 7’s problem steps recorder allows me to do this.
    • There is no support for multiple users, so I can’t lock a device down for end users, but open up access for administrators to support the device – or indeed allow a device to be shared between users in any way that provides accountability.

    Windows 7 has its problems too: it’s a general-purpose operating system that’s not designed to run on mobile hardware; it lacks the ability to instantly resume from standby; and touch support (particularly the soft keyboard) is terrible unless an application is written specifically to use touch. Even so, when you consider its suitability for enterprise use, it’s clear that Windows does have some advantages.

    Ironically, Microsoft also cites a lack of file system access as restricting the options for collaboration using an iOS device. Going back to the point about security only being as strong as the weakest link, I’d say that restricting access to the file system is a good thing (if only there weren’t the jailbreak issues at a lower level!). Admittedly, it does present some challenges for users but applications such as Dropbox help me over that as I can store data within the app, such as presentations for offline playback.

    The Windows Optimised Desktop

    At this point, Jamie came back to the Windows Optimised Desktop message – he sees Windows’ strength as:

    “The ability for any user to connect using any endpoint at any time of day to do their day job successfully but be managed, maintained and secured on the back end.”

    [Jamie Burgess: Partner Technology Advisor for Optimised Desktop, Microsoft UK]

    OK. That’s fine – but that doesn’t mean I need the same operating system and applications on all devices – just access to my data using common formats and appropriate apps. For example, I don’t need Microsoft Office on a device that is primarily used for content consumption – but I do need an app that can access my Microsoft Office data.  Public, private and hybrid clouds should provide the data access – and platform security measures should allow me to protect that data in transit and at rest.  Windows works (sort of) but it’s hardly optimal.

    At this point, I return to Windows Touch – even Microsoft acknowledges the fact that the Windows UI does not work with fat fingers (try touching the close button in the top-right corner of the screen) and some device manufacturers have had to offer both stylus and touch input (potentially useful) with their own skin on top of Windows. Microsoft won’t tell me what’s coming in Windows 8 but they do have a Windows Product Scout microsite that’s designed to help people find applications for their PC – including Apps for Slate PCs on the “featured this week” list. That’s a step towards matching apps with devices but it doesn’t answer the enterprise application store question – for that I think we will have to wait for Windows “vNext”. For 2011 at least, the message is that App-V can be used to deploy an application to Windows PCs and slates alike and to update it centrally (which is fine, if I have the necessary licensing arrangements to obtain App-V).

    Hidden costs? And are we really in the post-PC era?

    Looking at costs, I’ll start with the device – and the only Windows slate I’ve heard pricing for is around £700-800. That’s slightly more than a comparable iPad, but it also has some features that help secure the device for use with enterprise data (fingerprint reader, TPM chip, encrypted solid-state disk, etc.).

    Whilst there is undoubtedly a question to answer about supporting non-Microsoft devices too, the benefits of using a Windows slate hinge on it being a viable PC replacement.  I’m not sure that really is the case.

    I still need to license the same Windows applications (and security software, and management agents) that I use in the rest of the enterprise. I’ll admit that most enterprises already have Active Directory and systems management tools that are geared up to supporting a Windows device but I’m not convinced that the TCO is lower (most of my support calls are related to apps running on Windows or in a browser).

    An iPad needs a PC (or a Mac!) to sync with via iTunes and the enterprise deployment is a little, how can I put it? Primitive! (in that there are a number of constraints and hoops to jump through.) A BlackBerry PlayBook still needs a BlackBerry handset and I’m sure there are constraints with other platforms too. I really don’t believe that the post-PC era is here (yet) – for that we’ll need a little more innovation in the mobile device space. For now, that means that slates present additional cost and I’m far more likely to allow a consumer owned and supported device, for certain scenarios with appropriate risk mitigation, than I am to increase my own “desktop” problem.

    In conclusion

    I still believe that Windows Phone 7, with the addition of suitable enterprise controls for management and maintenance, would be a better slate solution. It’s interesting that, rather than playing a game of chicken and egg as Apple has with Jailbreakers, Microsoft worked with the guys who unlocked their platform, presumably to close the holes and secure the operating system. Allowing Windows Phone to run on a wider range of devices (based on a consistent platform specification, as the current smartphones are) would not present the issues of form factor that Windows Mobile and Android suffer from (too many variations of hardware capability) – in fact the best apps for iOS present themselves differently according to whether they are running on an iPhone or an iPad.

    So, is Microsoft dead in the tablet space? Probably not! Do they have a strategy? Quite possibly – what I’ve seen will help them through the period until Windows “vNext” availability, but as they’re not talking about what that platform will offer, it’s difficult to say whether their current strategy makes sense as anything more than a stopgap (although it is certainly intended as an on-ramp for Windows “vNext”). It seems to me that the need to protect Windows market share is, yet again, preventing the company from moving forward at the pace it needs to, but the first step to recovery is recognising that there is a problem – and they do at least seem to have taken that on board.

  • How George Lucas might describe the consumerisation of IT

    Last week, I spent a morning at an IDC briefing on the consumerisation of IT. It was a good session, but I can’t really blog about it as the information was copyrighted (although I am writing a white paper on the topic, and I’m sure there will be at least one blog post out of that…)

    One of David Bradshaw’s slides amused me though, and I asked him if I could reproduce the contents here (my annotations are in [ ]).

    A long time ago in a galaxy far, far away…

    …an evil empire that used an army of star-troopers to impose CRM systems that oppressed the users

    Actually, it was an army of suit-troopers… [called the IT department]

    Then a rebel army invented a CRM system that people actually enjoyed using [let’s say… salesforce.com]

    Maybe it was just the administrators that enjoyed it… but they really, really did like it. Perhaps the users just didn’t hate it…

    How could this possibly get past all the meanies in corporate IT who fiendishly made users work on systems they hated?

    A cunning plan – use the Internet!

    Our rebel heroes deployed it on the web so anyone could try it and buy a subscription!

    They also avoided the corporate meanies! [and eventually the army of suit-troopers had to embrace their decision]

    Does this sound familiar?