Useful Links: May 2011


A list of items I’ve come across recently that I found potentially useful, interesting, or just plain funny:

The science of gamification (@mich8elwu at #digitalsurrey)


Gamification.

Gam-if-ic-a-tion.

Based on the number of analysts and “people who work in digital” I’ve seen commenting on the topic this year, “gamification” has to be the buzzword of 2011. So when I saw that Digital Surrey (#digitalsurrey) were running an event on “The Science of Gamification”, I was very interested to make the journey down to Farnham and see what it’s all about.

The speaker was Michael Wu (@mich8elwu) and I was pleased to see that the emphasis was very much on science, rather than the “marketing fluff” that is threatening to hijack the term.  Michael’s slides are embedded below but I’ll elaborate on some of his points in this post.

  • Starting off with the terminology, Michael talked about how people love to play games and hate to work – but by taking gameplay elements and applying them to work, education, or exercise, we can make them more rewarding.
  • Game mechanics are the principles, mechanisms and rules that govern a system of reward with a predictable outcome.
  • The trouble is that people adapt, and game mechanics become less effective – so we look to game dynamics – the temporal evolution and patterns of both the game and the players that make a gamified activity more enjoyable.
  • These game dynamics are created by joining game mechanics (combining and cascading them).
  • Game theory is a branch of mathematics and is nothing to do with gamification!
  • The Fogg Behaviour Model looks at those factors that influence human behaviour:
    • Motivation – what we want to do.
    • Ability – what we can do.
    • Trigger – what we’re told to do.
  • When all three of these converge, we have action – the key is to increase the motivation and ability, then trigger at an appropriate point. There are many trajectories to reach the trigger (some have motivation but need to develop ability – more often we have some ability but need to develop motivation – but there is always an activation threshold over which we must be driven before the trigger takes effect).
  • Abraham Maslow’s Hierarchy of Needs is an often-quoted piece of research and Michael Wu draws comparisons between Maslow’s deficiency needs (physical, safety, social/belonging and esteem) and game mechanics/dynamics. At the top of the hierarchy is self-actualisation, with many meta-motivators for people to act.
  • Dan Pink’s Drive discusses intrinsic motivators of autonomy, mastery and purpose leading to better performance and personal satisfaction.  The RSA video featuring Dan Pink talking about what motivates us wasn’t used in Michael Wu’s talk, but it’s worthy of inclusion here anyway:

  • In their research, John Watson and BF Skinner looked at how humans learn and are conditioned.  A point system can act as a motivator but the points themselves are not inherently rewarding – their proper use (a reward schedule) is critical.
  • Reward schedules include fixed interval; fixed ratio; variable interval; and variable ratio – each can be applied differently for different types of behaviour (i.e. driving activity towards a deadline; training; reinforcing established behaviours; and maintaining behaviour).
  • Mihaly Csikszentmihalyi is famous for his theories on flow: an optimal state of intrinsic motivation where one forgets about their physical feelings (e.g. hunger), the passage of time, and ego; balancing skills with the complexity of a challenge.
  • People love control, hate boredom, are aroused by new challenges but get anxious if a task is too difficult (or too easy) and work is necessary to balance challenges with skills to achieve a state of flow. In reality, this is a tricky balance.
  • Having looked at motivation, Michael Wu spoke of the two perspectives of ability: the user perspective of ability (reality) and the task perspective of simplicity (perceptual).
  • To push a “user” beyond their activation threshold there is a hard way (increase ability by motivating them to train and practice) or an easy way (increase the task’s perceived simplicity or the user’s perceived ability).
  • Simplicity relies on resources and simple tasks cannot use resources that we don’t have.  Simplicity is a measure of access to three categories of resource at the time when a task is to be performed: effort (physical or mental); scarce resources (time, money, authority/permission, attention) and adaptability (capacity to break norms such as personal routines, social, behavioural or cultural norms).
  • Simplicity is dependent upon the access that individuals have to resources as well as time and context – i.e. resources can become inaccessible (e.g. if someone is busy doing something else). Resources are traded off to achieve simplicity (motivation and ability can also be traded).
  • A task is perceived to be simple if it can be completed with fewer resources than we expect (i.e. we expect it to be harder) and some game mechanics are designed to simplify tasks.
  • Triggers are prompts that tell a user to carry out the target behaviour right now. The user must be aware of the trigger and understand what it means. Triggers are necessary because we may not be aware of our abilities, may be hesitant (questioning motivation) or may be distracted (engaged in another activity).
  • Different types of triggers may be used depending on behaviour. For example, a spark trigger is built in to the motivational mechanism; a facilitator highlights simplicity or progress; and signals are used as reminders when there is sufficient motivation and no requirement to simplify the task.
  • Triggers are all about timing, and Richard Bartle’s personality types show which are the most effective triggers. Killers are highly competitive and need to be challenged; socialisers are triggered by seeing something that their friends are doing; achievers may be sparked by a status increase; and explorers are triggered by calls on their unique skills, without any time pressure. Examples of poorly timed triggers include pop-up adverts and spam email.
  • So gamification is about design to drive action: providing feedback (positive, or less effectively, negative); increasing true or perceived ability; and placing triggers in the behavioural trajectory of motivated players where they feel able to react.
  • If the desired behaviour is not performed, we need to check: are they triggered? Do they have the ability (is the action simple enough)? Are they motivated?
  • There is a moral hazard to avoid though – what happens if points (rather than desired behaviour) become the motivator and then the points/perks are withdrawn?  A case study of this is Gap’s attempt to gamify store check-ins on Facebook Places with a free jeans giveaway. Once the reward had gone, people stopped checking in.
  • More effective was a Fun Theory experiment (in conjunction with Volkswagen) to reduce road speeds by linking them to a lottery. People driving below the speed limit were photographed and entered into a lottery to win money from those who were caught speeding (and fined).

  • Michael Wu warns that gamification shouldn’t be taken too literally though: in another example, a company tried incentivising sales executives to record leads with an iPad/iPhone golf game. They thought it would be fun, and therefore motivational, but it actually reduced the ability to perform (playing a game to record a lead) and there was no true convergence of the three factors to influence behaviour.
  • In summary:
    • Gamification is about driving players above the activation threshold by motivating them (with positive feedback), increasing their ability (or perceived ability) and then applying the proper trigger at the right time.
    • The temporal convergence of motivation, ability and trigger is why gamification is able to manipulate human behaviour (see the toy sketch after this list).
    • There are moral hazards to avoid (good games must adapt and evolve with players to bring them into a state of flow).
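
To make that temporal convergence a little more concrete, here’s a toy sketch (my own illustration, not something from Michael Wu’s talk – the function, numbers and threshold are entirely arbitrary): motivation and ability have to jointly clear an activation threshold, and even then nothing happens until a trigger arrives.

function Test-Action {
    param(
        [double]$Motivation,          # 0..1 – how much the player wants to act
        [double]$Ability,             # 0..1 – how simple the task feels right now
        [bool]$Trigger,               # has a prompt just been received?
        [double]$Threshold = 0.5      # arbitrary activation threshold
    )
    return ($Trigger -and (($Motivation * $Ability) -gt $Threshold))
}

Test-Action -Motivation 0.9 -Ability 0.3 -Trigger $true    # False – motivated, but the task feels too hard
Test-Action -Motivation 0.9 -Ability 0.7 -Trigger $true    # True – all three factors converge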

I really enjoyed my evening at Digital Surrey – I met some great people and Michael Wu’s talk was fascinating. And then, just to prove that this really is a hot topic, The Fantastic Tavern (#tftlondon) announced today that their next meeting will also be taking a look at gamification.

Further reading/information

[Update 23:12 – added further links in the text]

 

Run Fatboy Run!


BUPA London 10,000
Was it just a coincidence that the Simon Pegg film, “Run Fatboy Run”, was shown on British TV this weekend? I think not! (Great film, by the way.)

For those who’ve missed my last “Fit At 40” update – yesterday was the Bupa London 10,000 – and I lined up with several thousand other competitors (including my friend Eileen Brown) to run the course from St James’ Park to the City and back.  This was my first 10K and I’d been training with 5 mile (8km) runs at around 01:06:00, so I was hoping to come in at around 01:20:00 but, to be honest, a finish was what I was really after!

Not only was this my first 10K, but it was my first big race too – and, with the race starting in nine groups (with me in the ninth), the elite runners had completed the race before I was over the start line!

The first kilometre flashed by – I set off way too fast but I was in clear space towards the front of my group and wanted to stay out front so my supporters at the end of Horse Guards’ Parade could grab a decent picture. After that, I tried to settle down into a comfortable pace but my pre-race efforts to make sure I was sufficiently hydrated set me back a bit (and I wasn’t going to “do a Paula” [Radcliffe]).  I couldn’t see the toilets that were supposed to be at 3km so I kept going until I saw a McDonalds at just after 4km – and rushing off the course, down the stairs into the basement and back out again must have cost at least 3 minutes (I wasn’t the only one to do this either!).  Only then could I settle into my planned pace of 12 minutes run, 2 walk at around 12 minutes per mile (I’m not fast!) but the point is I did it!

I have to say that the whole event was incredibly well organised (by the same people who organise the London Marathon), the weather was kind to us, and even though the water station at 7km had run dry by the time I passed, some wonderful people were handing out cups of water just before, at around 6.5km (thank you, guys). Kudos, too, to the guys who were clearing up the thousands of plastic bottles etc. on the course, as well as to the marshals that lined the whole route.

So, how did I do? Well, my (unofficial) stats (from RunKeeper) show me at just over 1 hour 30 minutes and the official time was 01:30:04.

Even though I’m a bit disappointed with my time, I’m absolutely stoked with the achievement. Not that long ago, I couldn’t run to the end of the street. Now it’s time to push on and lose some more weight, then run another 10K later this year (hopefully at closer to 01:20:00).

The challenge continues…

Rating images as part of a digital workflow


Earlier this week, I was at my local camera club meeting and fellow geek Haydn Langley was presenting on digital asset management (together with a few technical tips and tricks for photo editing).

One particular “lightbulb moment” was when Haydn put up an example of rating images (e.g. in Adobe Bridge or Lightroom). I’ve never been able to get my head around this because, over time, my idea of what makes a good image changes (as I, hopefully, get better at it). Now I think the key is to adopt a system that works and stick with it – any rating will be subjective but at least you’ll be consistent.

Haydn suggested rating images using a system similar to this (based on a system proposed by Mark Sabatella):

  • 1 star: images that are of no real quality (e.g. out of focus) and won’t be missed if deleted.
  • 2 stars: acceptable image but not one of the best. Nothing special, but could be kept as a record shot.
  • 3 stars: one of the better images of a subject. A “keeper”.
  • 4 stars: one of the best images: worth sharing with others (e.g. on Flickr), or printing large.
  • 5 stars: comparable to pros that we admire. Worthy of competition entry.

I might adapt this slightly, but it’s a good starting point.

He also suggested using colours to indicate where a particular image has reached in the workflow, for example:

  • Red: stage 1 – ingested (i.e. imported, renamed, etc.)
  • Yellow: stage 2 – global metadata assigned (copyright, etc.)
  • Green: stage 3 – survived initial cull
  • Blue: stage 4 – post-processed, survived final cull
  • Purple: stage 5 – image-specific metadata assigned (keyword tags for subject/content/style, etc.)

As I’m approaching 10,000 shots since I bought my D700 two years ago, I’ve got some work to do to “sort everything out” but this system should help a lot. I guess I should also think about renaming images on import rather than storing them as _DSCnnnn.NEF!
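
Lightroom and Bridge can both rename at import time, but for anyone who prefers to script it, something along these lines would do the job (a rough PowerShell sketch – the folder path and naming pattern are just my example, and it keys off the file timestamp rather than the EXIF capture date):

Get-ChildItem -Path 'D:\Import' -Filter *.NEF |
    Sort-Object LastWriteTime |
    ForEach-Object {
        # e.g. _DSC1234.NEF shot on 29 May 2011 becomes 20110529-_DSC1234.NEF
        $newName = '{0:yyyyMMdd}-{1}' -f $_.LastWriteTime, $_.Name
        Rename-Item -Path $_.FullName -NewName $newName
    }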

Some thoughts on modern communications and avoiding the “time-Hoover”


Last week I was reading an article by Shelley Portet looking at how poor productivity and rudeness are influenced by technology interruptions at work. As someone who’s just managed to escape from email jail yet again (actually, I’m on parole – my inbox may finally be at zero but I still have hundreds of items requiring action), I have to say that, despite all the best intentions, experience shows that I’m a repeat offender, an habitual email mis-manager – and email is just the tip of the proverbial iceberg.

Nowadays email is just one of many forms of communication: there’s instant messaging; “web 2.0” features on intranet sites (blogs, wikis, discussion forums); our internal social networking platform; business and personal use of external social networks (Twitter, LinkedIn, Slideshare, YouTube, Facebook… the list goes on) – so how can we prepare our knowledge workers for dealing with this barrage of interruptions?

There are various schools of thought on email management and I find that Merlin Mann’s Inbox Zero principles work well (see this video from Merlin Mann’s Google Tech Talk using these slides on action-based email – or flick through the Michael Reynolds version for the highlights), as long as one always manages to process to zero (that’s the tricky part that lands me back in email jail).

The trouble is that Inbox Zero only deals with the manifestation of the problem, not the root cause: people. Why do we send these messages? And how do we act on them?

A couple of colleagues have suggested recently that the trouble with email is that people confuse sending an email with completing an action, as if, somehow, the departure of the message from someone’s outbox on its way to someone else’s inbox implies a transfer of responsibility. Except that it doesn’t – there are many demands on my colleagues’ time and it’s up to me to ensure that we’re all working towards a common goal. I can help by making my expectations clear; I can think carefully before carbon copying or replying to all; I can make sure I’m brief and to the point (but not ambiguous) – but those are all items of email etiquette. They don’t actually help to reduce the volumes of messages sent and received. Incidentally, I’m using email as an example here – many of the issues are common whatever the communications medium (back to handwritten letters and typed memos as well as forwards to social networking) but, ultimately, I’m trying to do one of three things:

  • Inform someone that something has happened, will soon happen, or otherwise communicate something on a need to know basis.
  • Request that someone takes an action on something.
  • Confirm something that has been agreed via another medium (perhaps a telephone call), often for audit purposes.

I propose two courses of action, both of which involve setting the expectations of others:

  1. The first is to stop thinking of every message as requiring a response. Within my team at work, we have some unwritten rules: gratefulness is implied within the team (so we don’t fill each other’s inboxes with messages that say “thank you”); carbon copy means “for information”; and single-line e-mails can be dealt with in the subject heading.
  2. The second can be applied far more widely and that’s the concept of “service level agreements” for corporate communications. I don’t mean literally, of course, but regaining productivity has to be about controlling the interruptions. I suggest closing Outlook. Think of it as an email/calendar client – not the place in which to spend one’s day – and the “toast” that pops up each time a message arrives is a distraction. Even having the application open is a distraction. Dip in three times a day, five times a day, every hour, or however often is appropriate, but emails should not require nor expect an immediate response. Then there’s instant messaging: the name “instant” suggests the response time but presence is a valuable indicator – if my presence is “busy”, then I probably am. Try to contact me if you like but don’t be surprised if I ignore it until a better time. Finally, social networking: a great aid both to influencing others and to keeping abreast of developments, but it can also be what my wife would call a “time-Hoover” – so don’t even think that you can read every message; just dip in from time to time, join the conversation, then leave again.

Ultimately, neither of these proposals will be successful without cultural change. This issue is not unique to any one company but the only way I can think of to change the actions and/or thoughts of others is to lead by example… starting today, I think I might give them a try.

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog.]

A £5 homebrew Kinect sensor mount for a flat-screen TV


A few months ago (yes, it’s taken me that long to get around to posting it – sorry Stuart), my colleague Stuart Whatmore sent me some information he thought might make a good blog post.  Like me, he purchased a Microsoft Kinect sensor – but he had issues with it operating at a low level (i.e. below his television).

I have an “old-skool” CRT TV (I still reckon that my 13-year-old Trinitron will outlast most LCD panels I could buy today), so my Kinect sensor sits happily on the top of the telly, but Stuart has a handy tip for those who would like to mount theirs on a newer TV.

At the time, there was no official bracket available in the UK (I looked this week and Play.com lists a variety of options but they don’t come cheap – the official TV mount kit is £36.99) but Stuart’s solution uses an upturned bookend from Staples (which cost around £5), a bolt attached to the wall-mounting bracket, and some sticky pads (two under the bracket and two under the Kinect sensor).

Stuart Whatmore's Kinect bracket

Stuart notes that “The bolt on the picture needs reducing in size and the only caution is to ensure you do not screw through to the TV screen”! It sounds like a workable solution to me – but I take no responsibility for anyone else’s actions as a result of reading this blog post.  Personally, I’m tempted to try the official wall mount kit.

Global Corporate Challenge 2011 (#2011gcc)


Global Corporate Challenge
Today marks the start of the 2011 Global Corporate Challenge (GCC) and I’m really pleased to be a member of one of the Fujitsu UK and Ireland teams that are taking part. The GCC is the world’s largest and most exciting corporate health initiative, in the form of a pedometer-based walking challenge where employees engage in a virtual walk around the world for 16 weeks. I already walk from London Euston station to Baker Street (and back) on the days that I’m in the office but the GCC should help me get a little more active and shed some more pounds (which will help my Fit at 40 challenge), whilst competing against other teams (and hoping to beat our colleagues down under).

Based on an average of 12,000 steps per day, Fujitsu UK and Ireland’s 15 teams should be able to achieve a total of 139,860,000 steps (89,510 km/55,619 miles – there’s a quick sanity check on that figure after the list below) and:

  • Avoid 241.4 days of absenteeism.
  • Lose 525kg/1,157lbs in weight.
  • Increase our ability to handle stress by 40%.
  • Increase our quality of sleep by 40%.
  • Achieve an increase in overall health and wellbeing of 40%.
  • Including a 42% increase in energy.
  • Increasing productivity by 40%.
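
As a quick sanity check on that headline number: assuming teams of seven walkers (the GCC’s usual team size, as far as I recall) and roughly 111 days of counting (neither figure is stated above), 15 teams × 7 walkers × 12,000 steps × 111 days = 139,860,000 steps; at an average stride of around 0.64 m, that comes out at roughly 89,510 km.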

We’ll also avoid carbon emissions of 3.36 tonnes and the corporate sponsorship means that we’ll support some worthy causes too.

The Global Corporate Challenge runs until 6 September.

Best of Microsoft Management Summit 2011 (#mmsuk2011)


A couple of weeks ago, I spent a day at Microsoft’s Best of MMS 2011 event in London – reacquainting myself with the latest developments in System Center. It was a pretty full day (and a pretty full venue – Microsoft’s London offices are far from ideal for this type of event, especially when the foyer is filled with partner booths) and there were plenty of demonstrations of product features and advantages (although, in true software vendor style, not too much focus on business benefits).

This post brings together my notes from the event, picking up the highlights from the keynote, supplemented with a few more from the individual product sessions:

  • Consumerisation is not just about devices but also management and security.
  • System Center Configuration Manager (SCCM) 2012 is about empowering users – no longer device centric but user centric – application delivery is context-sensitive to the device that the user is using at that time. SCCM 2012 includes mobile device management: managing settings and policies for any device that can use Exchange ActiveSync.
  • Forefront Endpoint Protection (FEP) is now using SCCM – so it’s no longer necessary to have separate infrastructures for management and security – also FEP is now part of the core CAL (as is the Lync standard CAL). New 2012 release of FEP will run on SCCM 2012 (currently it runs on SCCM 2007 R2/3).
  • Windows Intune is a cloud-based solution for light management/unmanaged PCs (no on premise infrastructure required). It includes software assurance for Windows Enterprise (so users can stay on the latest Windows release).
  • There are various marketing pitches about the cloud – but it’s really a model for computing and not a place/destination. Cloud attributes include self-service, shared (there may be some logical partitioning), scalable/elastic, usage-based chargeback.
  • IT as a service includes: IaaS (addition of infrastructure resilience); PaaS (not worried about virtual machines); SaaS (consuming an application directly from a vendor).
  • Microsoft’s own datacentre infrastructure is based on extreme standardisation; business alignment (service-specific characteristics); SLA-driven architecture; and process maturity (re-imagined processes – not just automating today’s processes but thinking about the most efficient process for tomorrow, automation, change control).
  • Private cloud is a combination of virtualisation and management – adopting public cloud practices internally… it’s not just about virtual machines and other infrastructure – it’s a full stack of management capabilities.
  • The Microsoft stack is optimised for Microsoft software but there are also some cross-platform capabilities in System Center Operations Manager (SCOM) and in System Center Virtual Machine Manager (SCVMM).
  • Cloud services (public or private) are based on a provider-consumer relationship. A typical service provider role might be a data centre administrator, whose concerns would be fabric assembly (storage/network/compute), delegation and control, flexibility and elasticity, and cost efficiency. A consumer example is an application owner, who is looking for empowerment and agility, a self-service experience, application visibility and control, and simplicity.
  • System Center codename Concero is a new (web-based) product in development for (cloud) application owners, providing a view of all public and private clouds (Windows Azure subscriptions and on premise infrastructure, not just Hyper-V but ESX and Xen too). Pick a template and build out components (different tiers) for services within existing clouds. Configure the attributes that an application owner has to manage. Not just virtual machines but other data centre hardware (load balancers, etc.) too, using SCVMM in the background to deploy.
  • Request a new cloud using a catalogue from System Center Service Manager (SCSM).
  • Delivering a private cloud is about creating logical and standardised structures (because there is lots of legacy to manage there will always be a diverse infrastructure) and delegating portions to business functions.
  • SCVMM 2012 supports creation of delegated private cloud infrastructure – create a logical cloud by defining attributes such as number of virtual machines, hypervisor choice, available service templates, and what can be done with these resources.
  • Applications need to be abstracted from infrastructure (externalised configurations).
  • Business empowerment is not about virtual machines though (SLA management – and self service too) – SCOM 2012 and Avicode (recent acquisition) give application insight to create dashboards for cloud applications and drill down into alerts. These dashboards may be made available to managers via SharePoint web parts. SCOM 2012 also includes network monitoring.
  • System Center Orchestrator (SCO) is the new name for Opalis (process automation tool) providing run books automating operational processes.

System Center Roadmap 2011

Some SCVMM highlights:

  • SCVMM is now about far more than just virtual machines (I wonder when it will be renamed – perhaps System Center Fabric Manager?). Enhancements include:
    • Infrastructure (high availability/cluster aware, easier upgrade path, custom properties with name/value pairs, fully scriptable via PowerShell – see the short sketch after this list).
    • Fabric management (bare metal provisioning of Hyper-V using Windows Deployment Services and host profiles, multiple hypervisor support – Hyper-V/ESX/Xen, network management and logical modelling, storage management using standards such as SMI-S, update management, dynamic optimisation, power management/smart shutdown – integrated with baseboard management controllers, cluster management).
    • Cloud management (application owner usage, capacity and capability, delegation and quota).
    • Service management (service templates, application deployment, custom command execution, image based servicing).
  • SCVMM works with SCOM for load balancing (uses a connector and rebalances when limits are hit, which is a reactive approach) – in 2012 it also allows proactive load balancing (dynamic optimisation). This can also be used to schedule host power-downs.
  • Self-service portal is integrated in SCVMM 2012. Console is now context-aware so it can be used by all user roles and they only see delegated resources.
  • Server App-V is part of SCVMM 2012 – separating app state from the operating system, to enable image-based servicing and slide in a new operating system instead of traditional operating system updates. It is intended for line of business apps, not SQL, Exchange, SharePoint, etc.
  • Service designer to create 3 tier applications and template them within the VMM library. Define deployment order and how to scale out. Scale out a virtual machine tier via a right click (within the service definition) – Microsoft also plans to deliver a management pack to detect service performance from SCOM and scale accordingly.
  • Roles and features are now part of the operating system configuration in VM templates, as are application configuration items – not just virtualised applications but also script-based deployments. Deploy a service and it will be intelligently placed.
  • SCVMM still supports a server-based approach but is trying to bring a service-based approach to deploying and managing applications in the data centre. This is also represented in SCOM.
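
As a taste of that scriptability, the kind of thing the SCVMM 2012 PowerShell module exposes looks roughly like this (the server name is made up and the property selections are illustrative only, so treat this as a sketch rather than a recipe):

Import-Module virtualmachinemanager                        # installed with the VMM console
$vmmServer = Get-SCVMMServer -ComputerName 'vmm01.contoso.local'   # connect to the VMM management server
Get-SCVMHost | Select-Object Name                          # hypervisor hosts under management
Get-SCVirtualMachine | Select-Object Name, Status          # virtual machines and their current state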

More on SCO:

  • In the private cloud, change happens all the time but it’s the same change each time – so management is not about approval but logging. We can remove the manual effort but we do need the ability to choose (to cope with diversity, and to move at different speeds).
  • Three step process:
    • Integrate - take things (like disparate System Center products) and reference them as single entity.
    • Orchestrate - make them work together.
    • Automate - make things happen automatically.
  • If we jump straight to automation, we haven’t re-imagined the process. That means that if we take a bad process and automate it, we get a fast bad process! And if that breaks things, it really breaks them!
  • SCO (Opalis) concepts include:
    • Activities – intelligent tasks with defined actions.
    • Integration packs – extendable connectors to communicate with other solutions (outbound – SCO has an application integration engine in a web service form for inbound communications)
    • Databus – publish and consume mechanism (when something happens, capture information, put it on the bus, send along as it works through the runbook).
    • Runbooks – system level workflows that execute a series of linked activities to complete a defined set of actions.
  • SCO behaves in the same way as Opalis 6.3 with some minor UI changes and some investments in functionality but no fundamental changes in the way the product works (although it will be available in additional languages). It is a 64-bit only product.
  • SCSM also has an orchestration engine that is not based on Opalis – this will remain as a separate but complementary product.
  • Some integration packs have been remediated for SCO and will be available out of the box, but not all – the remaining packs are not tied to service pack releases and will be made available out of band.

The next session was, frankly, dull, droning on about SCSM and “GRC”, and I missed the presenter introducing the term (which I now know is governance, risk and compliance – it was on the title slide of the deck but there was no definition). I have no notes to share as I struggled to keep up from the start…

Moving on to System Center Data Protection Manager (SCDPM):

  • SCDPM 2006 provided centralised file-based backup (removing tapes from branches).
  • DPM 2007 included Volume Shadow Copy Service (VSS) application support for Exchange, SQL Server and SharePoint.
  • DPM 2010 included more enterprise features and client support. It still requires Active Directory but can now backup standalone machines off domain. Supported applications include Exchange, SQL Server, SharePoint, Dynamics, Virtual Server/Hyper-V, Windows Servers and Clients – and it can be used to backup SCDPM too. It can also backup highly available configurations. 
  • SCDPM can create backups every 15 minutes with one full backup, then block-level differentials. Online snapshots for disk-based recovery and tape-based backups. Initial backup can be immediate, scheduled or via removable media.
  • Many application owners use SCDPM and consider it as an extension to their application, rather than as a backup tool.
  • SCDPM 2012 introduces centralised management for up to 100 SCDPM 2010/2012 servers or 50,000 sources; role-based management; push to resume backups; SLA-based reporting (don’t alert on every failure, just those that matter); consolidated alerts (fix one problem, not 20 alerts); and extensibility via PowerShell to script around known issues.
  • The console uses SCOM (it is a management pack and a few binaries) and may be integrated with a ticketing system (either SCSM or third-party products such as HP OpenView via connectors). Roles are taken from SCOM (either create new roles or use existing ones).
  • Infrastructure enhancements include certificate-based authentication (where there is no NTLM trust in place) and smarter media co-location (choose specific data sources to share a tape).
  • Workload enhancements include SharePoint item level recovery, Hyper-V item level recovery (even when SCDPM is inside a virtual machine), and generic data source protection offering basic protection/recovery support for any referential data source with full application backup (full, delta and consistency check), original item recovery and restore as files to a network location, and XML support for applications without a VSS writer.
  • There is no native protection for non-Windows applications, but virtual machines with other operating systems (e.g. Linux) can be backed up – the key is VSS support.

Some of the key points I picked up from the SCCM presentation:

  • SCCM 2012 is less focused on packages and advertisements – it’s now about applications, not scripts.
  • User-centric approach, with better support for virtual environments.
  • New models for communication between components.  Improved infrastructure architecture using SQL replication.
  • Now includes mobile device management capabilities that were previously in System Center Mobile Device Manager as well as support for “light” management of mobile devices via Exchange ActiveSync.

Finally, SCOM 2012:

  • Features simplified disaster recovery (for the SCOM servers) and monitoring improvements (so a device is monitored by a pool, not a single server).
  • Support for monitoring Linux machines was introduced in SCOM 2007 R2; 2012 adds network monitoring and application monitoring (.NET and Java, when running on Windows).
  • Network and application monitoring is not intended to be all-encompassing, but provides information to take to specialist teams and at least have an idea that there is an issue – more than just a gut feeling that the network or the application is “broken”.
  • Introduction of dashboard templates – web and console views – with links that can be published and shared with others. More complex dashboards can also be created and integrated with SharePoint, with data visualisations customised via widgets. Dashboards and widgets are delivered via management packs.
  • SCOM 2007 moved from server to service monitoring – 2012 is taking the next step.

As I look back on the day’s event I do have to congratulate the System Center product group on their openness (talking about future products in a way that helps customers and partners to plan ahead) and for running a free of charge event like this which is a great way for me to get the information I need, without the significant investment of time and money that conference attendance entails. Now, if only the Windows client and server teams would do something similar…

Slidedecks and recorded sessions from the Best of Microsoft Management Summit 2011 are available on the Microsoft website.

An alternative enterprise desktop


Earlier this week, I participated in an online event that looked at the use of Linux (specifically Ubuntu) as an alternative to Microsoft Windows on the enterprise desktop.

It seems that every year is touted as the year of Linux on the desktop – so why hasn’t it happened yet? Or maybe 2011 really is the year of Linux on the desktop and we’ll all be using Google Chrome OS soon. Somehow I don’t think so.

You see, the trouble with any of the “operating system wars” arguments is that they miss the point entirely. There is a trio of people, process and technology at stake – and the operating system is just one small part of one of those elements. It’s the same when people start to compare desktop delivery methods – thick, thin, virtualised, whatever – it’s how you manage the desktop that counts.

From an end user perspective, many users don’t really care whether their PC runs Windows, Linux, or whatever-the-next-great-thing-is. What they require (and what the business requires – because salaries are expensive) is a system that is “usable”. Usability is in itself a subjective term, but that generally includes a large degree of familiarity – familiarity with the systems that they use outside work. Just look at the resistance to major user interface changes like the Microsoft Office “ribbon” – now think what happens when you change everything that users know about using a PC. End users also want something that works with everything else they use (i.e. an integrated experience, rather than jumping between disparate systems). And, for those who are motivated by the technology, they don’t want to feel that there is a two tier system whereby some people get a fully-featured desktop experience and others get an old, cascaded PC, with a “light” operating system on it.

From an IT management standpoint, we want to reduce costs. Not just hardware and software costs but the costs of support (people, process and technology). A “free” desktop operating system is just a very small part of the mix; supporting old hardware gets expensive; and the people costs associated with major infrastructure deployments (whether that’s a virtual desktop or a change of operating system) can be huge. Then there’s application compatibility – probably the most significant headache in any transformation. Yes, there is room for a solution that is “fit for purpose” and that may not be the same solution for everyone – but it does still need to be manageable – and it needs to meet all of the organisation’s requirements from a governance, risk and compliance perspective.

Even so, the days of allocating a Windows PC to everyone in an effort to standardise every single desktop device are starting to draw to a close. IT consumerisation is bringing new pressures to the enterprise – not just new device classes but also a proliferation of operating system environments. Cloud services (for example consuming software as a service) are a potential enabler – helping to get over the hurdles of application compatibility by boiling everything down to the lowest common denominator (a browser). The cloud is undoubtedly here to stay and will certainly evolve but even SaaS is not as simple as it sounds with multiple browser choices, extensions, plug-ins, etc. It seems that, time and time again, it’s the same old legacy applications (generally specified by business IT functions, not corporate IT) that make life difficult and prevent the CIO from achieving the utopia that they seek.

2011 won’t be the year of Linux on the desktop – but it might just be the year when we stopped worrying about standardisation so much; the year when we accepted that one size might not fit all; and the year when we finally started to think about applications and data, rather than devices and operating systems.

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog.]

Migrating mail and contacts from Google Mail to Office 365


I started to write this post in September 2008, when it was titled “Importing legacy e-mail to my Google Apps account”. A few years on and I’m in the process of dumping Google Apps in favour of Microsoft Office 365 (if only Microsoft had a suitable offering back then, I might not have had to go through this) but I never did manage to bring all my mail across from my old Exchange server onto Google.

The draft blog post that I was writing back then (which was never published) said:

First of all, I backed up my Exchange mailbox. I also backed up my Google Mail and, whilst there are various methods of doing this, I used Outlook again, this time to connect to my Google Apps account via IMAP and take a copy into a PST file. With everything safe in a format I knew how to access if necessary, I set about the import, using the Google Email Uploader to import my Outlook mailbox into my Google Apps account.

Rather than import into my live mailbox straightaway, I brought the legacy messages into a new mailbox to test the process. Then I did the same again into my real e-mail account.

With many years’ worth of e-mail to import (29,097 messages), this took a long while to run (about 20 hours) but gradually I saw all the old messages appear in Google Mail, with labels used to represent the original folder names. There were a few false starts but, thankfully, the Google Email Uploader picks up where it left off. It also encountered a few errors along the way but these were all messages with suspect attachments or with malformed e-mail addresses and could be safely disregarded (I still have the .PST as a backup anyway).

Except that I never did get my mailbox to a state where I was happy it was completely migrated. Uploading mail to GMail was just too flaky; too many timeouts; not enough confidence in the integrity of my data.

In the end, I committed email suicide and started again in Google Apps Mail. I could do the same now in Office 365 but this time I don’t have an archive of the messages in a nice handy format (.PST files do have their advantages!) and anyway, I wanted to be able to bring all of my mail and contacts across into my nice new 25GB mailbox (I’ve written previously about synchronising my various calendars – although I should probably revisit that too, some time).

Migrating contacts

Migrating my contacts was pretty straightforward. I’m using a small business subscription on Office 365 so I don’t have the option of Active Directory integration (the enterprise plans have this), but I exported my contacts from GMail in .CSV format and brought them into Office 365 through the Outlook Web App.

One thing that’s worth noting – Google Mail has an option to merge duplicate contacts – I used this before I exported, just to keep things clean (e.g. I had some contacts with just a phone number and others with an e-mail address – now they are combined).

Migrating mail

Microsoft seems to have thought the mail migration process for Office 365 through pretty thoroughly. I don’t have space to go into the full details here, but it supports migrations from Exchange 2007 and later (using autodiscover), Exchange 2003 (manually specified settings) and IMAP servers. After a migration batch is initiated, Office 365 attempts to take a copy of the mailbox and then synchronise any changes every 24 hours until the administrator marks the migration as complete (during the migration period they should also switch over the MX records for the domain).

For GMail, IMAP migration is the option that’s required, together with the following settings:

  • IMAP server: imap.gmail.com
  • Authentication: Basic
  • Encryption: SSL
  • Port: 993

(I only had one mailbox to migrate.)

Because GMail uses labels instead of folders, I excluded a number of “folders” from the migration to avoid duplicates. Unfortunately, this didn’t seem to have any effect (I guess I can always delete the offending folders from the imported data, which is all in a subfolder of [Google Mail]).

Finally, I provided a CSV file with the email address, username and password for each mailbox that I wanted to migrate.
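
If memory serves, the CSV is a simple three-column file with a header row like the one below, where EmailAddress is the Office 365 mailbox and UserName/Password are the credentials on the GMail (IMAP) side – the address and password here are obviously made up:

EmailAddress,UserName,Password
mark@example.co.uk,user@gmail.com,Pa55word!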

Unfortunately, I’ve had a few migration failures – and the reports seem to suggest connectivity issues with Google (the Migration Error report includes messages like “Data migration for this mailbox failed because of delays on the remote server.” and “E-Mail Migration failed for this user because no e-mail could be downloaded for 1 hours, 20 minutes.”). Thankfully, I was able to restart the migration each time.

Monitoring the migration

Monitoring the migration is pretty straightforward as the Exchange Online portion of Office 365 gives details of current migrations. It’s also possible to control the migration from the command line. I didn’t attempt this, but I did use two commands to test connectivity and to monitor progress:

Test-MigrationServerAvailability -imap -remoteserver imap.gmail.com -port 993

and:

Get-MigrationStatus

Details of how to connect to Office 365 using PowerShell can be found in my post about changing the primary email address for Office 365 users.
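
For reference, the remote PowerShell connection goes roughly like this (this was the standard Exchange Online endpoint at the time of writing – see the post linked above for the full walkthrough):

$cred = Get-Credential                                   # Office 365 administrator credentials
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri 'https://ps.outlook.com/powershell/' `
    -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session                                # brings the Exchange Online cmdlets into the local session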

Points of note

I found that, whilst a migration batch was in process, I needed to wait for that batch to finish before I could load another batch of mailboxes. Also, once a particular type of migration (Exchange 2003, 2007 or IMAP) has been started, it’s not possible to create batches of another type until the migration has been completed. Finally, completing a migration can take some time (including clean-up) before it’s possible to start a new migration.

Wrap-up

It’s worth noting that Office 365 is still in beta and that any of this information could change. 24 hours seems a long while to wait between mailbox synchronisations (it would be good if this was customisable) but the most significant concern for me is the timeouts on mailbox migrations. I can rule out any local connectivity issues as I’m migrating between two cloud services (Google Apps Mail and Office 365) – but I had enough issues on my (single mailbox) migration to concern me – I wouldn’t want to be migrating hundreds of mailboxes this way. Perhaps we’ll see third party tools (e.g. from Quest Software) to assist in the migration, comparing mailboxes to see that all data has indeed been transferred.