Useful Links: July 2011

This content is 13 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A list of items I’ve come across recently that I found potentially useful, interesting, or just plain funny:

  • Topsy – Claims to offer real-time search for the social web – certainly seems good for historical Twitter searches (via Jack Schofield)
  • Future Perfect – Grammar tips to improve written communications

Cloud adoption “notes from the field”: people, politics and price

I’ve written about a few of the talks from the recent Unvirtual unconference, including Tim Moreton’s look at big data in the cloud and Justin McCormack’s ideas for re-architecting cloud applications. I’ve also written previously about a Joe Baguley talk on why the consumerisation of IT is nothing to do with iPads. The last lightning talk that I want to plagiarise (actually, I did ask all of the authors before writing about their talks!) is Simon Gallagher (@vinf_net)’s “notes from the field” talk on his experiences of implementing cloud infrastructure.

Understanding cloud isn’t about Amazon or Google

There is a lot happening in the private cloud space, and hybrid clouds are a useful model too: not everybody is a web 2.0 start-up with a green-field approach, yet enterprises still want some of the capabilities offered by cloud technologies.

Simon suggests that private cloud is really just traditional infrastructure, with added automation/chargeback… for now. He sees technology from the public cloud filtering down to the private space (and hybrid is the reality for the short/medium term for any sizeable organisation with a legacy application set).

The real cloud wins for the enterprise are self-service, automation and chargeback, not burst/flex models.

There are three types of people that want cloud… and the one who doesn’t

Simon’s three types are:

  1. The boss from the Dilbert cartoons who has read a few too many analyst reports (…and is it done yet?)
  2. Smart techies who see cloud as a springboard to the next stage of their career (something new and interesting)
  3. Those who want to make an inherited mess somebody else’s problem

There is another type of person who doesn’t welcome cloud computing – people whose jobs become commoditised.  I’ve been there – most of us have – but commoditisation is a fact of life and it’s important to act like the smart techies in the list above, embracing change, learning and developing new skills, rather than viewing the end of the world as nigh.

Then there are the politics

The first way to cast doubt on a cloud project is to tell everyone it’s insecure, right?

But:

  • We trust our WAN provider’s MPLS cloud.
  • We trust our mail malware gateways (e.g. MessageLabs).
  • We trust our managed service providers’ staff.
  • We trust the third party tape collection services.
  • We trust our VPNs over the Internet.
  • We may already share datacentres with competitors.

We trust these services because we have technical and audit controls… the same goes for cloud services.

So, I just buy cloud products and start using them, yeah?

Cloud infrastructure is not about boxed products.  There is no “one-size-fits-all” Next, Next, Next, Finish wizard; instead, there are complex issues of people, process, technology, integration and operations.

It’s about applications, not infrastructure

Applications will evolve to leverage PaaS models and next-generation cloud architectures. Legacy applications will remain legacy – they can be contained by the cloud but not improved. Simple provisioning needs automation, coding, APIs. Meanwhile, we can allow self-service but it’s important to maintain control (we need to make sure that services are de-provisioned too).

Amazon is really inexpensive – and you want how much?

If you think you can build a private cloud (or get someone else to build you a bespoke cloud) for the prices charged by Amazon et al, you’re in for a shock. Off the shelf means economies of scale and, conversely, bespoke does not come cheap. Ultimately, large cloud providers diversify their risks (not everyone is using the service fully at any one time) and somebody is paying.

Opexification

There’s a lot of talk about the move from capital expenditure to operating expenditure (OpEx-ification) but accountants don’t like variable costs. And cloud pricing is a rate card – it’s certainly not open book!

Meanwhile, the sales model is based on the purchase of commercial software (enterprises still don’t use open source extensively) and, whilst the public cloud implies the ability to flex up/down, private cloud can’t do this (you can’t take your servers back and say “we don’t need them this month”). It’s the same for software, with sales teams concentrating on licence sales rather than subscriptions.

In summary

Simon wrapped up by highlighting that, whilst the public cloud has its advantages, private and hybrid clouds are big opportunities today.

Successful implementation relies on:

  • Motivated people
  • A pragmatic approach to politics
  • An understanding of what you want (and how much you can pay for it)

Above all, Simon’s conclusion was that your mileage may vary!

Can we process “Big Data” in the cloud?

I wrote last week about one of the presentations I saw at the recent Unvirtual conference and this post highlights another one of the lightning talks – this time on a subject that was truly new to me: Big Data.

Tim Moreton (@timmoreton), from Acunu, spoke about using big data in the cloud – making it “elastic and sticky” – and I’m going to try to get the key points across in this post. Let’s hope I get it right!

Essentially, “big data” is about collecting, analysing and serving massive volumes of data.  As the Internet of things becomes a reality, we’ll hear more and more about big data (being generated by all those sensors) but Tim made the point that it often arrives suddenly: all of a sudden you have a lot of users, generating a lot of data.

Tim explained that the key ingredients for managing big data are storage and compute resources, but it’s actually about more than that: not just any storage or compute will do, because we need high scalability, high performance and low unit costs.

Compute needs to be elastic so that we can fire up (virtual) cloud instances at will to provide additional resources for the underlying platform (e.g. Hadoop). Spot pricing, such as that provided by Amazon, allows a maximum price to be set, to process the data at times when there is surplus capacity.
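
As a rough illustration of that idea (this is my own sketch, not something from Tim’s talk – the AMI ID, instance type and bid price below are placeholders), a spot request with a maximum price might look something like this using the boto3 library:

# A minimal sketch: bidding for surplus EC2 capacity with a maximum price, so
# that big data jobs only run when spare capacity is cheap. Assumes boto3 is
# installed and AWS credentials are configured; the AMI ID, instance type and
# bid price are placeholders, not recommendations.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.request_spot_instances(
    SpotPrice="0.05",            # maximum price (USD/hour) we are willing to pay
    InstanceCount=4,             # a small batch of workers for the Hadoop job
    Type="one-time",             # give the capacity back when the job is done
    LaunchSpecification={
        "ImageId": "ami-12345678",     # placeholder AMI with Hadoop pre-installed
        "InstanceType": "m1.large",
    },
)

for request in response["SpotInstanceRequests"]:
    print(request["SpotInstanceRequestId"], request["State"])

If the spot price rises above the bid, the instances are reclaimed – which is exactly why this suits batch-style analysis of big data rather than anything latency-sensitive.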

The trouble with big data and the cloud is virtualisation. Virtualisation is about splitting units of hardware to increase utilisation, with some overhead incurred (generally CPU or IO) – essentially, multiple workloads are consolidated onto shared compute resources.  Processing big data works the other way around – combining machines for massive parallelisation – and that doesn’t sit too well with cloud computing: at least I’m not aware of too many non-virtualised elastic clouds!

Then, there’s the fact that data is decidedly sticky.  It’s fairly simple to change compute providers but how do you pull large data sets out of one cloud and into another? Amazon’s import/export involves shipping disks in the post!

Tim concluded by saying that there is a balance to be struck.  Cloud computing and big data are not mutually exclusive but it is necessary to account for the costs of storing, processing and moving the data.  His advice was to consider the value (and the lock-in) associated with historical data, to process data close to its source, and to look for solutions that are built to span multiple datacentres.

[Update: for more information on “Big Data”, see Acunu’s Big Data Insights microsite]

IT and the law – is misalignment an inevitable consequence of progress?

Yesterday evening, I had the pleasure of presenting on behalf of the Office of the CTO to the Society for Computers and Law (SCL)‘s Junior Lawyers Group. It was a slightly unusual presentation in that David [Smith] often speaks to CIOs and business leaders, or to aspiring young people who will become the next generation of IT leaders.  Meanwhile, I was given the somewhat daunting challenge of pitching a presentation to a room full of practising lawyers – all of whom work in the field of IT law but who had signed up for the event because they wanted to know more about the technology terms that they come across in their legal work.  Because this was the SCL’s Junior Lawyers Group, I considered that most of the people in the room had grown up in a world of IT, so finding a level which was neither too technical nor too basic was my biggest issue.

My approach was to spend some time talking about the way we design solutions: introducing the basic concepts of business, application and technology architectures; talking about the need for clear and stated requirements (particularly non-functionals); the role of service management; and introducing concepts such as cloud computing and virtualisation.

Part way through, I dumped the PowerPoint (Dilbert fans may be aware of the danger that is “PowerPoint poisoning”) and went back to a flip chart to sketch out a view of why we have redundancy in our servers, networks, datacentres, etc. and to talk about thin clients, virtual desktops and other such terms that may come up in IT conversations.

Then, back to the deck to talk about where we see things heading in the near future before my slot ended and the event switched to an exercise in prioritising legal terms in an IT context.

I’m not sure how it went (it will be interesting to see the consolidated feedback from the society) but there was plenty of verbal feedback to suggest the talk was well received, I took some questions (always good to get some audience participation) and, judging from the frantic scribbling of notes at one table, I must have been saying something that someone found useful!

The main reason for this blog post is to highlight some of the additional material in the deck that I didn’t present last night.  There are many places where IT and the law are not as closely aligned as we might expect. Examples include:

These items could have been a whole presentation in themselves but I’m interested to hear what the readers of this blog think – are these really as significant as I suggest they are? Or is this just an inevitable consequence of  fast-paced business and technology change rubbing up against a legal profession that’s steeped in tradition and takes time to evolve?

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog.]

A trip down memory lane: some old PC hardware prices

Whilst clearing out my office I found some receipts. Nothing special there, except that these were for IT kit I’d purchased in the late 90s/early 00s. I couldn’t believe how much I’d paid for some of this stuff:

  • Hayes Accura 56K modem (in December 1997): £139
  • Pretec Compact Flash (CF) 56K modem (in October 2001): AUS$280 (about £112 back then)
  • 64MB CF Card (in July 2001): £68 (and another £16 for a PCMCIA adapter)

That’s not a typo: that really was a 64MB flash card!

How times change!

How fat is really fat?

Regular readers will know about my “Fit at 40” challenge and I have to say it’s pretty hard work right now. I lost the first stone (and a bit) and I built up to the point where I ran my first 10K race (albeit a little slower than I would like) but now it’s into the real hard slog… and I’ve found that I need to focus a little harder (although I’m still heading in the right direction).

I mentioned previously how my friend and former colleague, Garry Martin, is providing motivation with a sponsorship deal that requires me to meet certain targets that we have agreed as realistic but challenging. I will lose the three-and-a-bit stone, and I will run three races of at least 10K before my 40th birthday – but I really do hope to do a little better than that (let’s call it a stretch target!)

Friends and family keep on telling me that I’m looking slimmer, fitter, and healthier, but I put that down to wearing dark colours, a recent holiday leaving me suntanned and breathing in/standing upright! The fact is that the scales tell me I’m not shifting the weight fast enough – and the running is getting harder, not easier. Even so, I decided to take a look at my body fat measurements.

It seems that, in the UK, health professionals are obsessed with the Body Mass Index (BMI) – a simple calculation based on height and weight.  That’s all well and good, but some of us really are “stockier” than others.  According to a simple BMI calculation, I am obese (I am) but I’ll still be overweight when I reach my goal… in fact, I’d need to get below 12 and a half stone to be “healthy”, despite having not been that weight since I was a teenager (and still growing up, rather than out).

In theory, all of this exercise is helping me to build muscle so it makes more sense to understand just how much of me is fat.  For a while, I was using one of the many calculations available on the ‘net but they really do vary tremendously – so much so that they can only be considered as a guide (not much better than the BMI).

More importantly, the calculation I was using wasn’t suggesting any progress – despite clearly losing weight and being a lot leaner, my wrist and forearm measurements (two of the metrics used) are probably not going to change much.

When I look at my “ideal weight” there are just as many ideas of “ideal”:

  • Based on the Robinson formula (1983), my ideal weight is 156.5 lbs.
  • Based on the Miller formula (1983), my ideal weight is 155.0 lbs.
  • Based on the Devine formula (1974), my ideal weight is 160.9 lbs.
  • Based on the Hamwi formula (1964), my ideal weight is 165.3 lbs.
  • Based on the healthy BMI recommendation, my recommended weight is 128.9 lbs – 174.2 lbs.
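
For anyone wondering where those figures come from, the formulas are simple functions of height. Here’s a quick sketch (my own, not taken from any of the sites I used – and the 5′10″ height is an assumption that happens to reproduce the numbers above almost exactly):

# A quick sketch of the "ideal weight" formulas quoted above (for men),
# plus the healthy BMI range. The 70-inch (5'10") height is my assumption -
# it's the value that reproduces the figures in the list almost exactly.
KG_TO_LB = 2.2046

def ideal_weights_lb(height_in):
    over = height_in - 60                     # inches over 5 feet
    formulas_kg = {
        "Robinson (1983)": 52.0 + 1.9 * over,
        "Miller (1983)":   56.2 + 1.41 * over,
        "Devine (1974)":   50.0 + 2.3 * over,
        "Hamwi (1964)":    48.0 + 2.7 * over,
    }
    return {name: round(kg * KG_TO_LB, 1) for name, kg in formulas_kg.items()}

def healthy_bmi_range_lb(height_in, low_bmi=18.5, high_bmi=25.0):
    height_m = height_in * 0.0254
    return (round(low_bmi * height_m ** 2 * KG_TO_LB, 1),
            round(high_bmi * height_m ** 2 * KG_TO_LB, 1))

print(ideal_weights_lb(70))       # ~156.5, 155.0, 160.9 and 165.3 lbs
print(healthy_bmi_range_lb(70))   # ~(128.9, 174.2) lbs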

Then, last month, I saw that one of the local leisure centres was advertising Body Stat tests (body composition testing) for £5, so I booked myself in.  Using a bio-electrical impedance analysis (BIA) test I was given a range of figures indicating how fat I am and what I should be aiming for. BIA is not without its faults but at least by using a commercial product rather than a consumer body fat meter I’ll have increased the chances of accuracy. Just as important was the confirmation that my goal weight is realistic, as well as the information that I’m not sufficiently hydrated (drinking more water should help me to lose weight). It also told me my basal metabolic rate (I could also calculate that myself, or use a different calculation to take into account exercise and target weight loss), which explains why I’m so damned hungry if I don’t eat enough… and I want to lose weight at a sensible rate of 1-2 pounds a week, not go on a crash diet (I use the Weight Loss Resources site to help with tracking my food and exercise).

Ideally, I would have baselined this when I started the challenge but at least I have the numbers now I’m part way in, and I can check them again as I hit my milestones over the coming months. For what it’s worth, the ideal weight that the Body Stat test came up with was 85-91kg (13st 5lb-14st 5lb) and 13-19% body fat. That seems a lot more realistic than the numbers above and the top end is just below my “Fit at 40” target (so, “Fitter at 41?”)

Just one more thing before I sign off… a quick video from the Dads’ Race at my son’s school sports day last week:

I think I came second (I’m in the green t-shirt). The challenge continues…

From snapshots to social media – the changing picture of photography (@davidfrohlich at #digitalsurrey)

My visits to Surrey seem to be getting more frequent… earlier tonight I was in Reigate, at Canon‘s UK headquarters for another great Digital Surrey talk.

The guest speaker was Professor David Frohlich (@davidfrohlich) from the University of Surrey Digital World Research Centre, who spoke about the changing picture of photography and the relationship between snapshots and social media, three eras of domestic photography, the birth and death of the album and lessons for social media innovation.

I often comment that I have little time for photography these days and all I do is “take snapshots of the kids” but my wife disagrees – she’s far less critical of my work and says I take some good pictures. It was interesting to see a definition of a snapshot though, with its origins in 1860s hunting and “shooting from the hip” (without careful aim!). Later it became “an amateur photograph” so I guess yes, I do mainly take snapshots of the kids!

Professor Frohlich spoke of three values of snapshots (from research by Richard Chalfen in 1987 and Christopher Musello in 1979):

  • Identity.
  • Memory (triggers – not necessarily of when the photograph was taken but of events around that time).
  • Communication.

He then looked at a definition of social media (i.e. a medium for social interaction) and suggested that photographs were an early form of social media (since integrated into newer forms)!

Another element to consider is that of innovation and, using Philip Anderson and Michael L Tushman’s 1990 theory as an example, he described how old technological paths hit disruption; there’s then an era of ferment (i.e. discontinuous development) before a dominant design appears and things stabilise again.  In Geoff Mulgan’s 2007 Process of Social Innovation it’s simply described as new ideas that work, or changing practice (i.e. everyday behaviour).

This led to the discussion of three eras of domestic photography. Following the invention of photography (1830-1840) we saw:

  1. The portrait path [plate images] (1839-1888) including cartes-de-visite (1854-1870)
  2. The Kodak path [roll film] (1888-1990) from the Kodak No. 1 camera in 1888, through the first Polaroid camera (1947) and colour film cartridges (1963), before being disrupted by the birth of electronic still video photography (1980-1990)
  3. The digital path (from 1990)

What we find is that the three values of snapshots overlay this perfectly (although the digital era also has elements of identity, it is mainly about communication).

Whilst the inventor of the photograph is known (actually Fox Talbot’s Calotype/Talbotype and Daguerre’s Daguerreotype were both patented in 1839), it’s less well-known who invented the album.

Professor Frohlich explained that the album came into being after people swapped cartes-de-visite (just like today’s photographic business cards!) which became popular around 1850 as a standard portrait sized at 2.5″ x 4″.  These cards could be of individuals, or even famous people (Abraham Lincoln, or Queen Victoria) and in 1854, Disdéri’s camera allowed mass production of images with several on a single sheet of paper.  By 1860 albums had been created to store these cards – a development from an earlier pastime of collecting autographs – and these albums were effectively filled with images of family, people who visited and famous people – just as Facebook is today!

The Kodak era commenced after George Eastman’s patent was awarded on 4 September 1888 for a personalised camera which was more accessible, less complex than portrait cameras, and marketed to women around the concept of the Kodak family album.  Filled with images of “high days and holidays” – achievements, celebrations and vacations – these were the albums that most of us know (and some of us still maintain) and the concept lasted for the next century (arguably it’s still in existence today, although increasingly marginalised).

Whilst there were some threats (like Polaroid images) they never quite changed the dominant path of photography. Later, as people became more affluent, there were more prints and people built up private archives with many albums and loose photographs (stored in cupboards – just as many of my family’s are in our loft!).

As photography met ICT infrastructure, the things that we could do with photography expanded but things also became more complex, with a mesh of PCs, printers and digital cameras. Whilst some manufacturers cut out the requirement for a computer (with cameras communicating directly with printers), there were two inventions that really changed things: the camera phone and the Internet:

  • Camera phones were already communications-centric (from the phone element), creating a new type of content that was more about communication than storing memories. In 2002, Turo-Kimmo Lehtonen, Ilpo Koskinen and Esko Kurvinen studied the use of mobile digital pictures, not as images for an album but images to say “look where I am”. Whilst technologies such as MMS were not used as much as companies like Nokia expected [largely due to transmission costs imposed by networks] we did see an explosion in online sharing of images.
  • Now we have semi-public sharing, with our friends on Facebook (Google+, etc.) and even wider distribution on Flickr. In addition, photographs have become multimedia objects and Professor Frohlich experimented with adding several types of audio to still images in 2004 as digital storytelling.

By 2008, Abigail Durrant was researching photographic displays and intergenerational relationships at home. She looked at a variety of display devices but, critically, found that there was a requirement for some kind of agreement as to what could be displayed where (meta-rules for display, if you like).

Looking to the future there are many developments taking place that move beyond the album and on to the archive. Nowadays we have home media collections – could we end up browsing beautiful ePaper books that access our libraries? Could we even see the day where photographic images have a “birthday” and prompt us to remember things (e.g. do you remember when this image was taken, 3 years ago today?)

Professor Frohlich finished up with some lessons for social media innovation:

  • Innovation results from the interaction of four factors: practice; technology; business; and design.
  • Business positioning and social shaping are as important to innovation as technology and its design.
  • Social media evolve over long periods of time (so don’t give up if something doesn’t happen quickly).
  • Features change faster than practices and values (social networking is a partial return to identity – e.g. tagging oneself – and not just about communications).
  • Some ideas come around again (like the stereograph developing into 3D cinema).
  • Infrastructure and standards are increasingly key to success (for example, a standard image size).

I do admit to being in admiration of the Digital Surrey team for organising these events – in my three visits I’ve seen some great speakers. Hopefully, I’ve covered the main points from this event but Andy Piper (@andypiper) summed it up for me in a single tweet.

Re-architecting for the cloud and lower costs

One of the presentations I saw at the recent London Cloud Camp (and again at  Unvirtual) was Justin McCormack (@justinmccormack)’s lightning talk on “re-architecting for the ‘green’ cloud and lower costs” (is re-architecting a word? I don’t think so but re-designing doesn’t mean the same in this context!).

Justin has published his slides but, in essence, he’s looking at ways to increase the scalability of our existing cloud applications. One idea is to build out parallel computing systems with many power-efficient CPUs (e.g. ARM chips), but Amdahl’s law kicks in and there is no real performance boost from building out – in fact, the line is almost linear, so there is no compelling argument.
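
For context, this is just textbook Amdahl’s law (nothing from Justin’s slides): the speedup from adding processors is capped by the serial fraction of the work, which is why piling on lots of slow-but-efficient cores doesn’t necessarily help:

# Textbook Amdahl's law: the speedup from n processors when a fraction p of
# the work can be parallelised. The 90% figure below is illustrative only.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 90% of the work parallelisable, 256 slow-but-efficient cores give
# less than a 10x speedup - the serial 10% dominates.
for n in (2, 8, 32, 256):
    print(n, round(amdahl_speedup(0.9, n), 2))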

Instead, Justin argues that we currently write cloud applications that use a lot of memory (Facebook is understood to have around 200TB of memory cache). That’s because memory is fast and disk is slow. But with the advent of solid state devices we have something in between (that’s also low-power).

Instead of writing apps to live in huge RAM caches, we can use less memory and more flash drives. The model is not going to be suitable for all applications but it’s fine for “quite big data” – i.e. normal, medium-latency applications. A low-power cloud is potentially a middle ground with huge cost-saving potential, if we can write cloud applications accordingly.

Justin plans to write more on the subject soon – keep up with his thoughts on the  Technology of Content blog.

Embedding streaming video content (e.g. YouTube and BBC iPlayer) in a PowerPoint presentation

One of the reasons for the huge gap in posts here is that I’ve lost most of the last week to creating a presentation for an event where I’m speaking next week. The event is for The Society for Computers and Law, and I’m taking a look inside the black box of technology.  I was briefed not to expect much technical knowledge as the audience are junior lawyers but I figure they probably know quite a lot already as they do work in IT law, so it’s been pretty difficult to work out what level to pitch things at.  In the end, all I can do is take the event organisers’ advice and hope it works out on the night… we’ll see…

Anyway, I wanted to mix things up a bit and avoid death by PowerPoint.  My slides are pretty pictorial (at least they are if the brand police don’t make me change them to something bland and corporate…) but I wanted to mix in some video too. PowerPoint is quite happy to embed video from a file but it’s a bit harder if you want to embed video that’s streamed from the web, for example from YouTube.

There is a way though (without resorting to installing an add-in)…

I found a post from iSpring Software that goes through the process of manually inserting Flash into PowerPoint 2007 (the version I’m using). There’s more detail in the original post, so I recommend that you read it, but these are the basic steps:

  1. Make sure the Developer tab is visible in the ribbon – if not then turn it on in the Popular tab inside the PowerPoint Options.
  2. On the Developer tab in the ribbon, click the More Controls button (looks like a hammer and screwdriver, crossed over).
  3. Select a Shockwave Flash object and drag a rectangle on the current slide. Don’t worry about the size.
  4. Right click on the control and select Properties.
  5. Go down to the Movie attribute and add the path to the Flash movie. This could be a local file… but it also works with YouTube URLs (e.g. http://www.youtube.com/v/PPnoKb9fTkA?version=3).

A couple of points to note:

  • You’ll need to save the Presentation as a PowerPoint Macro-Enabled Presentation (which is a .pptm file).
  • The video content may not actually show in the PowerPoint editor, but it’s there if you start the slide show.
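
If you’d rather not click through those steps by hand, the same thing can be scripted against the PowerPoint object model. This is only a sketch (using Python and pywin32 rather than VBA; the file path is a placeholder, and it assumes the Flash ActiveX control is installed, just as the manual method does):

# A sketch of scripting the same steps via COM (assumes PowerPoint 2007, the
# Flash ActiveX control and pywin32 are installed). The file path is a
# placeholder; the YouTube URL is the same example as in step 5 above.
import win32com.client

powerpoint = win32com.client.Dispatch("PowerPoint.Application")
powerpoint.Visible = True

presentation = powerpoint.Presentations.Open(r"C:\talks\example-presentation.pptm")
slide = presentation.Slides(1)

# Equivalent of dragging out a Shockwave Flash object from the Developer tab
flash = slide.Shapes.AddOLEObject(
    Left=50, Top=50, Width=640, Height=360,
    ClassName="ShockwaveFlash.ShockwaveFlash.1",
)

# Equivalent of setting the Movie attribute in the control's properties
flash.OLEFormat.Object.Movie = "http://www.youtube.com/v/PPnoKb9fTkA?version=3"
flash.OLEFormat.Object.Playing = True

presentation.Save()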

So that’s YouTube… but what about other Flash content? Well, you may find that you can extract an appropriate URL from the embed code – and that’s what I did for BBC iPlayer content.  Note that this works for videos embedded on the BBC website; it’s not for videos downloaded to the iPlayer Desktop client.

I don’t normally rate Yahoo Answers but it did turn up a two-year-old post from someone called wm1995 that gave me the answer.  Get the embed code for the video that you want to embed and look for the FlashVars parameter:

Add that FlashVars code to the end of http://news.bbc.co.uk/player/emp/2.10.7938_7967/9player.swf?embedPageUrl= and you can view the video in a browser (without the rest of the webpage). Similarly, you can take the same URL and use it inside PowerPoint, so the Movie attribute in the Shockwave Flash object will look something like:

http://news.bbc.co.uk/player/emp/2.10.7938_7967/9player.swf?embedPageUrl=”&config_plugin_fmtjLiveStats_pageType=eav1&playlist=http://news.bbc.co.uk/media/emp/7690000/7694400/7694471.xml&config_settings_language=default&config_plugin_fmtjLiveStats_edition=Domestic&holding=http://newsimg.bbc.co.uk/media/images/45149000/jpg/_45149136_aef85a05-4b8d-4e24-8947-055f995745d2.jpg&config_settings_skin=silver&autoPlay=true&embedReferer=http://www.datacenterknowledge.com/archives/2008/10/28/a-look-inside-microsofts-quincy-data-center/&config_settings_autoPlay=true&uxHighlightColour=0xff0000&embedPageUrl=http://news.bbc.co.uk/1/hi/technology/7694471.stm&config_settings_showPopoutButton=false&config=http://news.bbc.co.uk/player/emp/1_1_3_0_0_440234_441894_1/config/default.xml&widgetRevision=323797&config_settings_showShareButton=true&domId=emp_7694471&fmtjDocURI=/1/hi/technology/7694471.stm&legacyPlayerRevision=293203&config_plugin_fmtjLiveStats_pageType=eav6&config_settings_showPopoutButton=false&config_settings_showPopoutCta=false&config_settings_addReferrerToPlaylistRequest=true&config_settings_showFooter=true&config_settings_autoPlay=false”
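
To be clear about how that monster URL is put together, it’s just the player .swf address with the FlashVars string tacked on the end – something like this (the FlashVars value here is a placeholder; paste in the one from your video’s embed code):

# A sketch of how the long URL above is assembled: the BBC embedded player
# .swf plus the FlashVars string lifted from the video's embed code.
PLAYER = "http://news.bbc.co.uk/player/emp/2.10.7938_7967/9player.swf?embedPageUrl="

# Placeholder - replace with the FlashVars value from your video's embed code
flashvars = "config_plugin_fmtjLiveStats_pageType=eav1&playlist=..."

movie_url = PLAYER + flashvars
print(movie_url)   # use this as the Movie attribute of the Shockwave Flash object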

Where next for Microsoft Kinect?

Last week’s Fantastic Tavern gamification event included the opportunity to win an Xbox 360 and Kinect, and Microsoft’s Andrew Spooner (@andspo), a Creative Evangelist working with Xbox, gave an update on the latest Kinect developments.

Kinect is now (officially) the world’s fastest-selling consumer electronics device. That’s quite some achievement. I described how Kinect works in an earlier post (3 cameras including one RGB and two 3D motion depth sensors, 4 microphones for directional audio) but Andrew highlighted Audrey Penven’s “Dancing with Invisible Light” photographs demonstrating the infra-red light patterns that Kinect uses (Audrey’s image is shown here under a Creative Commons licence).

After some initial unauthorised hacking, Microsoft has now released an official software development kit for Kinect, allowing Windows developers to develop new uses for the sensor [although there isn’t yet a commercial model for its use as a PC peripheral]. Andrew demonstrated some simple skeletal tracking but some of the examples of Kinect hacks (pre-SDK) include:

There are some pretty cool projects there – some are just geeky, but others have real, practical uses. Now that there is an official SDK, Microsoft is suggesting that we’re only limited by our imagination… and that’s probably not too far from the truth!

“All you need is life experience”

[Alex Kipman, “Kinect brainchild”]

Andrew finished off his presentation with a video where Kinect becomes self-aware. Of course, it’s not real… but it is amusing… and it does make you think…