I use the Arduino IDE on a netbook running Ubuntu Linux (other development tools are available) and, a few weeks ago, I stumbled across an interesting-sounding hack to store sketches (Arduino code) in the cloud. The tool that makes this happen is David Vondle’s Upload and Retrieve Source project. There’s a good description in Dave’s blog post about the project that clears up the parts I struggled with (such as the location of the gistCredentials.txt file, used to store your GitHub credentials, which is found in the ~/.arduino folder on my system; and the fact that your username also needs to be included in a comment inside the sketch). Of course, you’ll need to create an account at GitHub first but, once you have, you don’t need to know anything about the various git commands used to manage source code.
The only downside I’ve found (aside from the plain text passwords) is that there is only one project for each Arduino – if you re-use the Arduino with another circuit, the new sketch will be stored in the same gist (although version control will let you retrieve old sketches, if you know which is which).
A few months ago, Klout suggested I knew something about SharePoint. I laughed at the time but the last few weeks have seen me spending far more time working on a couple of SharePoint-based systems than I would like, so maybe Klout is less of a measure of social influence than one of future gazing…
…anyway, back to the point.
SharePoint lists have the ability to include columns that are lookups from other lists. That’s really useful when building a system with several lists of related items, for example Technologies and Vendors. I use a lookup on Name from the Vendor list to populate the available entries for a Technology item’s Product Vendor column. That’s all fine but I also have Product Family, Product Name and Product Version columns. And then there is a Similar Products column – for which Product Name is not a clear enough lookup – I need a combination of Product Vendor, Product Family (where present), Product Name and Product Version – for example, a concatenation of Microsoft + Office + SharePoint Server + 2007.
Creating a calculated column (Full Product Name) from the “in-list” columns (Product Family, Product Name, Product Version) is straightforward but certain column types (Lookup, Person, Group) are not available to calculated columns. I believe this is because they are based on internal SharePoint IDs, rather than the information displayed.
The workaround is a simple workflow that copies the lookup value into a plain text column. The workflow (which I named Set Product Vendor) is a basic workflow with one step. That step has no conditions (i.e. it applies to all items) and an Update List Item action, which updates the Current Item to set my new column (Product Vendor Plain Text) to the Technologies:Product Vendor value (using the fx button to select it).
Now, when creating a new item the lookup is used to select an existing Product Vendor but the plain text version of the field can then be used for calculations, like the Full Product Name. The formula for my Full Product Name column is:
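(Shown here in outline – the column names match those above, and the exact text concatenated around the version component is illustrative rather than exactly what I used:)

```
=IF(ISBLANK([Product Family]),
  [Product Vendor Plain Text]&" ",
  [Product Vendor Plain Text]&" "&[Product Family]&" ")
&[Product Name]&" ("&[Product Version]&")"
```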
The logic here is that a Product Family value might be empty. If it is, then I take the Product Vendor Plain Text value and add a space, if it’s not I take the Product Vendor Plain Text Value and the Product Family, adding spaces to separate, then (regardless of the previous condition), I add the Product Name and the Product Version (with some additional text around it).
Unfortunately, whilst the workflow can be triggered manually for an item (which is what I did to test it), or on item creation/update, there is no way to run it against all of the existing items in a list. But, like so many things in SharePoint, this has a workaround: view all items in SharePoint datasheet view and make a minor change to each one (for example, adding an entry to a temporary column that can later be removed) – this updates each item, triggering the workflow.
The bonus of all of this is that I have now updated my Similar Products column to be a lookup on Full Product Name instead of Product Name and, because it works on the item IDs, the entries are now correct, with no data cleansing required on my part.
Initially perfect for young children (portable, cheap, small keyboard), the screen resolution (1024×576) on my sons’ netbook is becoming too restrictive and, with no Flash Player, some of the main websites they use (Club Penguin, CBeebies) don’t work on the iPad. Setting up an external monitor each time they want to use the computer is not really practical so I needed to find another option – for now, that option is recycling the laptop that my wife replaced a couple of years ago (and which has been in the loft ever since…)
The laptop in question is an IBM ThinkPad T40 – a little long in the tooth but with a 1.5GHz Pentium M and 2GB of RAM it runs OK, although hundreds of Windows XP updates have left it feeling a little sluggish. Vista and 7 are too heavyweight so I decided to install Ubuntu (although I might also give ChromeOS a shot).
Unfortunately, the Ubuntu 12.04 installer stalled, complaining about a lack of hardware support:
This kernel requires the following features not present on the CPU:
pae
Unable to boot – please use a kernel appropriate for your CPU
So much for Linux being a lightweight operating system, suitable for use on old hardware (in fairness, other distributions would have worked). It turns out that this is a known issue and there are a few workarounds – the one that worked for me was to use the non-PAE mini.iso installer (I wasn’t prompted to select the generic Linux kernel, but I did have to select the Ubuntu Desktop option).
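If you want to check in advance whether a machine’s CPU supports PAE, the flags line in /proc/cpuinfo will tell you. Here’s a quick sketch, using a sample flags line typical of a non-PAE Pentium M so the check can be seen working without the old hardware:

```shell
# Look for the 'pae' flag in a CPU flags line.
# On a real machine, just run: grep -w pae /proc/cpuinfo
# The sample line below is typical of a non-PAE Pentium M.
flags="fpu vme de pse tsc msr mce cx8 sep mtrr pge mca cmov pbe mmx fxsr sse sse2"
if echo "$flags" | grep -qw pae; then
  pae_status="PAE supported"
else
  pae_status="PAE not present"
fi
echo "$pae_status"
```

If grep finds nothing, you need the non-PAE installer.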
Last night I wrote a blog post about the “traffic lights” for my office. The trouble with the original setup was that the light was set to a particular state until I edited the code and uploaded a new version to the Arduino.
I was so excited about having something working that I hadn’t actually finished reading the Arduino Programming For Beginners: The Traffic Light Controller post that I referenced. Scroll down a bit further and James Bruce extends his traffic light sequence to simulate someone pressing a button to change the lights (e.g. to cross the road).
I worked on this to come up with something similar – using a variation on James’ wiring diagram (except with 2 LEDs rather than 3, and using pins 11 and 12 for my LEDs and 2 for the pushbutton switch), I now have a setup which waits for the button to be pressed, then sets the red LED, until the button is pressed again (when it goes green), etc.
My code is available on github but here’s the current version (just in case I lose that version as I get to grips with source control…):
/*
  Red/green LED indicator with pushbutton control
  Based on http://www.makeuseof.com/tag/arduino-traffic-light-controller/
*/

// Pins for coloured LEDs
int red = 11;
int green = 12;
int light = 0;
int button = 2; // Pushbutton on pin 2
int buttonValue = 0; // Button defaults to 0 (LOW)

void setup(){
  // Set up pins with LEDs as output devices and switch for input
  pinMode(red,OUTPUT);
  pinMode(green,OUTPUT);
  pinMode(button,INPUT);
}

void loop(){
  // Read the value of the pushbutton switch
  buttonValue = digitalRead(button);
  if (buttonValue == HIGH){
    changeLights();
    delay(15000); // Wait 15 seconds before reading again
  }
}

void changeLights(){
  // Change the lights based on current value: 0 is not set; 1 is green; 2 is red
  switch (light) {
    case 1:
      turnLightRed();
      break;
    case 2:
      turnLightGreen();
      break;
    default:
      turnLightRed();
  }
}

void turnLightGreen(){
  // Turn off the red and turn on the green
  digitalWrite(red,LOW);
  digitalWrite(green,HIGH);
  light = 1;
}

void turnLightRed(){
  // Turn off the green and turn on the red
  digitalWrite(green,LOW);
  digitalWrite(red,HIGH);
  light = 2;
}
Now I’m wondering how long it will be before the kids work out that they too can change the status (like pushing the button at a pedestrian crossing!)…
A list of items I’ve come across recently that I found potentially useful, interesting, or just plain funny:
schema.org – HTML microdata vocabulary for tagging web content, supported by the major search engines
Kasabi dataset archive – Archive of Kasabi’s datasets, as they were when the shutdown was announced – see http://blog.kasabi.com/2012/07/16/archive-of-datasets/ for more info (available until 30 July 2012)
“Debranding” a Nokia Lumia phone – I’ve not tried this (YMMV) but looks like a useful reference for anyone whose Lumia has been branded by their mobile operator (mine was bought SIM-only so isn’t) to get it back to a default Nokia state.
Scott’s solution uses something called a Busylight but a) I’m too tight to spend €49 on this and b) the geek in me thinks “surely I can rig up something using a few LEDs?”. One of the comments on Scott’s post led me to an open source project called RealStatus but that uses an expensive USB HID for the LEDs so doesn’t really move me much further forward…
I decided that I should use my Arduino instead… with the added bonus that involving the children in the project might get them “onboard” too… the trouble is that my electronics prototyping skills are still fairly rudimentary.
As it happens, that’s not a problem – I found an Arduino traffic light program for beginners and, as I don’t have a yellow LED right now, I adapted it to what I do have – a simple red/green status (my son and I had fun trying different resistors to adjust the brightness of the LEDs).
You can see the breadboard view here (generated with Fritzing – I’m still working out how to make this into a schematic) and below is my Arduino code (which is also available on github – although I might have worked on it a bit by the time you read this):
// Pins for coloured LEDs
int red = 12;
int green = 13;

void setup(){
  // Set up pins as output devices
  pinMode(red,OUTPUT);
  pinMode(green,OUTPUT);
}

void loop(){
  // Change the lights
  // turnLightRed();
  turnLightGreen();
}

void turnLightGreen(){
  // Turn off the red and turn on the green
  digitalWrite(red,LOW);
  digitalWrite(green,HIGH);
}

void turnLightRed(){
  // Turn off the green and turn on the red
  digitalWrite(green,LOW);
  digitalWrite(red,HIGH);
}
It’s pretty simple really – I just call the turnLightRed() or turnLightGreen() function according to whether I am ready to accept visitors. In itself, that’s a bit limited but the next step will be to work out how to send commands to the Arduino over USB (for some integration with my instant messaging client, perhaps) or even using a messaging service (Twitter?) and some network connectivity… more research required!
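As a first, untested sketch of that idea – assuming simple single-character commands arriving over the USB serial link, with the command characters being my own choice – something like this might work:

```
// Hypothetical next step: set the red/green status from single-character
// commands ('r' or 'g') received over the USB serial connection
int red = 12;
int green = 13;

void setup(){
  pinMode(red,OUTPUT);
  pinMode(green,OUTPUT);
  Serial.begin(9600); // The PC side must use the same rate
}

void loop(){
  if (Serial.available() > 0){
    char command = Serial.read();
    if (command == 'r'){
      // Turn off the green and turn on the red
      digitalWrite(green,LOW);
      digitalWrite(red,HIGH);
    } else if (command == 'g'){
      // Turn off the red and turn on the green
      digitalWrite(red,LOW);
      digitalWrite(green,HIGH);
    }
  }
}
```

On Linux, a command could then be sent from the host with something like echo -n r > /dev/ttyUSB0 (the device name will vary), or from the Arduino IDE’s serial monitor.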
Tonight’s Digital Surrey was, as usual, a huge success with a great speaker (Google’s @EdParsons) in a fantastic venue (Farnham Castle). Ed spoke about the future of geospatial data – about annotating our world to enhance the value that we can bring from mapping tools today but, before he spoke of the future, he took a look at how we got to where we are.
What is geospatial information? And how did we get to where we are today?
Geospatial information is very visual, which makes it powerful for telling stories, and one of the most famous and powerful images is that of the Earth viewed from space – the “blue marble”. This emotive image has been used many times but has only been personally witnessed by around 20 people, starting with the Apollo 8 crew, 250,000 miles from home, looking at their own planet. We see this image with tools like Google Earth, which allows us to explore the planet and look at humankind’s activities. Indeed, about 1 billion people use Google Maps/Google Earth every week – that’s about a third of the Internet population, roughly equivalent to Facebook and Twitter combined [just imagine how successful Google would be if they were all Google+ users…]. Using that metric, we can say that geospatial data is now pervasive – a huge shift over the last 10 years as it has become more accessible (although much of the technology has been around longer).
The annotated world is about going beyond the image and pulling out otherwise invisible information so, in a digital sense, it’s now possible to have a map at 1:1 scale or even beyond. For example, in Google Maps we can look at Street View and even see annotations of buildings. This can be augmented with further information (e.g. restrictions on the directions in which we can drive, details about local businesses) to provide actionable insight. Google also harvests information from the web to create place pages (something that could be considered ethically dubious, as it draws people away from the websites of the businesses involved) but it can also provide additional information from image recognition – for example, identifying the locations of public wastebins or adding details of parking restrictions (literally from text recognition on road signs). The key to the annotated web is collating and presenting information in a way that’s straightforward and easy to use.
Using other tools in the ecosystem, mobile applications can be used to easily review a business and post it via Google+ (so that it appears on the place page); or Google MapMaker may be used by local experts to add content to the map (subject to moderation – and the service is not currently available in the UK…).
So, that’s where we are today… we’re getting more and more content online, but what about the next 10 years?
A virtual (annotated) world
Google and others are building a virtual world in three dimensions. In the past, Google Earth pulled data from many sets (e.g. building models, terrain data, etc.) but future 3D images will be based on photographs (just as, apparently, Nokia has done for a while). We’ll also see 3D data being used to navigate inside buildings as well as outside. In one example, Google is working with John Lewis, who have recently installed Wi-Fi in their stores, to determine a user’s location and combine this with maps to navigate the store. The system is accurate to about 2-3 metres [and sounds similar to Tesco’s “in store sat-nav” trial] and apparently it’s also available in London railway stations, the British Museum, etc.
Ed made the point that the future is not driven by paper-based cartography, although there were plenty of issues taken with this in the Q&A later, highlighting that we still use ancient maps today, and that our digital archives are not likely to last that long.
Moving on, Ed highlighted that Google now generates map tiles on the fly (it used to take 6 weeks to rebuild the map) and new presentation technologies allow for client-side rendering of buildings – for example, St Paul’s Cathedral in London. With services such as Google Now (on Android), contextual information may be provided, driven by location and personality.
With Google’s Project Glass, that becomes even more immersive with augmented reality driven by the annotated world:
Although someone also mentioned to me the parody which also raises some good points:
Seriously, Project Glass makes Apple’s Siri look way behind the curve – and, for those who consider the glasses to be a little uncool, I would expect them to become much more “normal” over time – built into a normal pair of shades, or even into prescription glasses… certainly no sillier than those Bluetooth earpieces that we used to use!
Of course, there are privacy implications to overcome but, consider what people share today on Facebook (or wherever) – people will share information when they see value in it.
Big data, crowdsourcing 2.0 and linked data
At this point, Ed’s presentation moved on to talk about big data. I’ve spent most of this week co-writing a book on this topic (I’ll post a link when it’s published) and nearly flipped when I heard the normal big data marketing rhetoric (the 3 Vs) being churned out. Putting aside the hype, Google should know quite a bit about big data (Google’s search engine is a great example and the company has done a lot of work in this area) and the annotated world has to address many of the big data challenges including:
Data integration.
Data transformation.
Near-real-time analysis using rules to process data and take appropriate action (complex event processing).
Semantic analysis.
Historical analysis.
Search.
Data storage.
Visualisation.
Data access interfaces.
Moving back to Ed’s talk, what he refers to as “Crowdsourcing 2.0” is certainly an interesting concept. Citing Vint Cerf (Internet pioneer and Google employee), Ed said that there are an estimated 35bn devices connected to the Internet – and our smartphones are great examples, crammed full of sensors. These sensors can be used to provide real-time information for the annotated world: average journey times based on GPS data, for example; or even weather data if future smartphones were to contain a barometer.
Linked data is another topic worthy of note which, at its most fundamental level, is about making the web more interconnected. There’s been a lot of work done on ontologies, categorising content, etc. [Plug: I co-wrote a white paper on the topic earlier this year] but Google, Yahoo, Microsoft and others are supporting schema.org, a collection of microdata tags that websites can use to mark up content in a way that’s recognised by the major search providers. For example, a tag like <span itemprop="addressCountry">Spain</span> might be used to indicate that Spain is a country, with further tags to show that Barcelona is a city, and that the Nou Camp is a place to visit.
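To illustrate, a fragment of markup like this (the example itself is my own, but the property names come from the schema.org vocabulary) tells a search engine that the text describes a place, in a city, in a country:

```html
<div itemscope itemtype="http://schema.org/Place">
  <span itemprop="name">Nou Camp</span>
  <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
    <span itemprop="addressLocality">Barcelona</span>,
    <span itemprop="addressCountry">Spain</span>
  </div>
</div>
```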
Ed’s final thoughts
Summing up, Ed reiterated that paper maps are dead and will be replaced with more personalised information (of which location is a component that provides context). However, if we want the advantages of this, we need to share information – with organisations that we trust and where we know what will happen with that information.
Mark’s final thoughts
The annotated world is exciting and has stacks of potential if we can overcome one critical stumbling block that Ed highlighted (and I tweeted):
Unfortunately, there are many who will not trust Google – and I find it interesting that Google is an advocate of consuming open data to add value to its products but I see very little being put back in terms of data sets for others to use. Google’s argument is that it spent a lot of money gathering and processing that data; however it could also be argued that Google gets a lot for free and maybe there is a greater benefit to society in freely sharing that information in a non-proprietary format (rather than relying on the use of Google tools). There are also ethical concerns with Google’s gathering of Wi-Fi data, scraping website content and other such issues but I expect to see a “happy medium” found, somewhere between “Don’t Be Evil” and “But we are a business after all”…
Thanks as always to everyone involved in arranging and hosting tonight’s event – and to Ed Parsons for an enlightening talk!
Those who were watching my Twitter stream last Friday and Saturday will have followed my saga with Apple and their apparent disregard for customer service or the law when my iPad developed a fault… “Apple?” you say, “but aren’t they renowned for their fantastic customer service?”. Well, they do have a reputation but my experience suggests it’s not deserved, at least not here in the UK…
I waited a few days before writing this post as anyone who criticises Apple is laid open to a barrage of abuse. Even so, I thought it was appropriate to share – and, by “cooling off”, I’m hoping to be objective.
What’s the problem?
A few months ago, I noticed a greenish glow on a small portion of the screen on my iPad, which I purchased in July 2010. It was particularly visible on dark areas, when the brightness is turned up (e.g. when using the iPad in a dark room). So, I booked an appointment at the Genius Bar in the Milton Keynes Apple Store to see what could be done to repair/replace the defective screen. I arrived on time and, whilst it was certainly busy, there were lots of blue t-shirts doing what, to a bystander, appeared to be very little. I’m sure they all had their own jobs but, after waiting 20 minutes past my appointment, I was seen, not by one of the staff who were at the Genius Bar, but by the guy who had been performing some kind of co-ordination role on the shop floor until that point. He took my iPad away, then came back to say that it was over a year old and so out of warranty – repair wasn’t an option and a refurbished replacement would cost £199. I was given the option of speaking to a Manager and I did, but he was equally unhelpful – and apparently unwilling to move an inch, even when I pointed out that the UK’s Sale of Goods Act gives me some rights here…
More support required
I went home and found a statement on the Apple website about Apple Products and EU Statutory Warranty, which directed me to call AppleCare. I opened a support case and, the next morning, I spoke to an Apple representative who listened, logged the call details, but ultimately advised me to contact the point of purchase (the Apple Store in Solihull). Solihull is an hour’s drive away so I called the store, who said I could visit any Apple Retail location and I headed to Milton Keynes, where I had made a Genius Bar appointment in anticipation.
Five minutes before my appointment, AppleCare called and said they had spoken to the store and could handle a “consumer law” complaint on my behalf, and that I didn’t need to go to the store. Ten minutes after that, they called again and said they couldn’t after all, and 15 minutes later they said that EU Consumer Law doesn’t apply in the UK (it doesn’t – but the UK Sale of Goods Act does!) and that I should contact the local Trading Standards department. By then I was at the store again, where I spent the next couple of hours (including almost an hour waiting to be seen, as AppleCare’s previous advice meant I’d missed my Genius Bar appointment and was on standby), eventually being convinced to part with money to replace my iPad (more on that in a moment).
So how is this Apple’s problem?
Those in the US and elsewhere may well be thinking, “so you wanted Apple to repair or replace a product that was out of warranty – are you for real?” but in Europe, consumer law is on our side.
The UK hasn’t adopted the EU regulation (which provides a minimum two-year guarantee) because our own laws provide even better cover – the Sale of Goods Act gives consumers up to six years to pursue claims. Although UK law does not specify how long a product should last (all products and manufacturers are different), a product is considered faulty if it stops working properly in less time than a reasonable person would expect it to last. A screen defect within two years does not sound like something that Apple (or any reasonable person) would expect, so I believe that Apple should have offered me a repair, or a replacement with the same or a similar product, at no cost.
Instead, Apple tried to pass the buck. Initially I was batted back and forth between AppleCare (Apple’s support channel) and Apple Retail (who sold me the iPad). At one point I was advised to contact the actual store where my iPad was purchased (not my local store). Finally, Apple Retail attempted to pass me on to my local Trading Standards department and when I said that the problem was between Apple and myself, not with Milton Keynes Council (the Trading Standards authority in this case), the store manager started talking about me pursuing action in the small claims court, in a “David and Goliath” fashion, playing the part of “the small man” against the big company (and yes, those are quotes!). The arrogance of Apple’s retail management and of the company as a whole, which seems to put itself above the law is, frankly, astounding.
A compromise?
Eventually, one of the Managers in the Apple Store in Milton Keynes offered me a replacement iPad but it cost me £69 – a discount from the £199 originally quoted to the price that I would have paid for AppleCare, if I had taken it at the time of purchase. I didn’t take AppleCare because consumer law covers me against product defects, my home insurance covers me against accidental damage, and the Internet covers me against technical support. In short, I shouldn’t need to buy an extended warranty (AppleCare), and I’m still unhappy at having paid for something that should have been free of charge, if only Apple was prepared to accept the rule of law.
“Apple set themselves up as the tech company that is way ahead of everyone else in the industry, but their after sales service is worse than mediocre. I used to be a fanboy.”
I think that just about sums it up!
I’m still tempted to contact the Trading Standards department at Milton Keynes Council – and maybe I will sue Apple for costs but, to be honest, my time is worth more than the £69 I paid for the replacement iPad and I’ve already spent several hours speaking to AppleCare, travelling back and forth to my local Apple Store, or hanging around waiting to be seen. Do I really need that hassle? No, I don’t, but there is a principle at stake here – the world’s largest company appears to be ignoring the rule of law – so maybe I should take this further. If I do, I’m sure you’ll read about it here…
Please don’t misunderstand me – nine times out of ten, clip-art, over-use of slide animations/transitions and sound effects in PowerPoint presentations are naff. No – worse than that – they are often completely unnecessary and, in some ways, remind me of the early days of desktop publishing, when it seemed to be necessary to use 20 fonts on a single page… just because they were there.
Thankfully, these days (most) people have reined themselves in and seem to steer clear of the “embellishments”, maybe using a single transition style throughout a whole deck and the occasional build, perhaps with the odd animation – and some decent stock images. Even so, I recently found myself wanting to use sound in a PowerPoint animation.
I could work out how to add a sound to the slide transition but there was nothing obvious for individual animation steps. After some googling, it turns out that the trick is to select the barely-noticeable dropdown arrow on a custom animation and then click Effect Options, after which the option to enhance the animation with sound becomes visible. I was using PowerPoint 2007 – it might be different with other versions but, be warned, with great power comes great responsibility. Or something like that.
Long-time readers of my blog will know that I used to manage the Fujitsu UK and Ireland CTO Blog (which we’ve recently closed, but have left the content in place for posterity) and I’m still getting the comment notifications (mostly spam). Many of the posts have HTTP 301 redirects to either mine or David Smith‘s blogs (I found a great WordPress plugin for that – Redirection) but, for those that remain, I wanted to turn off comments. Doing this individually for each post seemed unnecessarily clunky but there is, apparently, no way to do this from the WordPress user interface (with database access it would have been straightforward but I don’t have that level of access).
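For reference, with database access a single SQL update would have done it – this sketch assumes the default wp_posts table name (a multi-site install uses per-blog prefixes, e.g. wp_2_posts):

```sql
-- Close comments and pings on all published posts
UPDATE wp_posts
SET comment_status = 'closed',
    ping_status = 'closed'
WHERE post_status = 'publish';
```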
There is a plug-in that globally disables all comments – named, rather aptly, Disable Comments – except that the blog is part of a multi-site (network) install and I’m not sure what the broader impact would be…
No bother, I found a workaround – simply set all of the posts to close comments after a certain number of days. The theme that someone has applied to the site (since I stopped working with it) doesn’t seem to respect that, and still leaves a comment button visible, but anyone with a well-developed theme should be OK…