Tag Archives: Google

Technology

Google Reader retires next week – have you switched to Feedly?

Next week, Google is set to retire Google Reader. When I wrote this post (back in March), almost 75% of the subscribers to my feed (already dwindling, partly as a result of Google algorithm changes that seem to penalise independent views in favour of branded content) came via Google Feedfetcher (used by Reader to grab RSS or Atom feeds), suggesting that lots of you use Google Reader.

Hopefully you’ve all found a way to move forward but, if you haven’t, I recommend checking out Feedly.

If you migrate before Google turns off Reader, it’s a one-click migration (just log into Feedly with your Google account) – I did it weeks ago and haven’t looked back since!

Here are a couple of links that might be useful:

Now I need to look at moving my site’s RSS away from Feedburner, before Google kills that off too (I’m sure it’s only a matter of time…)

Technology

The annotated world – the future of geospatial technology? (@EdParsons at #DigitalSurrey)

Tonight’s Digital Surrey was, as usual, a huge success with a great speaker (Google’s @EdParsons) in a fantastic venue (Farnham Castle).  Ed spoke about the future of geospatial data – about annotating our world to enhance the value that we can bring from mapping tools today but, before he spoke of the future, he took a look at how we got to where we are.

What is geospatial information? And how did we get to where we are today?

Geospatial information is very visual, which makes it powerful for telling stories, and one of the most famous and powerful images is that of the Earth viewed from space – the “blue marble”. This emotive image has been used many times but has only been personally witnessed by around 20 people, starting with the Apollo 8 crew, 250,000 miles from home, looking at their own planet. We see this image with tools like Google Earth, which allows us to explore the planet and look at humankind’s activities. Indeed, about 1 billion people use Google Maps/Google Earth every week – that’s about a third of the Internet population, roughly equivalent to Facebook and Twitter combined [just imagine how successful Google would be if they were all Google+ users...]. Using that metric, we can say that geospatial data is now pervasive – a huge shift over the last 10 years as it has become more accessible (although much of the technology has been around longer).

The annotated world is about going beyond the image and pulling out otherwise invisible information, so, in a digital sense, it’s now possible to have a map of 1:1 scale or even beyond. For example, in Google Maps we can look at StreetView and even see annotations of buildings. This can be augmented with further information (e.g. restrictions in the directions in which we can drive, details about local businesses) to provide actionable insight. Google also harvests information from the web to create place pages (something that could be considered ethically dubious, as it draws people away from the websites of the businesses involved) but it can also provide additional information from image recognition – for example, identifying the locations of public wastebins or adding details of parking restrictions (literally from text recognition on road signs). The key to the annotated web is collating and presenting information in a way that’s straightforward and easy to use.

Using other tools in the ecosystem, mobile applications can be used to easily review a business and post it via Google+ (so that it appears on the place page); or Google MapMaker may be used by local experts to add content to the map (subject to moderation – and the service is not currently available in the UK…).

So, that’s where we are today… we’re getting more and more content online, but what about the next 10 years?

A virtual (annotated) world

Google and others are building a virtual world in three dimensions. In the past, Google Earth pulled data from many sets (e.g. building models, terrain data, etc.) but future 3D images will be based on photographs (just as, apparently, Nokia have done for a while). We’ll also see 3D data being used to navigate inside buildings as well as outside. In one example, Google is working with John Lewis, who have recently installed Wi-Fi in their stores, using this to determine a user’s location and combining it with maps to navigate the store. The system is accurate to about 2-3 metres [and sounds similar to Tesco's "in store sat-nav" trial] and apparently it’s also available in London railway stations, the British Museum, etc.
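As a rough illustration of how Wi-Fi-based indoor positioning can work, here’s a toy “fingerprinting” sketch in Python – one common approach is to compare observed signal strengths against a pre-surveyed radio map. Everything here (the access point names, RSSI values and surveyed points) is invented for the example; real systems are considerably more sophisticated:

```python
import math

# Pre-surveyed "radio map": for a few known (x, y) points in the building,
# the expected signal strength (RSSI, in dBm) from each access point.
radio_map = {
    (0.0, 0.0):  {"ap-1": -40, "ap-2": -70},
    (10.0, 0.0): {"ap-1": -70, "ap-2": -42},
    (5.0, 8.0):  {"ap-1": -55, "ap-2": -58},
}

def locate(observed):
    """Return the surveyed point whose fingerprint is closest in RSSI space."""
    def distance(fingerprint):
        return math.sqrt(sum(
            (fingerprint[ap] - rssi) ** 2
            for ap, rssi in observed.items() if ap in fingerprint))
    return min(radio_map, key=lambda p: distance(radio_map[p]))

# A device seeing a strong ap-1 and weak ap-2 is nearest the first point
print(locate({"ap-1": -43, "ap-2": -68}))  # (0.0, 0.0)
```

Accuracy of a few metres, as quoted above, is roughly what this class of technique achieves with a dense enough survey.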

Father Ted would not have got lost in the lingerie department if he had Google's mapping in @! says @ #DigitalSurrey
@markwilsonit
Mark Wilson

Ed made the point that the future is not driven by paper-based cartography, although several attendees took issue with this in the Q&A later, highlighting that we still use ancient maps today and that our digital archives are unlikely to last that long.

Moving on, Ed highlighted that Google now generates map tiles on the fly (it used to take 6 weeks to rebuild the map) and new presentation technologies allow for client-side rendering of buildings – for example, St Paul’s Cathedral in London. With services such as Google Now (on Android), contextual information may be provided, driven by location and personal preferences.

With Google’s Project Glass, that becomes even more immersive with augmented reality driven by the annotated world:

Although someone also mentioned to me the parody, which raises some good points:

Seriously, Project Glass makes Apple’s Siri look way behind the curve – and for those who consider the glasses to be a little uncool, I would expect them to become much more “normal” over time – built into a normal pair of shades, or even into prescription glasses… certainly no sillier than those Bluetooth earpieces that we used to use!

Of course, there are privacy implications to overcome, but consider what people share today on Facebook (or wherever) – people will share information when they see value in it.

Big data, crowdsourcing 2.0 and linked data

At this point, Ed’s presentation moved on to talk about big data. I’ve spent most of this week co-writing a book on this topic (I’ll post a link when it’s published) and nearly flipped when I heard the normal big data marketing rhetoric (the 3 Vs)  being churned out. Putting aside the hype, Google should know quite a bit about big data (Google’s search engine is a great example and the company has done a lot of work in this area) and the annotated world has to address many of the big data challenges including:

  • Data integration.
  • Data transformation.
  • Near-real-time analysis using rules to process data and take appropriate action (complex event processing).
  • Semantic analysis.
  • Historical analysis.
  • Search.
  • Data storage.
  • Visualisation.
  • Data access interfaces.

Moving back to Ed’s talk, what he refers to as “Crowdsourcing 2.0” is certainly an interesting concept. Citing Vint Cerf (Internet pioneer and Google employee), Ed said that there are an estimated 35bn devices connected to the Internet – and our smartphones are great examples, crammed full of sensors. These sensors can be used to provide real-time information for the annotated world: average journey times based on GPS data, for example; or even weather data if future smartphones were to contain a barometer.

Linked data is another topic worthy of note, which, at its most fundamental level, is about making the web more interconnected. There’s been a lot of work done on ontologies, categorising content, etc. [Plug: I co-wrote a white paper on the topic earlier this year] but Google, Yahoo, Microsoft and others are supporting schema.org, a collection of microdata tags that websites can use to mark up content in a way that’s recognised by major search providers. For example, a tag like <span itemprop="addressCountry">Spain</span> might be used to indicate that Spain is a country, with further tags to show that Barcelona is a city and that the Nou Camp is a place to visit.
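To illustrate why this kind of markup is useful, here’s a minimal Python sketch (using only the standard library’s html.parser, not any official schema.org tooling) that pulls itemprop values out of a snippet like the one above – roughly what a crawler does at much larger scale:

```python
from html.parser import HTMLParser

class ItempropExtractor(HTMLParser):
    """Collects the text content of any tag carrying an itemprop attribute."""
    def __init__(self):
        super().__init__()
        self._current = None   # itemprop name we're currently inside, if any
        self.items = {}        # itemprop name -> text value

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "itemprop" in attrs:
            self._current = attrs["itemprop"]

    def handle_data(self, data):
        if self._current:
            self.items[self._current] = data.strip()

    def handle_endtag(self, tag):
        self._current = None

html = ('<p>Visit <span itemprop="addressCountry">Spain</span>, '
        'e.g. <span itemprop="addressLocality">Barcelona</span>.</p>')
parser = ItempropExtractor()
parser.feed(html)
print(parser.items)  # {'addressCountry': 'Spain', 'addressLocality': 'Barcelona'}
```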

Ed’s final thoughts

Summing up, Ed reiterated that paper maps are dead and that they will be replaced with more personalised information (of which, location is a component that provides content). However, if we want the advantages of this, we need to share information – with those organisations that we trust and where we know what will happen with that info.

Mark’s final thoughts

The annotated world is exciting and has stacks of potential if we can overcome one critical stumbling block that Ed highlighted (and I tweeted):

In order to create a more useful, personal, contextual web, organisations need to gain our trust to share our information #DigitalSurrey
@markwilsonit
Mark Wilson

Unfortunately, there are many who will not trust Google – and I find it interesting that Google is an advocate of consuming open data to add value to its products but I see very little being put back in terms of data sets for others to use. Google’s argument is that it spent a lot of money gathering and processing that data; however it could also be argued that Google gets a lot for free and maybe there is a greater benefit to society in freely sharing that information in a non-proprietary format (rather than relying on the use of Google tools). There are also ethical concerns with Google’s gathering of Wi-Fi data, scraping website content and other such issues but I expect to see a “happy medium” found, somewhere between “Don’t Be Evil” and “But we are a business after all”…

Thanks as always to everyone involved in arranging and hosting tonight’s event – and to Ed Parsons for an enlightening talk!

Technology

Useful to know: Google Chrome has its own task manager

Earlier today, I was wondering why I was seeing a “missing plug-in” message in Google Chrome on a number of websites that I regularly view. I loaded the same websites in Internet Explorer and they worked OK, so something had obviously gone screwy inside Chrome. I could have guessed – it was Flash, although normally I get a yellow bar to tell me that has stopped working.

I rebooted my PC yesterday, so I don’t plan to do that again for another couple of weeks (until the memory leak that one of my apps has gets so bad that I’m forced to…) but I googled missing plug-in google chrome to see what comes up. As it happens, Chrome has a task manager built in (press shift and escape).  After ending the Shockwave Flash process, I refreshed the offending page(s) and everything worked as it should.

By then I was intrigued by the stats for nerds link which takes me to chrome://memory-redirect/ – an internal page that contains a breakdown of activity by process (including which tabs are managed by which processes) – which would have been handy to know about when Chrome had gobbled up a good chunk of my RAM earlier this week:

Any tips for restricting Chrome's memory usage? Running ~60-70% CPU and ~80-85% RAM on a 4GB Windows x64 system: http://t.co/dDMehXbN
@markwilsonit
Mark Wilson

If anyone knows a similar memory management function for Internet Explorer, I’d be pleased to hear it as the relationship between tabs and processes seems to be a black art (and it may help to chase down problematic tabs) – I’ve tried Process Explorer and Windows Task Manager in the past, but it would be useful IE functionality…

Technology

Designing a private cloud infrastructure

A couple of months ago, Facebook released a whole load of information about its servers and datacentres in a programme it calls the Open Compute Project. At around about the same time, I was sitting in a presentation at Microsoft, where I was introduced to some of the concepts behind their datacentres.  These are not small operations – Facebook’s platform currently serves around 600 million users and Microsoft’s various cloud properties account for a good chunk of the Internet, with the Windows Azure appliance concept under development for partners including Dell, HP, Fujitsu and eBay.

It’s been a few years since I was involved in any datacentre operations and it’s interesting to hear how times have changed. Whereas I knew about redundant uninterruptible power sources and rack-optimised servers, the model is now about containers of redundant servers and the unit of scale has shifted.  An appliance used to be a 1U (pizza box) server with a dedicated purpose but these days it’s a shipping container full of equipment!

There’s also been a shift from keeping the lights on at all costs, towards efficiency. Hardly surprising, given that the IT industry now accounts for around 3% of the world’s carbon emissions and we need to reduce the environmental impact.  Google’s datacentre design best practices are all concerned with efficiency: measuring power usage effectiveness; measuring and managing airflow; running warmer datacentres; using “free” cooling; and optimising power distribution.

So how do Microsoft (and, presumably others like Amazon too) design their datacentres? And how can we learn from them when developing our own private cloud operations?

Some of the fundamental principles include:

  1. Perception of infinite capacity.
  2. Perception of continuous availability.
  3. Drive predictability.
  4. Taking a service provider approach to delivering infrastructure.
  5. Resilience over redundancy mindset.
  6. Minimising human involvement.
  7. Optimising resource usage.
  8. Incentivising the desired resource consumption behaviour.

In addition, the following concepts need to be adopted to support the fundamental principles:

  • Cost transparency.
  • Homogenisation of physical infrastructure (aggressive standardisation).
  • Pooling compute resource.
  • Fabric management.
  • Consumption-based pricing.
  • Virtualised infrastructure.
  • Service classification.
  • Holistic approach to availability.
  • Computer resource decay.
  • Elastic infrastructure.
  • Partitioning of shared services.

In short, provisioning the private cloud is about taking the same architectural patterns that Microsoft, Amazon, et al use for the public cloud and implementing them inside your own datacentre(s) – thinking service, not server, to develop an internal infrastructure as a service (IaaS) proposition.

I won’t expand on all of the concepts here (many are self-explanatory), but some of the key ones are:

  • Create a fabric with resource pools of compute, storage and network, aggregated into logical building blocks.
  • Introduce predictability by defining units of scale and planning activity based on predictable actions (e.g. certain rates of growth).
  • Design across fault domains – understand what tends to fail first (e.g. the power in a rack) and make sure that services span these fault domains.
  • Plan upgrade domains (think about how to upgrade services and move between versions so service levels can be maintained as new infrastructure is rolled out).
  • Consider resource decay – what happens when things break?  Think about component failure in terms of service delivery and design for that. In the same way that a hard disk has a number of spare sectors that are used when others are marked bad (and eventually too many fail, so the disk is replaced), take a unit of infrastructure and leave faulty components in place (but disabled) until a threshold is crossed, after which the unit is considered faulty and is replaced or refurbished.
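The resource decay idea can be sketched in a few lines of Python. This is purely illustrative (the class, names and 20% threshold are mine, not from any Microsoft guidance): failed components are disabled in place, service continues from the rest, and the unit is only flagged for replacement once decay crosses the threshold:

```python
FAULT_THRESHOLD = 0.2  # retire the unit once 20% of its components have failed

class ComputeUnit:
    def __init__(self, component_ids):
        self.components = {cid: "healthy" for cid in component_ids}

    def mark_failed(self, cid):
        """Disable a faulty component in place but keep serving from the rest."""
        self.components[cid] = "disabled"

    @property
    def decay(self):
        failed = sum(1 for s in self.components.values() if s == "disabled")
        return failed / len(self.components)

    def needs_replacement(self):
        return self.decay >= FAULT_THRESHOLD

unit = ComputeUnit([f"node-{i}" for i in range(10)])
unit.mark_failed("node-3")
print(unit.needs_replacement())  # False: 10% decay, unit stays in service
unit.mark_failed("node-7")
print(unit.needs_replacement())  # True: 20% decay crosses the threshold
```

The design choice mirrors the bad-sector analogy above: individual failures are absorbed silently, and human intervention only happens at the unit level.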

A smaller company, with a small datacentre may still think in terms of server components – larger organisations may be dealing with shipping containers.  Regardless of the size of the operation, the key to success is thinking in terms of services, not servers; and designing public cloud principles into private cloud implementations.

Technology

Office 365 message filtering (and a horrible little bug that leaves email addresses exposed…)

One of my concerns with my recent switch from Google Apps Mail to Microsoft Office 365 was about spam email. You see, I get none.  Well, when I say I get none, I get plenty but it’s all trapped for me. With no effort on my part. Only a handful of missed spam messages in the last 2 or 3 years and almost as few false positives too.

I’ve had the same email address for about 12 years now (I think), and it’s been used all over the web. Some of my friends are more particular though – and, perhaps understandably, were annoyed when I accidentally emailed around 40 people with e-mail addresses visible in the To: field today. Except that I hadn’t intended to.

I think I’ve found a bug in Office 365’s Outlook Web App (at least, I hope it’s not closed as “by design”, assuming I find out how to file a bug report). If I send to a distribution group, it automatically expands the addresses and displays them to all recipients. That’s bad.

The annoying thing is that, previously, I had been BCCing the recipients. I have a feeling that at least one organisation was rejecting my mail because there was nothing in the To: field (although it didn’t like Google’s propensity to send mail from one domain “on behalf of” another address either), so I thought I’d use a list instead and the recipients would see the list name, rather than the actual email addresses. Thankfully it was only sent to my closest friends and family (although that’s not really the point).

So, back to spam and Office 365 – does it live up to my previous experience with Google Apps Mail? Actually, yes I think it does. I’ve had to teach it a couple of safe senders and block a couple of others, but it really was just a handful and it’s settled down nicely.

All of Microsoft’s cloud-based e-mail services use Forefront Online Protection for Exchange. Enterprise administrators have some additional functionality (adapting SCL thresholds, etc.) but things seem to be working pretty well on my small business account too. Digging around in the various servers that the mail passes through reveals hosts at bigfish.com and frontbridge.com – Frontbridge was an acquisition that has become part of Exchange Hosted Services (and it started out as Bigfish Communications) – so the technology is established, and another Microsoft property (Hotmail) is a pretty good test bed to find and filter the world’s spam.

Uncategorized

Attempting to track RSS subscribers on a WordPress blog

As well as my own website (which has precious little content these days due to my current workload), I also manage the Fujitsu UK and Ireland CTO Blog. Part of that role includes keeping an eye on a number of metrics to make sure that people are actually interested in what we have to say (thankfully, they seem to be…). Recently though, I realised that, whilst I’m tracking visitors to the blog, I’m missing hits on the RSS feed (because it’s not actually a page with the tracking script included) – and that’s a problem.

There are ways around this (I use Google Feedburner on my own blog, or it’s possible to put a dummy page with a meta refresh in front of the feed to pick up some metrics) but they have their own issues (for example, the meta refresh method breaks autodiscovery for some RSS readers) and will only help with new subscribers going forwards, not with my legacy issue of how many subscribers I have right now.

There is another approach though: using a popular web-based RSS subscription service like Google Reader to see how many subscribers it tracks for our feed (the same metrics are available from Google’s Webmaster Tools).  The trouble is, that’s not all of the subscribers (for example, a good chunk of people use Outlook to manage their feeds, or other third-party RSS readers). If I use my own blog as an example, Google Reader shows that I have 247 subscribers but Feedburner says I have 855.  Those subscribers come from all manner of feed readers and aggregators, email subscription services and web browsers (Firefox accounts for almost 20% of them) so it’s clear that I’m not getting the whole picture from Google’s statistics. 

Google Reader Subscribers

Google Feedburner Subscribers

Does anyone have any better ideas for getting some subscriber stats for RSS feeds on a WordPress blog using Google Analytics? Or maybe from the server logs?
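For what it’s worth, one partial answer lies in the server logs: Google’s Feedfetcher (and some other aggregators) report a subscriber count inside their user-agent string, so a pass over the access log can at least recover those figures. The sketch below is illustrative only – the log lines are made up, and the regex assumes the “N subscribers” convention, which not every reader follows:

```python
import re

# Feedfetcher-style user agents typically embed a subscriber count, e.g.
# "Feedfetcher-Google; (+http://www.google.com/feedfetcher.html; 3 subscribers; ...)"
SUBSCRIBERS = re.compile(r"(\d+)\s+subscribers?")

def max_reported_subscribers(log_lines):
    """Return the highest subscriber count reported by any aggregator UA."""
    best = 0
    for line in log_lines:
        m = SUBSCRIBERS.search(line)
        if m:
            best = max(best, int(m.group(1)))
    return best

sample = [
    '1.2.3.4 - - "GET /feed HTTP/1.1" 200 "Feedfetcher-Google; (3 subscribers)"',
    '5.6.7.8 - - "GET /feed HTTP/1.1" 200 "Mozilla/5.0 (Windows NT 6.1)"',
    '9.9.9.9 - - "GET /feed HTTP/1.1" 200 "SomeAggregator/1.0; 247 subscribers"',
]
print(max_reported_subscribers(sample))  # 247
```

It still misses direct (Outlook, browser) subscribers, of course, so it only narrows the gap rather than closing it.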

Uncategorized

First signs of a tablet strategy at Microsoft

I’ve been pretty critical of Microsoft’s tablet strategy. As recently as last October they didn’t appear to have one and Steve Ballmer publicly ridiculed customers using a competitor’s devices. Whenever I mentioned this, the ‘softies would switch into sales mode and say something like “oh but we’re the software company, we don’t make devices”, to which I’d point out that they do have a mobile operating system (Windows Phone 7), and an application store, but that they don’t allow OEMs to use it on a tablet form factor.

But it seems that things are changing in Redmond. Or in Reading at least.

Ballmer got a kicking from the board (deservedly so) for his inability to develop Microsoft’s share of the mobile market and it seems that Redmond is open to ideas from elsewhere in the company to develop a compelling Windows-based tablet offering. A few days ago, I got the chance to sit down with one of the Slate vTeam in the UK subsidiary to discuss Microsoft’s tablet (they prefer “slate”) strategy and it seems that there is some progress being made.

Whilst Windows 8 (or Windows vNext as Microsoft prefer to refer to it) was not up for discussion, Microsoft’s Jamie Burgess was happy to discuss the work that Microsoft is doing around slates that run Windows 7.  Ballmer alluded to work with OEMs in his “big buttons” speech and there are a number of devices hitting the market now which attempt to overcome the limitations of Microsoft’s platform. The biggest limitation is the poor touch interface provided by the operating system itself (with issues that are far more fundamental than just “big buttons”).  There seems little doubt that the next version of Windows will have better slate support but we won’t see that until at least 2012 – and what about the current crop of Windows 7-based devices?

[At this point I need to declare a potential conflict of interest – I work for Fujitsu, although this is my personal blog and nothing written here should be interpreted as representing the views of my employer – for what it’s worth, I have been just as critical of Windows slates when talking to Fujitsu Product Managers but, based on a recent demonstration of a pre-production model, I do actually believe that they have done a good job with the Stylistic Q550, especially when considering the current state of Windows Touch]

Need to do “something”

Microsoft has realised that doing nothing about slates does not win market share – in fact it loses mind share – every iPad sold helps Apple to grow because people start using iTunes, then they buy into other parts of the Apple ecosystem (maybe a Mac), etc.

Noting that every enterprise user is also a consumer, Microsoft believes enterprise slates will sneak back into the home, rather than consumer devices becoming commonplace in the enterprise. That sounds like marketing spin to me, but they do have a point that there is a big difference between a CIO who wants to use his iPad at work and that same CIO saying that he wants 50,000 of those devices deployed across the organisation.

Maybe it was because I was talking to the UK subsidiary, rather than “Corp” but Microsoft actually seems to acknowledge that Apple is currently leading on tablet adoption. Given their current position in the market, Microsoft’s strategy is to leverage its strength from the PC marketplace – the partner ecosystem.  Jamie Burgess told me how they are working to bring together Independent Software Vendors (ISVs), System Integrators (SIs) and device manufacturers (OEMs) to create “great applications” running on “great devices” and deployed by “great partners”, comparing this with the relatively low enterprise maturity of Apple and their resellers.

Addressing enterprise readiness

I could write a whole post on the issues that Google has (even if they don’t yet know it) with Android: device proliferation is a wonderful thing, until you have to code for the lowest common denominator (just ask Microsoft, with Windows Mobile – and, to some extent with Windows too!) and Google is now under attack for its lack of openness in an open-source product.  But the big issue for the enterprise is security – and I have to agree with Microsoft on this: neither Apple nor Google seem to have got that covered. Here are some examples:

  • Encryption is only as strong as its weakest link – root access to devices (such as jailbroken iPhones) is pretty significant (6 minutes to break into an encrypted device) and Apple has shown time and time again that it is unable to address this, whilst Google sees this level of access to Android devices as part of its success model.
  • And what if I lose my mobile device? USB attached drives provide a great analogy in that encryption (e.g. Microsoft BitLocker)  is a great insurance policy – you don’t think you really need it until a device goes missing and you realise that no-one can get into it anyway… then you breathe a big sigh of relief.

After security we need to think about management and support:

  • Android 3 and iOS have limited support for device lock down whilst a Windows device has thousands of group policy settings. Sure, group policy is a nightmare in itself, but it is at least proven in the enterprise.
  • Then there’s remote support – I can take screenshots on my iPad, but I can’t capture video and send it to a support technician to show them an application issue that they are having trouble replicating – Windows 7’s problem steps recorder allows me to do this.
  • There is no support for multiple users, so I can’t lock a device down for end users, but open up access for administrators to support the device – or indeed allow a device to be shared between users in any way that provides accountability.

Windows 7 has its problems too: it’s a general purpose operating system that’s not designed to run on mobile hardware; it lacks the ability to instantly resume from standby; and touch support (particularly the soft keyboard) is terrible unless an application is written specifically to use touch. Even so, when you consider its suitability for enterprise use, it’s clear that Windows does have some advantages.

Ironically, Microsoft also cites a lack of file system access as restricting the options for collaboration using an iOS device. Going back to the point about security only being as strong as the weakest link, I’d say that restricting access to the file system is a good thing (if only there weren’t the jailbreak issues at a lower level!). Admittedly, it does present some challenges for users but applications such as Dropbox help me over that as I can store data within the app, such as presentations for offline playback.

The Windows Optimised Desktop

At this point, Jamie came back to the Windows Optimised Desktop message – he sees Windows’ strength as:

“The ability for any user to connect using any endpoint at any time of day to do their day job successfully but be managed, maintained and secured on the back end.”

[Jamie Burgess: Partner Technology Advisor for Optimised Desktop, Microsoft UK]

OK. That’s fine – but that doesn’t mean I need the same operating system and applications on all devices – just access to my data using common formats and appropriate apps. For example, I don’t need Microsoft Office on a device that is primarily used for content consumption – but I do need an app that can access my Microsoft Office data.  Public, private and hybrid clouds should provide the data access – and platform security measures should allow me to protect that data in transit and at rest.  Windows works (sort of) but it’s hardly optimal.

At this point, I return to Windows Touch – even Microsoft acknowledges the fact that the Windows UI does not work with fat fingers (try touching the close button in the top-right corner of the screen…) and some device manufacturers have had to offer both stylus and touch input (potentially useful) with their own skin on top of Windows. Microsoft won’t tell me what’s coming on Windows 8 but they do have a Windows Product Scout microsite that’s designed to help people find applications for their PC – including “Apps for Slate PCs” on the “featured this week” list. That’s a step towards matching apps with devices but it doesn’t answer the enterprise application store question – for that I think we will have to wait for Windows “vNext”. For 2011 at least, the message is that App-V can be used to deploy an application to Windows PCs and slates alike and to update it centrally (which is fine, if I have the necessary licensing arrangements to obtain App-V).

Hidden costs? And are we really in the post-PC era?

Looking at costs, I’ll start with the device and the only Windows slate I’ve heard pricing for is around £700-800. That’s slightly more than a comparable iPad, but it also includes some features that help secure the device for use with enterprise data (fingerprint reader, TPM chip, solid-state encrypted disk, etc.).

Whilst there is undoubtedly a question to answer about supporting non-Microsoft devices too, the benefits of using a Windows slate hinge on it being a viable PC replacement.  I’m not sure that really is the case.

I still need to license the same Windows applications (and security software… and management agents…) that I use in the rest of the enterprise. I’ll admit that most enterprises already have Active Directory and systems management tools that are geared up to supporting a Windows device but I’m not convinced that the TCO is lower (most of my support calls are related to apps running on Windows or in a browser).

An iPad needs a PC (or a Mac!) to sync with via iTunes and the enterprise deployment is a little, how can I put it? Primitive! (in that there are a number of constraints and hoops to jump through.) A BlackBerry Playbook still needs a BlackBerry handset and I’m sure there are constraints with other platforms too. I really don’t believe that the post PC era is here (yet) – for that we’ll need a little more innovation in the mobile device space. For now, that means that slates present additional cost and I’m far more likely to allow a consumer owned and supported device, for certain scenarios with appropriate risk mitigation, than I am to increase my own “desktop” problem.

In conclusion

I still believe that Windows Phone 7, with the addition of suitable enterprise controls for management and maintenance, would be a better slate solution. It’s interesting that, rather than playing a game of chicken and egg as Apple has with Jailbreakers, Microsoft worked with the guys who unlocked their platform, presumably to close the holes and secure the operating system. Allowing Windows Phone to run on a wider range of devices (based on a consistent platform specification, as the current smartphones are) would not present the issues of form factor that Windows Mobile and Android suffer from (too many variations of hardware capability) – in fact the best apps for iOS present themselves differently according to whether they are running on an iPhone or an iPad.

So, is Microsoft dead in the tablet space? Probably not! Do they have a strategy? Quite possibly – what I’ve seen will help them through the period until Windows “vNext” availability, but as they’re not talking about what that platform will offer, it’s difficult to say whether their current strategy makes sense as anything more than a stopgap (although it is certainly intended as an on ramp for Windows “vNext”). It seems to me that the need to protect Windows market share is, yet again, preventing the company from moving forward at the pace it needs to, but the first step to recovery is recognising that there is a problem – and they do at least seem to have taken that on board.

Uncategorized

Google Analytics: Homing in on the visits that count

Every week I create a report that looks at a variety of social media metrics, including visits to the Fujitsu UK and Ireland CTO Blog.  It’s developing over time – I’m also working on a parallel activity with some of my marketing colleagues to create a social media listening dashboard – but my Excel spreadsheet with metrics cobbled together from a variety of sources and measuring against some defined KPIs seems to be doing the trick for now.

One thing that’s been frustrating me is that I know a percentage of our visits are from employees and, frankly, I don’t care about their visits to our blog.  Nor for that matter do I want my own visits (mostly administrative) to show in the stats that I take from Google Analytics.

I knew it should be possible to filter internal users and, earlier this week, I had a major breakthrough.

I created an advanced segment that checked the page (to filter out one blog from the rest of the content on the site) and the source (to filter anyone whose referral source contained certain keywords – for example, our company name!). I then tested the segment and, hey presto, I could see how many results applied to each of the queries, as well as the overall result – now I can concentrate on those visits that really matter.
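The segment’s logic – keep only hits on the blog’s pages, and drop any visit whose referral source contains an internal keyword – can be sketched in a few lines of JavaScript. To be clear, this is just an illustration of the filtering idea: the function and field names are hypothetical, not how Analytics actually stores or exposes its data.

```javascript
// Sketch of the advanced segment: keep visits to pages under blogPath
// whose referral source does NOT contain any internal keyword.
// `visits` is a hypothetical array of { page, source } records.
function externalBlogVisits(visits, blogPath, internalKeywords) {
  return visits.filter((visit) =>
    visit.page.startsWith(blogPath) &&
    !internalKeywords.some((keyword) =>
      (visit.source || "").includes(keyword)
    )
  );
}
```

So a visit referred from, say, an intranet host matching the company name is excluded, as is anything outside the blog’s path – which is exactly what the two conditions in the segment do.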

Google Analytics advanced segment settings to remove internal referrals

Of course, this only relates to referrals, so it doesn’t help me where internal users access the content from an email link (and even if I could successfully filter out all the traffic via the company proxy servers – which I haven’t managed so far – some users access the content directly whilst working from home), but it’s a start.

The other change was one I made a few months ago, by defining a number of filters to adjust the reporting:

Unfortunately filters do not apply retrospectively, so it’s worth defining these early in the life of a website.

Uncategorized

Keeping Windows alive with curated computing

Like it or loathe it, there’s no denying that the walled garden approach Apple has adopted for application development on iOS (the operating system used for the iPhone, iPad and now new iPods) has been successful. Forrester Research talks about this approach using the term “Curated Computing” – a general term for an environment where a gatekeeper controls the availability of applications for a given platform. So, does this reflect a fundamental shift in the way that we buy applications? I believe it does.

Whilst iOS, Android (Google’s competing mobile operating system) and Windows Phone 7 (the new arrival from Microsoft) have all adopted the curated computing approach (with Apple exercising the tightest control over entry to its AppStore), the majority of the world’s computers are rather less mobile. And they run Windows. Unfortunately, Windows’ biggest strength (its massive ecosystem of compatible hardware and software) is also its nemesis – a whole load of the applications that run on Windows are, to put it bluntly, a bit crap!

This is a problem for Microsoft. On the one hand, it gives their operating system a bad name (somewhat unfairly, in my opinion – Windows is associated with its infamous “Blue Screen of Death”, yet we rarely hear about Linux/Mac OS X kernel panics or iOS lockups); on the other hand, it’s the same broad device and application support that has made Windows such a success over the last 20 years.

What we’re starting to see is a shift in the way that people approach personal computing. Over the next few years there will be an explosion in the number of mobile devices (smart phones and tablets) used to access corporate infrastructure, along with a general acceptance of bring your own computer (BYOC) schemes – maybe not for all organisations but for a significant number. And that shift gives us the opportunity to tidy things up a bit.

Remove the apps at the left side of the diagram and only the good ones will be left...

A few weeks ago, Jon Honeyball was explaining a concept to me and, like many of the concepts that Jon puts forward, it makes perfect sense (and infuriates me that I’d never looked at things this way before). If we think about the quality of software applications, we can consider that, statistically, they follow a normal distribution. That is to say, the applications on the left of the curve tend towards the software that we don’t want on our systems – from malware through to poorly-coded applications. Meanwhile, on the right of the curve are the better applications, right through to the Microsoft and Adobe applications that are in broad use and generally set a high standard in terms of quality. The peak of the curve represents the point with the most apps – basically, most applications can be described as “okay”. What Microsoft has to do is lose the leftmost 50% of applications from this curve, instantly raising the quality bar for Windows applications. One way to do this is curated computing.

Whilst Apple have been criticised for the lack of transparency in their application approval process (and there are some bad applications available for iOS too), this is basically what they have managed to achieve through their AppStore.

If Microsoft can do the same with Windows Phone 7, and then take that operating system and apply it to other device types (say, a tablet – or even the next version of their PC client operating system) they might well manage to save their share of the personal computing marketplace as we enter the brave new world of user-specific, rather than device-specific computing.

At the moment, the corporate line is that Windows 7 is Microsoft’s client operating system but, even though some Windows 7 tablets can be expected, they miss the mark by some way.

Time after time, we’ve seen Microsoft stick to their message (i.e. that their way is the best and that everyone else is wrong), right up to the point when they announce a new product or feature that seems like a complete U-turn.  That’s why I wouldn’t be too surprised to see them come up with a new approach to tablets in the medium term… one that uses an application store model and a new user interface. One can only live in hope.

Uncategorized

Yikes! My computer can tell websites where I live (thanks to Google)

A few months ago there was a furore as angry Facebook users rallied against the social networking site’s approach to sharing our personal data. Some people even closed their accounts but at least Facebook’s users choose the information that they post on the site. OK, so I guess someone else may tag me in an image, but it’s basically up to me to decide whether I want something to be made available – and I can always use fake information if I choose to (I don’t – information like my date of birth, place of birth, and my mother’s maiden name is all publicly available from government sources, so why bother to hide it?).

Over the last couple of weeks though, I’ve been hearing about Google being able to geolocate a device based on information that their Streetview cars collected. Not the Wi-Fi traffic that was collected “by mistake” but information collected about Wi-Fi networks in a given neighbourhood, used to create a geolocation database. Now, I don’t really mind that Google has a picture of my house on Streetview… although we were having building work done at the time, so the presence of a builder’s skip on my drive does drag down the impression of my area a little! What I was shocked to find was that Firefox users can access this database to find out quite a lot about the location of my network (indeed, any browser that supports the Geolocation API can) – in my case it’s only accurate to within about 30-50 metres, but that’s pretty close! I didn’t give consent for Google to collect this – in effect they have been “wardriving” the streets of Britain (and elsewhere). And if you’re thinking “that’s OK, my Wi-Fi is locked down” well, so is mine – I use WPA2 and only allow certain MAC addresses to connect but the very existence of the Wi-Fi access point provides some basic information to clients.

Whilst I’m not entirely happy that Google has collected this information, it’s been done now, and being able to geolocate myself could be handy – particularly as PCs generally don’t have GPS hardware and location-based services will become increasingly prevalent over the coming years. In addition, Firefox asks for my consent before returning the information required for the database lookup (that’s a requirement of the W3C’s Geolocation API) and it’s possible to turn geolocation off in Firefox (presumably it’s just as simple in other browsers too).
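For the curious, this is the standard W3C Geolocation API at work. Here’s a minimal sketch of how a page requests a position – I’ve written it so the provider is passed in as a parameter (in a real page you’d pass `navigator.geolocation`; the function name and result shape here are my own, purely for illustration):

```javascript
// Ask a Geolocation-API-style provider for the current position and
// pass a simplified result to the callback. In a browser, the user is
// prompted for consent before the success callback ever fires.
function whereAmI(geolocation, onResult) {
  if (!geolocation) {
    onResult({ error: "geolocation unsupported" });
    return;
  }
  geolocation.getCurrentPosition(
    (position) => onResult({
      lat: position.coords.latitude,
      lon: position.coords.longitude,
      accuracy: position.coords.accuracy, // in metres
    }),
    (err) => onResult({ error: err.message })
  );
}
```

That `accuracy` figure is how I know the database places my network to within about 30-50 metres – the browser reports it alongside the coordinates.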

What’s a little worrying is that a malicious website can grab the MAC address of a user’s router, after which it’s just a simple API call to find out where the user is (as demonstrated at the recent Black Hat conference).  The privacy and security implications of this are quite alarming!

One thing’s for sure: Internet privacy is an oxymoron.
