Designing for failure does not necessarily mean multi-cloud

Earlier this week, Amazon Web Services’ S3 storage service suffered an outage that affected many websites (including popular sites to check if a website is down for everyone or just you!).

Unsurprisingly, this led to a lot of discussion about designing for failure – or not, it would seem in many cases, including the architecture behind Amazon’s own status pages:

The Amazon and Azure models are slightly different but, in the past, we’ve seen outages to the Azure identity system (for example) impact other Microsoft services (Office 365). When that happened, Microsoft’s Office 365 status page didn’t update because of a caching/CDN issue. It seems Amazon didn’t learn from Microsoft’s mistakes!

Randy Bias (@RandyBias) is a former Director at OpenStack and a respected expert on many cloud concepts. Randy and I exchanged many tweets on the topic of the AWS outage but, after multiple replies, I thought a blog post might be more appropriate. You see, I hold the view that not all systems need to be highly available. Sometimes, failure is OK. It all comes down to requirements:

And, as my colleague Tim Siddle highlighted:

I agree. 100%.

So, what does that architecture look like? Well, it will vary according to the provider:

So, if we want to make sure our application can survive a region failure, there are ways to design around this. Just be ready for the solution we sold to the business on the basis of commodity cloud services to start looking rather expensive. Just as we typically have two datacentres with resilient connections on-premises, we’ll want to do the same in the cloud. But, just as not all systems run from both datacentres on-premises, the same may be true in the cloud. If it’s a service for which some downtime can be tolerated, then we might not need to worry about a multi-region architecture. And in cases where we’re not at all concerned about downtime, we might not even use an availability set.

At other times – for example, if the application is a web service for which an outage would cause reputational or financial damage – we may have a requirement for higher availability. That’s where so many of the services impacted by Tuesday’s AWS outage went wrong:

Of course, we might spread resources around regions for other reasons too – like placing them closer to users – but that comes back to my point about requirements. If there’s a requirement for fast, low-latency access then we need to design in the dedicated links (e.g. AWS Direct Connect or Azure ExpressRoute) and we’ll probably have more than one of them too, each terminating in a different region, with load balancers and all sorts of other considerations.
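To make the “design around a region failure” point a little more concrete, one common pattern is DNS failover between two regions using health checks. The sketch below uses Python and the boto3 SDK with Amazon Route 53; the hosted zone ID, endpoints and health check path are all made up, so treat it as an illustration of the idea rather than a recipe:

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical zone and endpoints - substitute real values.
HOSTED_ZONE_ID = "Z3EXAMPLE"
PRIMARY_ELB = "primary-lb.eu-west-1.example.com"
SECONDARY_ELB = "standby-lb.eu-west-2.example.com"

# Health check that Route 53 uses to decide whether the primary region is up.
health_check_id = route53.create_health_check(
    CallerReference="www-primary-check-001",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": PRIMARY_ELB,
        "Port": 443,
        "ResourcePath": "/healthz",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)["HealthCheck"]["Id"]

# Primary/secondary failover records: traffic normally goes to the primary
# region and fails over to the secondary when the health check fails.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "CNAME",
                    "TTL": 60,
                    "SetIdentifier": "primary",
                    "Failover": "PRIMARY",
                    "HealthCheckId": health_check_id,
                    "ResourceRecords": [{"Value": PRIMARY_ELB}],
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "CNAME",
                    "TTL": 60,
                    "SetIdentifier": "secondary",
                    "Failover": "SECONDARY",
                    "ResourceRecords": [{"Value": SECONDARY_ELB}],
                },
            },
        ]
    },
)
```

DNS failover is only part of the story, of course – the application and its data still need to exist in the second region – which is exactly why the “resilient by design” solution starts to look expensive.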

Because a cloud provider could be one of those single points of failure, many people are advocating multi-cloud architectures. But, if you think multi-region is expensive, get ready for some seriously complex architecture and the associated costs of a multi-cloud environment. Just as many enterprises use a single managed services provider in the on-premises world (albeit with multiple datacentres), in the cloud many of us will continue to use a single cloud provider. Designing for failure does not necessarily mean multi-cloud.

Of course, a single-cloud solution has its risks. Randy is absolutely spot on in his reply below:

It could be argued that one man’s “lock-in” is another’s “making the most of our existing technology investments”. If I have a Microsoft Enterprise Agreement, I want to make sure that I use the software and services that I’m paying for. And running a parallel infrastructure on another cloud is probably not doing that. Not unless I can justify to the CFO why I’m running redundant systems just in case one goes down for a few hours.

That doesn’t mean we can avoid designing with the future in mind. We must always have an exit strategy and, where possible, think about designing systems with a level of abstraction to make them cloud-agnostic.

Ultimately though it all comes back to requirements – and the ability to pay. We might like an Aston Martin but if the budget is more BMW then we’ll need to make some compromises – with an associated risk, signed off by senior management, of course.

[Updated 2 March 2017 16:15 to include the Mark Twomey tweet that I missed out in the original edit]

IT transformation: why timing is crucial

In my work, I regularly find myself discussing transformation with customers who are thinking of moving some or all of their IT services to “the cloud”.  Previously, I’ve talked about a project where a phased approach was taken because of a hard deadline that was driving the whole programme:

  1. Lift and shift to infrastructure-as-a-service (IaaS) and software-as-a-service (SaaS).
  2. Look for service enhancements (transform) – for example re-architect using platform-as-a-service (PaaS).
  3. Iterate/align with sector-wide strategy for the vertical market.

The trouble with this approach is that, once phase 1 is over, the impetus to execute on later phases is less apparent. Organisations change, people move on, priorities shift. And that’s one reason why I now firmly believe that transformation has to happen throughout the project, in parallel with any migration to the cloud – not at the end.

My colleague Colin Hughes (@colinp_hughes) represented this in diagrammatic form in a recent presentation (unfortunately I can’t reproduce it on my personal blog) but it was interesting to listen to episode 6 of Matt Ballantine and Chris Weston’s WB-40 podcast, in which they discussed a very similar topic.

In the podcast, Matt and Chris reinforced my view that just moving to the cloud is unlikely to save costs (independently of course – they’re probably not at all bothered about whether I agree or not!). Even if on the surface it appears that there are some savings, the costs may just have been moved elsewhere. Of course, there may be other advantages – like a better service, improved resilience, or other benefits (like reduced technical debt) – but just moving to IaaS is unlikely to be significantly less expensive.

Sure, we can move commodity services (email, etc.) to services like Office 365 but there’s limited advantage to be gained from just moving file servers, web servers, application servers, database servers, etc. from one datacentre to another (virtual) datacentre!

Instead, take the time to think about what applications need; how they could work differently; what the impact of using platform services would be; whether a microservices-based approach* would help; and whether you could go further still and re-architect to use so-called “serverless” computing* (e.g. Azure Functions or AWS Lambda).
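To make “serverless” a little more concrete, the snippet below is a minimal (and entirely hypothetical) AWS Lambda handler written in Python, responding to an API Gateway-style request. There’s no server to build, patch or scale – the platform runs the function on demand – which is exactly the kind of re-architecture that a straight lift-and-shift never gets around to.

```python
# handler.py - a minimal AWS Lambda function (illustrative only).
import json


def handler(event, context):
    """Return a simple JSON response for an API Gateway-style request."""
    # queryStringParameters may be absent, so default to an empty dict.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```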

But perhaps the most important point: digital transformation is not just about the IT – we need to re-design the business processes too if we’re really going to make a difference!

 

* I plan to explore these concepts in more detail in future blog posts.

Not all software consumed remotely is a cloud service

Helping a customer to move away from physical datacentres and into the cloud has been an exciting project to work on but my scope was purely the Microsoft workstream: migrating to Office 365 and a virtual datacentre in Azure. There’s much more to be done to move towards the consumption of software as a service (SaaS) in a disaggregated model – and many more providers to consider.

What’s become evident to me in recent weeks is that lots of software is still consumed in a traditional manner, just as a hosted service. Take, for example, a financial services organisation that was ready to allow my customer access to its “private cloud” over a VPN from the virtual datacentre in Azure, until we hit a roadblock for routing the traffic. The Azure virtual datacentre is an extension of the customer’s network – using private IP addresses – but the service provider wanted to work with public IPs, which led to some extra routers being deployed (and some NATting of addresses somewhere along the way). Then along came another provider – with human resources applications accessed over unsecured HTTP (!). Not surprisingly, access across the Internet was not allowed and again we were relying on site-to-site VPNs to create a tunnel, but the private IPs on our side were something the provider couldn’t cope with. More network wizardry was required.

I’m sure there’s a more elegant way to deal with this but my point is this: not all software consumed remotely is a cloud service. It may be licensed per user on a subscription model but, if I can’t easily connect to the service from a client application (which will often be a browser), then it’s not really SaaS. And don’t get me started on the abuse of the term “private cloud”.

There’s a diagram I often use when talking to customers about different types of cloud deployments. It’s been around for years (and it’s not mine) but it’s based on the old NIST definitions.

Cloud computing delivery models

One customer highlighted to me recently that there are probably some extra columns between on-premises and IaaS for hosted and co-lo services but neither of these are “cloud”. They are old IT – and not really much more than a different sort of “on-premises”.

Critically, the NIST description of SaaS reads:

“The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.”

The sooner that hosted services are offered in a multi-tenant model that facilitates consumption on demand and broad network access the better. Until then, we’ll be stuck in a world of site-to-site VPNs and NATted IP addresses…

Short takes: Amazon Web Services 101, Adobe Marketing Cloud and Milton Keynes Geek Night (#MKGN)

What a crazy week. On top of a busy work schedule, I’ve also found myself at some tech events that really deserve a full write-up but, for now, will have to make do with a summary…

Amazon Web Services 101

One of the events I attended this week was a “lunch and learn” session to give an introduction/overview of Amazon Web Services – kind of like a breakfast briefing, but at a more sociable hour of the day!

I already blogged about Amazon’s reference architecture for utility computing but I wanted to mention Ryan Shuttleworth’s (@RyanAWS) explanation of how Amazon Web Services (AWS) came about.

Contrary to popular belief, AWS didn’t grow out of spare capacity in the retail business, but out of building a service-oriented infrastructure for a scalable development environment – initially to provide development services to internal teams and then to expose the Amazon catalogue as a web service. Over time, Amazon found that developers were hungry for more, and the company moved towards the AWS mission to:

“Enable business and developers to use web services* to build scalable, sophisticated applications”

*What people now call “the cloud”

In fact, far from being the catalyst for AWS, Amazon’s retail business is just another AWS customer.

Adobe Marketing Cloud

Most people will be familiar with Adobe for their design and print products, whether that’s Photoshop, Lightroom, or a humble PDF reader. I was invited to attend an event earlier this week to hear about the Adobe Marketing Cloud, which aims to become for marketers what the Creative Suite has become for design professionals. Whilst the use of “cloud” grates with me as a blatant abuse of a buzzword (if I’m generous, I suppose it is a SaaS suite of products…), Adobe has been acquiring companies (I think I heard $3bn mentioned as the total cost) and integrating the technology to create a set of analytics, social, advertising, targeting and web experience management solutions, together with a real-time dashboard.

Milton Keynes Geek Night

MK Geek Night #mkgn

The third event I attended this week was the quarterly Milton Keynes Geek Night (this was the third one) – and it did not disappoint – it was well up to the standard I’ve come to expect from David Hughes (@DavidHughes) and Richard Wiggins (@RichardWiggins).

The evening kicked off with Dave Addey (@DaveAddey), of UK Train Times app fame, talking about what makes a good mobile app. Starting out from a 2010 Sunday Times article about the app gold rush, Dave explained why few people become smartphone app millionaires, and how to work out whether your idea has what it takes:

  • Is your mobile app idea really a good idea? (i.e. is it universal, is it international, and does it have lasting appeal – or, put bluntly, will you sell enough copies to make it worthwhile?)
  • Is it suitable to become a mobile app? (will it fill “dead time”, does it know where you go and use that to add value, is it “always there”, does it have ongoing use)
  • And how should you make it? (cross platform framework, native app, HTML, or hybrid?)

Dave’s talk warrants a blog post of its own – and hopefully I’ll return to the subject one day – but, for now, those are the highlights.

Next up were the 5 minute talks, with Matt Clements (@MattClementsUK) talking about empowering business with APIs to:

  1. Increase sales by driving traffic.
  2. Improve your brand awareness by working with others.
  3. Increase innovation, by allowing others to interface with your platform.
  4. Create partnerships, with symbiotic relationships to develop complementary products.
  5. Create satisfied customers – by focusing on the part you’re good at and letting others build on it with their expertise.

Then Adam Onishi (@OnishiWeb) gave a personal, and honest, talk about burnout, its effects, recognising the problem, and learning to deal with it.

And Jo Lankester (@JoSnow) talked about real-world responsive design and the lessons she has learned:

  1. Improve the process – collaborate from the outset.
  2. Don’t forget who you’re designing for – consider the users, in which context they will use a feature, and how they will use it.
  3. Learn to let go – not everything can be perfect.

Then, there were the usual one-minute slots from sponsors and others with a quick message, before the second keynote – from Aral Balkan (@Aral), talking about the high cost of free.

In an entertaining talk, loaded with sarcasm, profanity (used to good effect) but, most of all, intelligent insight, Aral explained the various business models we follow in the world of consumer technology:

  • Free – with consequential loss of privacy.
  • Paid – with consequential loss of audience (i.e. niche) and user experience.
  • Open – with consequential loss of good user experience, and a propensity to allow OEMs and operators to mess things up.

This was another talk that warrants a blog post of its own (although I’m told the session audio was recorded – so hopefully I’ll be able to put up a link soon) but Aral moved on to talk about a real alternative with mainstream consumer appeal that happens to be open. To achieve this, Aral says we need a revolution in open source culture: open source and great user experience do not have to be mutually exclusive. We must bring design thinking to open source. Design-led open source. Without this, Aral says, we have no alternative to Twitter, Facebook, or whatever-the-next-big-platform-is doing what they want with our data. And that alternative needs to be open. Because if it’s just free, the cost is too high.

The next MK Geek Night will be on 21 March, and the date is already in my diary (just waiting for the Eventbrite notice!)

Photo credit: David Hughes, on Flickr. Used with permission.

[Amazon’s] Reference architecture for utility computing

Earlier this week, I attended an Amazon Web Services (AWS) 101 briefing, delivered by Amazon UK’s Ryan Shuttleworth (@RyanAWS). Although I’ve been watching the “Journey into the AWS cloud” series of webcasts too, it was a really worthwhile session and, when the videos are released to the web, it will be well worth watching for an introduction to the AWS cloud.

One thing I particularly appreciate about Ryan’s presentations is that he approaches things from an architectural view. It’s a refreshing change from the evangelists I’ve met at other companies who generally market software by talking about features (maybe even with some design considerations/best practice or coding snippets) but rarely seem to mention reference architectures or architectural patterns.

During his presentation, Ryan presented a reference architecture for utility computing and, even though this version relates to AWS services, it’s a pretty good model for re-use (in fact, the beauty of such a  reference architecture is that the contents of each box could be swapped out for other components, without affecting the overall approach – maybe I should revisit this post and slot in the Windows Azure components!).

So, what’s in each of these boxes?

  • AWS global infrastructure: consists of regions to collate facilities, with availability zones that are physically separated, and edge locations (e.g. for content distribution).
  • Networking: Amazon provides Direct Connect (dedicated connection to AWS cloud) to integrate with existing assets over VPN Connections and Virtual Private Clouds (your own slice of networking inside EC2), together with Route 53 (a highly available and scalable global DNS service).
  • Compute: Amazon’s Elastic Compute Cloud (EC2) allows for the creation of instances (Linux or Windows) to use as you like, based on a range of instance types with different pricing – scaling up and down, even auto-scaling. Elastic Load Balancing allows the distribution of EC2 workloads across instances in multiple availability zones.
  • Storage: Simple Storage Service (S3) is the main storage service (Dropbox, Spotify and others run on this) – designed for write-once, read-many applications. Elastic Block Store (EBS) can be used to provide persistent storage behind an EC2 instance (e.g. a boot volume) and supports snapshotting; it’s replicated within an availability zone (so there’s no need for RAID). There’s also Glacier for long-term archival of data, AWS Import/Export for bulk uploads/downloads to/from AWS, and the AWS Storage Gateway to connect on-premises and cloud-based storage.
  • Databases: Amazon’s Relational Database Service (RDS) provides database as a service capabilities (MySQL, Oracle, or Microsoft SQL Server). There’s also DynamoDB – a provisioned throughput NoSQL database for fast, predictable performance (fully distributed and fault tolerant) and SimpleDB for smaller NoSQL datasets.
  • Application services: Simple Queue Service (SQS) for reliable, scalable message queuing for application decoupling; Simple Workflow Service (SWF) to coordinate processing steps across applications and to integrate AWS and non-AWS resources, managing distributed state in complex systems; CloudSearch – an elastic search engine based on Amazon’s A9 technology, providing auto-scaling and a sophisticated feature set (comparable to Solr); and CloudFront – a worldwide content delivery network (CDN) to easily distribute content to end users with a single DNS CNAME.
  • Deployment and admin: Elastic Beanstalk allows one-click deployment from Eclipse, Visual Studio and Git for rapid deployment of applications with all AWS resources auto-created; CloudFormation is a scripting framework for AWS resource creation that automates stack creation in a repeatable way. There’s also Identity and Access Management (IAM), software development kits, Simple Email Service (SES), Simple Notification Service (SNS), ElastiCache, Elastic MapReduce and the CloudWatch monitoring framework. (There’s a short code sketch after this list showing a few of these services being driven programmatically.)
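Just to give a flavour of how these building blocks are consumed, here’s a rough sketch in Python using today’s boto3 SDK (which post-dates Ryan’s session) – the region, AMI ID, bucket and queue names are all made up for illustration:

```python
import boto3

# Hypothetical region and resource names - purely illustrative.
session = boto3.Session(region_name="eu-west-1")

# Compute: launch a single EC2 instance from an AMI.
ec2 = session.client("ec2")
instance = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)["Instances"][0]
print("Launched", instance["InstanceId"])

# Storage: create an S3 bucket and upload an object.
s3 = session.client("s3")
s3.create_bucket(
    Bucket="my-example-bucket-0123",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)
s3.put_object(Bucket="my-example-bucket-0123", Key="hello.txt", Body=b"hello")

# Application services: use SQS to decouple two application components.
sqs = session.client("sqs")
queue_url = sqs.create_queue(QueueName="example-queue")["QueueUrl"]
sqs.send_message(QueueUrl=queue_url, MessageBody="work item 1")
```

The same pattern applies to the rest of the stack (RDS, CloudFormation and so on): everything in the reference architecture is exposed as an API call.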

I suppose if I were to re-draw Ryan’s reference architecture, I’d include support (AWS Support) as well as some payment/billing services (after all, this doesn’t come for free) and the AWS Marketplace, to find and start using software applications on the AWS cloud.

One more point: security and compliance (security and service management are not shown, as they are effectively layers that run through all of the components in the architecture) – if you implement this model in the cloud, who is responsible? Well, if you contract with Amazon, they are responsible for the AWS global infrastructure and foundation services (compute, storage, database, networking). Everything on top of that (the customisable parts) is up to the customer to secure. Other providers may take a different approach.

What-as-a-service?

I’ve written previously about the “cloud stack” of -as-a-service models but I recently saw Microsoft’s Steve Plank (@plankytronixx) give a great description of the differences between on-premise,  infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS).

Of course, this is a Microsoft view of the cloud computing landscape and I’ve had other discussions recently where people have argued the boundaries for IaaS or PaaS and confused things further by adding traditional web hosting services into the mix*.  Even so, I think the Microsoft description is a good starting point and it lines up well with the major cloud services offerings from competitors like Amazon and Google.

Not everyone will be familiar with this so I thought it was worth repeating Steve’s description here:

In an on-premise deployment, the owning organisation is responsible for (and has control over) the entire technology stack.

With infrastructure as a service, the cloud service provider manages the infrastructure elements: network, storage, servers and virtualisation. The consumer of the IaaS service will typically have some control over the configuration (e.g. creation of virtual networks, creating virtual machines and storage) but they are all managed by the cloud service provider.  The consumer does, however, still need to manage everything from the operating system upwards, including applying patches and other software updates.

Platform as a service includes the infrastructure elements, plus operating system, middleware and runtime elements. Consumers provide an application, configuration and data and the cloud service provider will run it, managing all of the IT operations including the creation and removal of resources. The consumer can determine when to scale the application up or out but is not concerned with how those instances are operated.

Software as a service provides a “full-stack” service, delivering application capabilities to the consumer, who only has to be concerned about their data.

Of course, each approach has its advantages and disadvantages:

  • IaaS allows for rapid migrations, as long as the infrastructure being moved to the cloud doesn’t rely on other components that surround it on-premise (even then, there may be opportunities to provide virtual networks and extend the on-premise infrastructure to the cloud). The downside is that many of the management issues persist as a large part of the stack is still managed by the consumer.
  • PaaS allows developers to concentrate on writing and packaging applications, creating a service model and leaving the underlying components to the cloud services provider. The main disadvantage is that the applications are written for a particular platform, so moving an application “between clouds” may require code modification.
  • SaaS can be advantageous because it allows for on-demand subscription-based application use; however consumers need to be sure that their data is not “locked in” and can be migrated to another service if required later.

Some organisations go further – for example, in the White Book of Cloud Adoption, Fujitsu wrote about Data as a Service (DaaS) and Business Process as a Service (BPaaS) – but IaaS, PaaS and SaaS are the commonly used models.  There are also many other considerations around data residency and other issues but they are outside the scope of this post. Hopefully though, it does go some way towards describing clear distinctions between the various -as-a-service models.

* Incidentally, I’d argue that traditional web hosting is not really a cloud service as the application delivery model is only part of the picture. If a web app is just running on a remote server it’s not really conforming with the broadly accepted NIST definition of cloud computing characteristics. There is a fine line though – and many hosting providers only need to make a few changes to their business model to start offering cloud services. I guess that would be an interesting discussion with the likes of Rackspace…

Tech.Days Online 2012: Day 1 (#TechDays2012)

For the last couple of years, I’ve been concentrating on IT Strategy but I miss the hands-on technology.  I’ve kind of lost touch with what’s been happening in my former world of Microsoft infrastructure and don’t even get the chance to write about what’s coming up in new releases as the powers that be have decided my little blog is not on their RADAR (to be honest, I always suspected they had me mixed up with another Mark Wilson, who writes at Gizmodo!).

Anyway, I decided to dip into the pool again and see what Microsoft is up to in its latest releases, with two day-long virtual events under the Microsoft Tech.Days Online banner.

Presented by members of the UK evangelist team, Simon May (@simonster), Andrew Fryer (@DeepFat) and Steve Plank (@plankytronixx), day 1 focused on Windows Server and Azure, whilst day 2 will be about Windows 8 and System Center.

So, what did I learn?  Far too much for a single blog post, but here are the highlights from day 1…

Windows Server 2012

Windows Server 2012 looks to be a significant step forward from 2008 R2. The full list of what’s new is extensive but the main focus is on Microsoft’s “next generation” file server, management, virtualisation and networking:

  • “Next generation” file server. Ignore the next generation part – after all, it’s just marketing speak to make a file server sound interesting (some of us remember the early battles between Novell NetWare and Windows NT!) – but there are some significant improvements in Windows Server’s file capabilities.
  • When it comes to management:
    • Windows can be used to manage non-Windows environments and vice versa.  The details were pretty sketchy in yesterday’s event, but apparently Microsoft now understands that we all run heterogeneous environments!
    • Automation continues to be at the heart of the management story, with both DISM and PowerShell.
    • There’s a new version of PowerShell (v3), which promises to be more intuitive as a result of the Integrated Scripting Environment with IntelliSense, as well as adding robust sessions that persist across connection dropouts and even reboots, together with simple creation of parallel workflows. The good news (although you wouldn’t know it from yesterday’s session) is that PowerShell 3 is also available for Windows 7 and Server 2008 (SP2 or later).
    • Remote management is enabled by default.
    • Server Core is still there, but MinShell is another attempt to reduce the attack surface of Windows Server, providing the GUI management tools without the full graphical shell, as described by Mitch Garvis.
  • Virtual machine mobility provides new scenarios for migrating resources around the enterprise:
    • Using shared storage with live migration now supporting VMs on non-clustered hosts (just on an SMB share).
    • By live migrating storage between hosts, moving the virtual disks attached to a running virtual machine from one location to another.
    • With shared-nothing live migration.
    • Using new Hyper-V Replica functionality to replicate virtual machines between sites, e.g. in a disaster recovery scenario.
    • There’s also a new VHDX format for larger virtual disks, released as an open specification.
  • Enhanced networking:
    • Windows Server now has built-in NIC teaming (load balancing/failover, or LBFO), described by Don Stanwyck in Yegal Edery’s post.
    • Network virtualisation allows the creation of a multi-tenant virtual network environment on top of the existing infrastructure, decoupling network and server configuration.

Windows Server 2012 is already generally available, and an evaluation edition can be downloaded as an ISO or a VHD.

Windows Azure

Windows Azure has been around for a while, but back in my days as an MVP (and when running the Windows Server User Group with Mark Parris), I struggled to get someone at Microsoft to talk about it from an IT Pro perspective (lots of developer stuff, but nothing for the infrastructure guys). That changed when Steve Plank spent an entire afternoon on the topic today.

In summary:

  • Windows Azure has always provided PaaS but it now has IaaS capabilities too (although they don’t sound as mature as Amazon’s offerings, they might better suit some organisations).
  • When deploying to the cloud, the datacentre or affinity group is selected. Azure services are available in eight datacentres around the world: four in the US, two in Europe and two in Asia.
  • Applications are deployed to Azure using an XML service model.
  • Virtual machines in Azure differ from the cloud platform services in that they still require management (patching, etc.) at the operating system level.  They may be deployed using a REST API, scripted (e.g. using PowerShell), or created inside a management portal.
  • Virtual hard disks may be uploaded to Azure (they are converted to BLOB storage), or new virtual machines created from a library and it’s possible to capture virtual machines that are not running as images for future deployment.  Virtual machine images may also be copied from the cloud for on-premise deployment.
  • If two virtual machines are connected inside Azure, both are on the  same network, which means they can connect to the same load balancer.
  • Virtual networks may be used to connect on premise networks to Windows Azure, or completely standalone Azure networks can be created (e.g. with their own DNS, Active Directory, etc.)
  • When using a virtual network inside Azure, there is no customer-managed DHCP; instead, DIPs (dynamic IPs) are allocated by the platform and the operating system must be configured to obtain its address automatically. Each service has a single IP address to connect to the Internet, with port forwarding used to access multiple hosts.
  • Inside Azure, operating system disks are cached (for performance) but data disks are not (for integrity). Consequently, when installing data-driven workloads (such as Active Directory), make sure the database is on a data disk.
  • Applications on Azure may be federated with on-premise infrastructure (e.g. Active Directory). Alternatively, a new service is currently in developer preview called the Windows Azure Active Directory. This differs significantly from the normal Active Directory role in Windows Server (which may also be deployed to a virtual machine on Azure) in that: it has a REST API (the Graph API), not an LDAP one; it does not use Kerberos; and it is accessed as an endpoint – i.e. individual instances are not exposed. Windows Azure Active Directory is related to the Office 365 Directory (indeed, logging on to the Windows Azure Active Directory preview shows me my Office 365 details).  Single sign on with Windows Azure Active Directory is described in detail in a post by Vittorio Bertocci.
  • Microsoft provides service level agreements for Azure availability, not for performance. These are based around fault domains and update domains.

A Windows Azure pricing calculator is available, as is a 90-day free trial.

Photograph of Steve Plank taken from the TechNet UK Facebook page.

Journey through the Amazon Web Services cloud

Working for a large system integrator, I tend to find myself focused on our own offerings and somewhat isolated from what’s going on in the outside world. It’s always good to understand the competitive landscape though and I’ve spent some time recently brushing up my knowledge of Amazon Web Services (AWS), which may come in useful as I’m thinking of moving some of my computing workloads to AWS.  Amazon’s EMEA team are running a series of “Journey to the Cloud” webcasts at the moment and the first two sessions covered:

The next webcast in the series is focused on Storage and Archiving and it takes place next week (23 October). Based on the content of the first two, it should be worth an hour of my time, and maybe yours too?

 

Personal cloud: call it what you want, ignore it at your peril!

For about 18 months, one of the items on my “to do” list has been to write a paper about something called the “personal cloud”. It’s been slipping due to a number of other priorities but now, partly due to corporate marketing departments abusing the term to make it mean something entirely different, I’ve started to witness some revolt against what some see as yet another attempt at cloudwashing.

On the face of it, critics may have a point – after all, isn’t this just another example of someone making something up and making sure the name includes “cloud”? Well, when you look at what some vendors are doing, dressing up remote access solutions and adding a “cloud” moniker, then yes, personal cloud is nonsensical – but the whole point about a personal cloud is that it is not a one vendor solution – indeed a personal cloud is not even something that you can go out and buy.

I was chatting about this with a colleague, David Gentle (@davegentle), earlier and I think he explains the personal cloud concept really simply. Fundamentally, there are two principles:

  1. The personal cloud is the equivalent of what we might once have called personal productivity – the consumption of office applications, file storage and collaboration tools in a cloud-like manner. It’s more of a B2C concept than B2B but it is, perhaps, the B2C equivalent of an organisation consuming SaaS or IaaS services.
  2. Personal clouds become really important when you work with multiple devices. We’re all fine when we work on one device (e.g. a corporate laptop) but, once we add a smartphone, a tablet, etc. the experience of interacting and sharing between devices has real value. To give an example, Dropbox is a good method for sharing large files but it has a lot more value once it is used across several devices and the value is the user experience, rather than any one device-specific solution.

I expect to see the personal cloud rising above the (BYO) mobile device story as a major element of IT consumerisation (see my post from this time last year, based on Joe Baguley’s talk about the consumerisation of IT being nothing to do with iPads) because point solutions (like Dropbox, Microsoft OneNote and SkyDrive, or Apple iCloud) are just the tip of the iceberg – the personal cloud has huge implications for IT service delivery. At some point, we will ask why we need some of the enterprise IT services at all – what do they actually do for us that a personal cloud providing access to all of our data and services doesn’t? (I seem to recall Joe exclaiming something to the effect that corporate IT provides systems for timesheeting, expenses and free printing in the office!)

As for the “personal cloud” name – another colleague, Vin Hughes, did some research into the first reference to the term and found something remarkably similar (although not called the personal cloud) dating back to 1945 – Vannevar Bush’s “Memex”. If that’s stretching the point a little, how about the BBC’s 2002 report on Microsoft’s plans for a personal online life archive? So, when was the “personal cloud” term coined? It would seem to be around 2008 – an MIT Technology Review post from December 2007 talks about how cloud computing services have the potential to alter the digital world (in a consumer context) but it doesn’t use the personal cloud term. One month later, however, a comment on a blog post about SaaS refers to “personal cloud computing”, albeit talking about provisioning personal servers rather than consuming application and platform services as we do today (all that this represents is a move up the cloud stack, as we think less about hardware and operating systems and more about accessing data). So it seems that the “personal cloud” is not something that was dreamed up particularly recently…

So, why haven’t IT vendors been talking about this? Well, could it be that this is potentially a massive threat (maybe the largest) to many IT vendors’ businesses – the personal cloud is a very big disruptive trend in the enterprise space and, as Dave put it:

@ Personal Cloud. Call it what you want, ignore it at your peril!
– David Gentle (@davegentle)

 

Raspberry Pi: a case study for using cloud infrastructure?

In common with many thousands of geeks up and down the country, I set my alarm for just before 06:00 today for the big Raspberry Pi “announcement”.

The team at Raspberry Pi had done a great job of keeping the community informed – and I was really impressed that they gave everyone a chance to hear where to buy their miniature computers at the same time. Unfortunately the Raspberry Pi announcement didn’t quite have the intended result as it effectively “slashdotted” the websites for both of the distributors (RS Components and Farnell).

Whilst Raspberry Pi had moved their site to a static page in anticipation, the electronics retailers probably aren’t used to their products being in such demand and both buckled under the load.  Which left me wondering… I know Raspberry Pi’s goal is to support the UK electronics industry (hence the choice of distributors and not simply selling via Amazon.co.uk or similar) but surely this is a case study for how a cloud-based solution could have scaled to cope with demand? Perhaps by redirecting Raspberry Pi purchasers to a site that could scale (e.g. on Amazon Web Services), still fulfilled by RS and Farnell?
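For what it’s worth, the kind of thing I had in mind is sketched below: an auto scaling group that grows and shrinks with demand. It’s written in Python against today’s boto3 SDK, with made-up names, and it assumes a launch configuration for the web servers already exists – illustrative only, and certainly not something RS or Farnell could have dropped in overnight:

```python
import boto3

# Hypothetical names and region - a sketch, not a recipe.
autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

# An auto scaling group that can grow from 2 to 20 web servers.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="shop-front-end",
    LaunchConfigurationName="shop-web-server",  # assumed to exist already
    MinSize=2,
    MaxSize=20,
    DesiredCapacity=2,
    AvailabilityZones=["eu-west-1a", "eu-west-1b"],
)

# Scale out (and back in) automatically to keep average CPU around 60%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="shop-front-end",
    PolicyName="keep-cpu-at-60-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```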

Grumpy with RS and Farrell for not taking #raspberrypi traffic warnings seriously. @ must be gutted, but stoked by interest!
– Ed Gillett (@edgillett)

It didn’t help that the links given were to the main pages (not deep links). I got in during the first 5 minutes at RS and followed the instructions (“Search for Raspberry Pi, and then follow the normal shopping and checkout process.”) only to find that there was a “register your interest” page but no purchase option. A few minutes later, Raspberry Pi said on Twitter that was the wrong page, and I couldn’t find the right one from a site search. Later in the morning, reports on Twitter suggested that RS are not putting Raspberry Pi on sale until the end of the week…

RS *not* sold out of Raspberry Pis - not opening sales until the end of the week. Hope they beef up their servers before then!
– Gareth Halfacree (@ghalfacree)

[It now seems that doesn’t fit with RS Components’ Raspberry Pi press release]

With the mainstream news sites and even breakfast TV now running Raspberry Pi stories, the interest will be phenomenal and I’m sure Raspberry Pi can sell many more devices than they can manufacture, but I can’t help feeling they’ve been badly let down by distributors who didn’t take their warnings seriously.

We're so frustrated about the DDOS effect - and apparently some of you are VERY ANGRY. We're really sorry; it's out of our hands.
– Raspberry Pi (@Raspberry_Pi)

Or, as one open source advocate saw it:

Absolutely fantastic! @ sale brings down #RS & #Farnell. Shows there's massive demand for computers for learning. #opensource

With any luck, my success at registering interest from RS at about 06:03 this morning will have worked. Failing that, I guess I’ll have to wait a little longer…

[Update 07:49 – added link to RS Components press release]