Is there such a thing as private cloud?

I had an interesting discussion with a colleague today, who was arguing that there is no such thing as private cloud – it’s just virtualisation, rebranded.

Whilst I agree with his sentiment (many organisations claiming to have implemented private clouds have really just virtualised their server estate), I do think that private clouds can exist.

Cloud is a new business model, but the difference between traditional hosting and cloud computing is more than just commercial. The NIST definition of cloud computing is becoming more and more widely accepted and it defines five essential characteristics, three service models and four deployment models.

The essential characteristics are:

  • “On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.
  • Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).
  • Resource pooling. The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, and network bandwidth.
  • Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.
  • Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.”

and NIST’s private cloud definition is:

“Private cloud. The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.”

If anything, the NIST definition is incomplete (it doesn’t recognise any service models beyond infrastructure-, platform- and software-as-a-service – I’d add business process as a service too) but the rest is pretty spot on.

Looking at each of the characteristics and comparing them to a simple virtualisation of existing IT:

  • On demand self service: virtualisation alone doesn’t cover this – so private clouds need to include another technology layer to enable this functionality.
  • Broad network access: nothing controversial there, I think.
  • Resource pooling: I agree, standard virtualisation functionality.
  • Rapid elasticity: this is where private cloud struggles against public (bursting to public via a hybrid solution might help, if feasible from a governance/security perspective) but, with suitable capacity management in place, private virtualised infrastructure deployments can be elastic.
  • Measured service: again, an additional layer of technology is required in order to provide this functionality – more than just a standard virtualised solution.
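To illustrate what that "additional technology layer" does on top of the hypervisor, here's a deliberately simplified sketch (all class, method and rate-card names are invented, not any vendor's API) of a self-service portal that adds on-demand provisioning and per-tenant metering:

```python
import time
import uuid

# Toy sketch of the layer above a virtualisation platform: a self-service
# catalogue plus per-tenant metering to support chargeback. Real tooling
# would call the hypervisor's APIs instead of tracking a dictionary.
class PrivateCloudPortal:
    HOURLY_RATES = {"small": 0.05, "large": 0.20}  # illustrative rate card

    def __init__(self):
        self.instances = {}  # instance id -> (tenant, size, start time)

    def provision(self, tenant, size):
        """On-demand self-service: no human approval step in the loop."""
        instance_id = str(uuid.uuid4())
        self.instances[instance_id] = (tenant, size, time.time())
        return instance_id

    def deprovision(self, instance_id):
        self.instances.pop(instance_id, None)

    def chargeback(self, tenant):
        """Measured service: meter resource usage and report it per business unit."""
        now = time.time()
        return sum(
            self.HOURLY_RATES[size] * (now - started) / 3600
            for t, size, started in self.instances.values()
            if t == tenant
        )

portal = PrivateCloudPortal()
vm = portal.provision("finance", "small")
print(round(portal.chargeback("finance"), 4))  # near zero: just provisioned
```

The point of the sketch is that neither method depends on where the hypervisor sits – which is why these characteristics can be delivered privately as well as publicly.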

All of this is possible to achieve internally (i.e. privately), and it’s important to note that it’s no good just porting existing applications to a virtualised infrastructure – they need to be re-architected to take advantage of these characteristics. But I’m pretty sure there is more to private cloud than just virtualisation with a new name…

As for whether there is a long-term place for private cloud… that’s an entirely separate question!

Is technology at the heart of business, or is it simply an enabler?

I saw a video from Cisco this morning, and found it quite inspirational. The fact it’s from Cisco isn’t really relevant (indeed, if I showed it without the last few seconds you wouldn’t know) but it’s a great example of how IT is shaping the world that we live in – or, more precisely, how the world is shaping the direction that IT is taking:

In case you can’t see the video above, here are some of the key statistics it contains:

  • Humans created more data in 2009 alone than in all previous years combined.
  • Over the last 15 years, network speeds have increased 18 million times.
  • Information is moving to the cloud; 8/10 IT Managers plan to use cloud computing within the next 3 years.
  • By 2015, tools and automation will eliminate 25% of IT labour hours.
  • We’re using multiple devices: by 2015 there will be nearly one mobile-connected device for every person on earth.
  • 2/3 of employees believe they should be able to access information using company-issued devices at any time, at any location.
  • 60% believe they don’t need to be in an office to be productive.
  • This is creating entirely new forms of collaboration.
  • “The real impact of the information revolution isn’t about information management but on relationships; the ability to allow not dozens, or hundreds, but thousands of people to meaningfully interact” [Dr Michael Schrage, MIT].
  • By 2015 companies will generate 50% of web sales via their social presence and mobile applications.
  • Social business software will become a $5bn business by 2013.
  • Who sits at the centre of all this? Who is managing these exponential shifts? The CIO.

Some impressive numbers here – and, whilst we might expect to see many of these figures cited by a company selling social collaboration software and networking equipment, they are a good indication of the way things are heading. I would place more emphasis on empowered employees and customers redefining IT provisioning (BYO, for example); on everything as a service (XaaS) changing the IT delivery model; on the need for a new architecture to manage the “app Internet”; and on big data – which will be a key theme for the next few years.

Whatever the technologies underpinning the solution – the overall direction is for IT to provide business services that add value and enhance business agility rather than simply being part of “the cost of doing business”.

I think Cisco’s video does a rather good job of illustrating the change that is occurring but the real benefits come when we are able to use technology as an enabler for business services that create new opportunities, rather than responding to existing pressures.

I’d love to hear what our customers, partners and competitors think – is technology at the heart of the digital revolution, or is it simply an enabler for new business services?

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog and was written with assistance from Ian Mitchell.]


Cloud this, cloud that: frankly I’m tired of hearing about “the cloud” and, judging from the debate I’ve had on Twitter this afternoon, I’m not alone.

The trouble is that the term “cloud” has been abused and has become a buzzword (gamification is another – big data could be next…).

I don’t doubt the advantages of cloud computing – far from it – it’s a fantastically powerful business model and it’s incredibly disruptive in our industry. And, like so many disruptive innovations, organisations are faced with a choice – to adopt the disruptive technology or to try and move up the value chain. (Although, in this case, why not both? Adopt the disruptive tech and move up the value chain?)

My problem with cloud marketing is not so much about over-use of the term, it’s about the mis-use of it. And that’s confused the marketplace. There is a pretty good definition of cloud from the American National Institute of Standards and Technology (NIST) but it’s missing some key service models (data as a service, business process as a service) so vendors feel the need to define their own “extensions”.

My point is that cloud is about the business model, about how the service is provided, about some of the essential characteristics that provide flexibility in IT operation. That flexibility allows the business to become more responsive to change and, in turn, the CIO may more quickly deliver the services that the CEO asks of them.

It’s natural that business to business (B2B) service providers include cloud as a major theme in their marketing (indeed, in their continued existence as a business).  That’s because delivery of business services and the mechanisms used to ensure that the service is responsive to business needs (on demand self-service, broad network access, resource pooling, rapid elasticity, and measured service) are crucial. Unfortunately, “the cloud” has now crossed the divide into the business to consumer (B2C) space and that’s where it all starts to turn bad.

At the point where “the cloud” is marketed to consumers it is watered down to be meaningless (ignoring the fact that “the cloud” is actually many “clouds”, from multiple providers). So often “the cloud” is really just a service offered via the Internet. Consumers don’t care about “the cloud” – they just want their stuff, when they want it, where they want it, for as little financial outlay as possible. To use an analogy from Joe Baguley, Chief Cloud Technologist, EMEA at VMware – “you don’t market the electricity grid, you market the electricity and the service, not the infra[structure]”.

I’d like to suggest that marketing cloud to consumers is pointless and, ultimately, it’s diluting the real message: that cloud is a way of doing business, not about a particular technology. What do you think?

As Amazon fuels the fire, where are the networks to deliver our content?

Last week saw Amazon’s announcement of the Kindle Fire – a new tablet computer which marks the bookstore-turned-online-warehouse-turned-cloud-infrastructure-provider’s latest foray into the world of content provision and consumption. It’s not going to be available in the UK and Ireland for a while (some of the supporting services don’t yet exist here) but many technology commentators have drawn comparisons with the Apple iPad – the current tablet market leader (by a country mile). And, whilst there are similarities (both are tablets, both rely on content), they really do compete in different sectors.

Even as an iPad user, I can see the attractiveness of the Kindle Fire: If Amazon is able to execute its strategy (and all signs suggest they are), then they can segment the tablet market leaving Apple at the premium end and shifting huge volumes of low price devices to non-geeks. Note how Amazon has maintained a range of lower-price eInk devices too? That’s all about preserving and growing the existing user base – people who like reading, like the convenience of eBooks but who are not driven by technology.

At this point you’re probably starting to wonder why I’m writing this on a blog from a provider of enterprise IT systems and services. How is this really relevant to the enterprise? Actually, I think it is really relevant. I’ve written about the consumerisation of enterprise IT over and over (on this blog and elsewhere) but, all of a sudden, we’re not talking about a £650 iPad purchase (a major commitment for most people) but a sub-£200 tablet (assuming the Fire makes it to the UK). And that could well mark a tipping point where Android, previously largely confined to smartphones, is used to access other enterprise services.

I can think of at least one former CIO for whom that is a concern: the variety of Android platforms and the potential for malware is a significant security risk. But we can’t stick our heads in the sand, or ban Android devices – we need to find a way forward that is truly device and operating system agnostic (and I think that’s best saved as a topic for another blog post).

What the Apple iPad, Amazon Kindle Fire, and plethora of Google Android-powered devices have in common is their reliance on content. Apple and Amazon both have a content-driven strategy (Google is working on one) but how does that content reach us? Over the Internet.

And there stands a problem… outside major cities, broadband provision is still best described as patchy. There are efforts to improve this (including, but not exclusively, those which Fujitsu is taking part in) but 3G and 4G mobile networks are also part of the picture.

UK businesses and consumers won’t be able to fully benefit from new cloud-based tools until the UK has a nationwide, reliable, high-speed mobile data network – and a new paper, published today by the Open Digital Policy Organisation, suggests that the UK is at least two years behind major countries in its 4G rollout plans. Aside from the potential cost to businesses of £732m a year, we’re all consumers: downloading content to our Kindles and iPads, watching TV catch-up services like BBC iPlayer and 4oD, as well as video content from YouTube, Vimeo, etc. Add to that the networks of sensors that will drive the future Internet – and then consider that many businesses are starting to question the need for expensive wide area network connections when inexpensive public options are available… I think you get my point…

We live in a content-driven society – more and more so by the day… sadly it seems that the “information superhighway” is suffering from congestion and that may well stifle our progress.

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog.]

The future Internet and the Intelligent Society

Last week, I spent an evening with the British Computer Society’s Internet Specialist Group, where I’d been asked to present on where I see the Internet developing in future – an always-on, connected vision of joined-up services to deliver greater benefit across society.

I started out with a brief retrospective of the last 42 years of Internet development and a look at the way we use the Internet today, before I introduced the concept of human-centric computing and, in particular, citizen-centric computing as featured in Rebecca MacKinnon’s TED talk about the need to take back the Internet. This shows how we need any future Internet to evolve in a citizen-centric manner, building a world where government and technology serve people, and leads nicely into some of the concepts introduced in the Technology Strategy Board‘s Future Internet Report.

After highlighting the explosion in the volumes of data and the number of connected devices, I outlined the major enabling components for the future Internet – far more than just “bigger pipes”: a capable access mechanism; infrastructure for the personalisation of cloud services and for machine-to-machine (M2M) transactions; and, finally, convergence that delivers a transformational change in both public and private service delivery.

Our vision is The Intelligent Society: bringing physical and virtual worlds into harmony to deliver greater benefit across society. As consumerisation takes hold, technology is becoming more accessible, even commoditised in places, for delivery of on-demand, stateless services. Right now we have a “perfect storm”, where a number of technologies are maturing and falling into alignment to deliver our vision.

These technologies break down into: the devices (typically mobile) and sensors (for M2M communications); the networks that join devices to services; and the digital utilities that provide on demand computing and software resources for next-generation digital services. And digital utilities are more than just “more cloud” too – we need to consider interconnectivity between clouds, security provision and the compute power required to process big data to provide analytics and smart responses.

There’s more detail in the speaker notes on the deck (and I should probably write some more blog posts on the subject) but I finished up with a look at Technology Perspectives – a resource we’ve created to give a background context for strategic planning.

As we develop “the Internet of the future” we have an opportunity to deliver benefit, not just in terms of specific business problems, but on a wide scale that benefits entire populations. Furthermore, we’ve seen that changing principles and mindsets are creating the right conditions for these solutions to be incubated and developed alongside maturing technologies that enable this vision and make it a reality.

This isn’t sci-fi, this is within our reach. And it’s very exciting.

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog.]

Why “cloud” represents disruptive innovation – and the changes at HP are just the tip of the iceberg

Yesterday, I wrote a post about disruptive innovation, based on a book I’d been reading: The Innovator’s Dilemma, by Clayton M Christensen.

In that post, I asked whether cloud computing is sustaining or disruptive – and I said I’d come back and explain my thoughts.

In some ways, it was a trick question: cloud computing is not a technology; it’s a business model for computing. On that basis, cloud cannot be a sustaining technology. Even so, some of the technologies that are encompassed in providing cloud services are sustaining innovations – for example many of the improvements in datacentre and server technologies.

If I consider the fact that cloud is creating a new value network, it’s certainly disruptive (and it’s got almost every established IT player running around trying to find a new angle). What’s different about the cloud is that retrenching and moving up-market will only help so much – the incumbents need to switch tracks successfully (or face oblivion).

Some traditional software companies (e.g. Microsoft) are attempting to move towards the cloud but have struggled to move customers from one-off licensing to a subscription model. Meanwhile, new entrants (e.g. Amazon) have come from nowhere and taken the market for inexpensive infrastructure as a service by storm. As a consequence, the market has defined itself as several strata of infrastructure-, platform- and software- (data- and business process- too) as-a-service. Established IT outsourcers can see the threat that cloud offers, know that they need to be there, and are aggressively restructuring their businesses to achieve the low margins that are required to compete.

We only have to look at what’s happened at HP recently to see evidence of this need for change. Faced with two quarters of disappointing results, their new CEO had little choice but to make sweeping changes. He announced an exit from the device space and an acquisition of a leading UK software company. Crucially, that company will retain its autonomy, and not just in name (sorry, I couldn’t resist the pun) – allowing Autonomy to manage its own customers and grow within its own value network.

Only time will tell if HP’s bet on selling a profitable, market-leading hardware business in order to turn the company around in the face of cloud computing turns out to be a mistake. I can see why they are getting out of the device market – Lenovo may have announced an increase in profits but we should remember that Lenovo is IBM’s divested PC division, thriving in its own market, freed from the shackles of its previous owner and its high-margin values. Michael Dell may joke about naming HP’s spin-off “Compaq” but Dell needs to watch out too. PCs are not dying, but the market is not growing either. Apple makes more money from tablets and smartphones than from PCs (Macs). What seems strange to me is that HP didn’t find a buyer for its personal systems group before announcing its intended exit.

“If HP spins off their PC business… maybe they will call it Compaq?” [Michael Dell]

So, back to the point. Cloud computing is disruptive and established players have a right to be scared. Those providing technology for the cloud have less to worry about (notice that HP is retaining its enterprise servers and storage) but those of us in the managed services business could be in for a rough ride…

Cloud adoption “notes from the field”: people, politics and price

I’ve written about a few of the talks from the recent Unvirtual unconference, including Tim Moreton’s look at big data in the cloud and Justin McCormack’s ideas for re-architecting cloud applications. I’ve also written about a previous Joe Baguley talk on why the consumerisation of IT is nothing to do with iPads. The last lightning talk that I want to plagiarise (actually, I did ask all of the authors before writing about their talks!) is Simon Gallagher (@vinf_net)’s “notes from the field” talk on his experiences of implementing cloud infrastructure.

Understanding cloud isn’t about Amazon or Google

There is a lot happening in the private cloud space, and hybrid clouds are a useful model too: not everybody is a web 2.0 start-up with a green-field approach, and enterprises still want some of the capabilities offered by cloud technologies.

Simon suggests that private cloud is really just traditional infrastructure, with added automation/chargeback… for now. He sees technology from the public cloud filtering down to the private space (and hybrid is the reality for the short/medium term for any sizeable organisation with a legacy application set).

The real cloud wins for the enterprise are self-service, automation and chargeback, not burst/flex models.

There are three types of people that want cloud… and the one who doesn’t

Simon’s three types are:

  1. The boss from the Dilbert cartoons who has read a few too many analyst reports (…and is it done yet?)
  2. Smart techies who see cloud as a springboard to the next stage of their career (something new and interesting)
  3. Those who want to make an inherited mess somebody else’s problem

There is another type of person who doesn’t welcome cloud computing – people whose jobs become commoditised. I’ve been there – most of us have – but commoditisation is a fact of life and it’s important to act like the smart techies in the list above: embracing change, learning and developing new skills, rather than viewing the end of the world as nigh.

Then there are the politics

The first way to cast doubt on a cloud project is to tell everyone it’s insecure, right?


  • We trust our WAN provider’s MPLS cloud.
  • We trust our mail malware gateways (e.g. MessageLabs).
  • We trust our managed service providers’ staff.
  • We trust the third party tape collection services.
  • We trust our VPNs over the Internet.
  • We may already share datacentres with competitors.

We trust these services because we have technical and audit controls… the same goes for cloud services.

So, I just buy cloud products and start using them, yeah?

Cloud infrastructure is not about boxed products. There is no one-size-fits-all “Next, Next, Next, Finish” wizard; there are complex issues of people, process, technology, integration and operations.

It’s about applications, not infrastructure

Applications will evolve to leverage PaaS models and next-generation cloud architectures. Legacy applications will remain legacy – they can be contained by the cloud but not improved. Simple provisioning needs automation, coding, APIs. Meanwhile, we can allow self-service but it’s important to maintain control (we need to make sure that services are de-provisioned too).
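One way to keep that control is to attach a lease to everything that is self-provisioned, so that forgotten services are reclaimed automatically rather than left running forever. Here’s a minimal illustrative sketch (the names are invented; real tooling would call the virtualisation platform’s APIs to tear the VM down):

```python
import time

# Hypothetical sketch: every self-service request gets a time-to-live,
# and a periodic reaper de-provisions anything whose lease has lapsed.
class LeaseManager:
    def __init__(self, default_ttl=7 * 24 * 3600):
        self.default_ttl = default_ttl
        self.leases = {}  # vm name -> expiry timestamp

    def provision(self, name, ttl=None):
        self.leases[name] = time.time() + (ttl or self.default_ttl)

    def renew(self, name, extra=24 * 3600):
        """Let an owner extend a lease they still need."""
        self.leases[name] += extra

    def reap_expired(self, now=None):
        """De-provision expired services; return the names that were removed."""
        now = time.time() if now is None else now
        expired = [n for n, expiry in self.leases.items() if expiry <= now]
        for name in expired:
            del self.leases[name]  # real code would also destroy the VM
        return expired

mgr = LeaseManager()
mgr.provision("dev-box", ttl=3600)
print(mgr.reap_expired(now=time.time() + 7200))  # ['dev-box']
```

The design choice worth noting is that de-provisioning is the default, and keeping a service running requires an explicit renewal – the opposite of how most traditional infrastructure is run.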

Amazon is really inexpensive – and you want how much?

If you think you can build a private cloud (or get someone else to build you a bespoke cloud) for the prices charged by Amazon et al, you’re in for a shock. Off the shelf means economies of scale and, conversely, bespoke does not come cheap. Ultimately, large cloud providers diversify their risks (not everyone is using the service fully at any one time) and somebody is paying.
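A back-of-envelope comparison makes the point. All of the figures below are invented for illustration, but the shape of the result is what matters: a bespoke private build carries its full cost regardless of utilisation, whilst a public rate card scales with use:

```python
# Illustrative economics only - every number here is made up.
PUBLIC_RATE = 0.10          # $ per VM-hour on a public cloud rate card
PRIVATE_BUILD = 250_000     # capex for a bespoke private cloud
PRIVATE_RUN = 5_000         # monthly running cost of the private build
CAPACITY = 200              # VMs the private build can host
HOURS_PER_MONTH = 730

def monthly_cost(vms):
    """Return (public, private) monthly cost of hosting `vms` machines."""
    public = vms * HOURS_PER_MONTH * PUBLIC_RATE
    # amortise the capex over 36 months; the private build is paid for
    # whether it is fully used or not
    private = PRIVATE_BUILD / 36 + PRIVATE_RUN
    return public, private

for utilisation in (0.2, 0.6, 1.0):
    public, private = monthly_cost(int(CAPACITY * utilisation))
    print(f"{utilisation:.0%} utilised: public ${public:,.0f}/mo vs private ${private:,.0f}/mo")
```

At low utilisation the rate card wins comfortably; only a well-utilised private build gets close – which is exactly why the large providers, pooling many customers’ variable demand, can price the way they do.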


There’s a lot of talk about the move from capital expenditure to operating expenditure (“OpEx-ification”) but accountants don’t like variable costs. And cloud pricing is a rate card – it’s certainly not open book!

Meanwhile, the sales model is based on the purchase of commercial software (enterprises still don’t use open source extensively) and, whilst the public cloud implies the ability to flex up and down, private cloud can’t do this (you can’t take your servers back and say “we don’t need them this month”). It’s the same for software, with sales teams concentrating on licence sales rather than subscriptions.

In summary

Simon wrapped up by highlighting that, whilst the public cloud has its advantages, private and hybrid clouds are big opportunities today.

Successful implementation relies on:

  • Motivated people
  • A pragmatic approach to politics
  • An understanding of what you want (and how much you can pay for it)

Above all, Simon’s conclusion was that your mileage may vary!

Can we process “Big Data” in the cloud?

I wrote last week about one of the presentations I saw at the recent Unvirtual conference and this post highlights another one of the lightning talks – this time on a subject that was truly new to me: Big Data.

Tim Moreton (@timmoreton), from Acunu, spoke about using big data in the cloud: making it “elastic and sticky” and I’m going to try and get the key points across in this post. Let’s hope I get it right!

Essentially, “big data” is about collecting, analysing and servicing massive volumes of data.  As the Internet of things becomes a reality, we’ll hear more and more about big data (being generated by all those sensors) but Tim made the point that it often arrives suddenly: all of a sudden you have a lot of users, generating a lot of data.

Tim explained that key ingredients for managing big data are storage and compute resources but it’s actually about more than that: it’s not just any storage or compute resource because we need high scalability, high performance, and low unit costs.

Compute needs to be elastic so that we can fire up (virtual) cloud instances at will to provide additional resources for the underlying platform (e.g. Hadoop). Spot pricing, such as that provided by Amazon, allows a maximum price to be set, to process the data at times when there is surplus capacity.
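As a sketch of how spot pricing might be used for that kind of batch processing (the price series and maximum bid below are invented), the job simply runs in the hours when the going rate is at or below what we are prepared to pay for surplus capacity:

```python
# Hypothetical spot-style scheduling for a batch analytics job: only run
# when the current spot price is at or below our maximum bid.
MAX_BID = 0.08  # $/hour we are willing to pay for surplus capacity

def should_run(spot_price_history, max_bid=MAX_BID):
    """Return the hours in which our batch job would have been scheduled."""
    return [hour for hour, price in enumerate(spot_price_history)
            if price <= max_bid]

# Invented price series: capacity is cheap overnight, expensive in the day
prices = [0.05, 0.04, 0.06, 0.12, 0.15, 0.11, 0.07, 0.05]
print(should_run(prices))  # the cheap hours: [0, 1, 2, 6, 7]
```

The trade-off is latency for cost: work that can wait for cheap hours processes the same data for a fraction of the on-demand price.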

The trouble with big data and the cloud is virtualisation. Virtualisation is about splitting units of hardware to increase utilisation, with some overhead incurred (generally CPU or I/O) – essentially, many workloads are consolidated onto fewer machines. Processing big data works in the opposite direction, combining machines for massive parallelisation – and that doesn’t sit too well with cloud computing: at least, I’m not aware of too many non-virtualised elastic clouds!

Then, there’s the fact that data is decidedly sticky.  It’s fairly simple to change compute providers but how do you pull large data sets out of one cloud and into another? Amazon’s import/export involves shipping disks in the post!
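Some rough arithmetic (with illustrative numbers) shows just how sticky the data is – moving a large data set over a WAN can be dramatically slower than a courier:

```python
# Back-of-envelope: how long does it take to move a data set over a network?
def transfer_days(terabytes, megabits_per_second):
    bits = terabytes * 8 * 1e12          # decimal terabytes to bits
    seconds = bits / (megabits_per_second * 1e6)
    return seconds / 86400

# e.g. 50 TB over a dedicated 100 Mbit/s line, vs a courier taking ~2 days
print(f"{transfer_days(50, 100):.1f} days over the wire vs ~2 days by courier")
```

At 50TB the courier wins by well over a month – hence disks in the post.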

Tim concluded by saying that there is a balance to be struck. Cloud computing and big data are not mutually exclusive but it is necessary to account for the costs of storing, processing and moving the data. His advice was to consider the value (and the lock-in) associated with historical data, to process data close to its source, and to look for solutions that are built to span multiple datacentres.

[Update: for more information on “Big Data”, see Acunu’s Big Data Insights microsite]

IT and the law – is misalignment an inevitable consequence of progress?

Yesterday evening, I had the pleasure of presenting on behalf of the Office of the CTO to the Society for Computers and Law (SCL)‘s Junior Lawyers Group. It was a slightly unusual presentation in that David [Smith] often speaks to CIOs and business leaders, or to aspiring young people who will become the next generation of IT leaders. Meanwhile, I was given the somewhat daunting challenge of pitching a presentation to a room full of practising lawyers – all of whom work in the field of IT law but who had signed up for the event because they wanted to know more about the technology terms that they come across in their legal work. Because this was the SCL’s Junior Lawyers Group, I considered that most of the people in the room had grown up in a world of IT, so finding a level which was neither too technical nor too basic was my biggest issue.

My approach was to spend some time talking about the way we design solutions: introducing the basic concepts of business, application and technology architectures; talking about the need for clear and stated requirements (particularly non-functionals); the role of service management; and introducing concepts such as cloud computing and virtualisation.

Part way through, I dumped the PowerPoint (Dilbert fans may be aware of the danger that is “PowerPoint poisoning”) and went back to a flip chart to sketch out a view of why we have redundancy in our servers, networks, datacentres, etc. and to talk about thin clients, virtual desktops and other such terms that may come up in IT conversations.

Then, back to the deck to talk about where we see things heading in the near future before my slot ended and the event switched to an exercise in prioritising legal terms in an IT context.

I’m not sure how it went (it will be interesting to see the consolidated feedback from the society) but there was plenty of verbal feedback to suggest the talk was well received; I took some questions (always good to get some audience participation) and, judging from the frantic scribbling of notes at one table, I must have been saying something that someone found useful!

The main reason for this blog post is to highlight some of the additional material in the deck that I didn’t present last night.  There are many places where IT and the law are not as closely aligned as we might expect. Examples include:

These items could have been a whole presentation in themselves but I’m interested to hear what the readers of this blog think – are these really as significant as I suggest they are? Or is this just an inevitable consequence of fast-paced business and technology change rubbing up against a legal profession that’s steeped in tradition and takes time to evolve?

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog.]

Re-architecting for the cloud and lower costs

One of the presentations I saw at the recent London Cloud Camp (and again at Unvirtual) was Justin McCormack (@justinmccormack)’s lightning talk on “re-architecting for the ‘green’ cloud and lower costs” (is re-architecting a word? I don’t think so, but re-designing doesn’t mean the same in this context!).

Justin has published his slides; he’s looking at ways to increase the scalability of our existing cloud applications. One idea is to build out parallel computing systems with many power-efficient CPUs (e.g. ARM chips), but Amdahl’s law kicks in: the returns diminish as more processors are added, so building out alone offers no real performance boost – and no compelling argument.
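Amdahl’s law is easy to demonstrate: if p is the fraction of a workload that can be parallelised, the maximum speedup on n processors is 1 / ((1 − p) + p/n):

```python
# Amdahl's law: the serial fraction of a workload caps the speedup
# achievable by adding processors, however many you add.
def amdahl_speedup(parallel_fraction, processors):
    return 1 / ((1 - parallel_fraction) + parallel_fraction / processors)

# Even with 95% of the work parallelisable, the curve flattens quickly:
for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 1))
```

With 95% of the work parallelisable, 1,024 processors deliver less than a 20× speedup – the serial 5% dominates, which is why simply building out with many small CPUs doesn’t pay off.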

Instead, Justin argues that we currently write cloud applications that use a lot of memory (Facebook is understood to have around 200TB of memory cache). That’s because memory is fast and disk is slow. But with the advent of solid state devices we have something in between (that’s also low-power).

Instead of writing apps to live in huge RAM caches, we can use less memory and more flash drives. The model is not going to be suitable for all applications but it’s fine for “quite big data” – i.e. normal, medium-latency applications. A low-power cloud is potentially a low-cost middle ground with huge cost-saving potential, if we can write cloud applications accordingly.
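The idea can be sketched as a two-tier cache (a deliberately simplified illustration, not how any particular product works): a small, bounded RAM tier in front of a larger flash tier, with cold items demoted rather than thrown away:

```python
from collections import OrderedDict

# Toy two-tier cache: a strictly bounded RAM tier backed by a larger
# "flash" tier (a plain dict here stands in for an SSD-backed store).
class TieredCache:
    def __init__(self, ram_capacity):
        self.ram = OrderedDict()   # hot items, in least-recently-used order
        self.flash = {}            # cold items, cheaper but slower storage
        self.ram_capacity = ram_capacity

    def put(self, key, value):
        self.ram[key] = value
        self.ram.move_to_end(key)
        if len(self.ram) > self.ram_capacity:
            cold_key, cold_value = self.ram.popitem(last=False)
            self.flash[cold_key] = cold_value  # demote coldest item to flash

    def get(self, key):
        if key in self.ram:                    # fast path: RAM hit
            self.ram.move_to_end(key)
            return self.ram[key]
        return self.flash.get(key)             # slower, but far cheaper than RAM

cache = TieredCache(ram_capacity=2)
for k in ("a", "b", "c"):
    cache.put(k, k.upper())
print(sorted(cache.ram), sorted(cache.flash))  # ['b', 'c'] ['a']
```

The cost saving comes from shrinking the RAM tier: the working set stays fast, while the long tail lands on flash that costs far less per gigabyte and draws far less power.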

Justin plans to write more on the subject soon – keep up with his thoughts on the Technology of Content blog.