Tag Archives: Cloud computing


Short takes: Amazon Web Services 101, Adobe Marketing Cloud and Milton Keynes Geek Night (#MKGN)

What a crazy week. On top of a busy work schedule, I’ve also found myself at some tech events that really deserve a full write-up but, for now, will have to make do with a summary…

Amazon Web Services 101

One of the events I attended this week was a “lunch and learn” session to give an introduction/overview of Amazon Web Services – kind of like a breakfast briefing, but at a more sociable hour of the day!

I already blogged about Amazon’s reference architecture for utility computing but I wanted to mention Ryan Shuttleworth’s (@RyanAWS) explanation of how Amazon Web Services (AWS) came about.

Contrary to popular belief, AWS didn’t grow out of spare capacity in the retail business. It grew from building a service-oriented infrastructure for a scalable development environment: initially providing development services to internal teams, then exposing the Amazon catalogue as a web service. Over time, Amazon found that developers were hungry for more, and it moved towards the AWS mission to:

“Enable business and developers to use web services* to build scalable, sophisticated applications”

*What people now call “the cloud”

In fact, far from being the catalyst for AWS, Amazon’s retail business is just another AWS customer.

Adobe Marketing Cloud

Most people will be familiar with Adobe for their design and print products, whether that’s Photoshop, Lightroom, or a humble PDF reader. I was invited to attend an event earlier this week to hear about the Adobe Marketing Cloud, which aims to become for marketers what Creative Suite has become for design professionals. Whilst the use of “cloud” grates with me as a blatant abuse of a buzzword (if I’m generous, I suppose it is a SaaS suite of products…), Adobe has been acquiring companies (I think I heard $3bn mentioned as the total cost) and integrating their technology to create a set of analytics, social, advertising, targeting and web experience management solutions, topped off with a real-time dashboard.

Milton Keynes Geek Night

MK Geek Night #mkgn

The third event I attended this week was the quarterly Milton Keynes Geek Night (the third one to be held) – and it did not disappoint: it was well up to the standard I’ve come to expect from David Hughes (@DavidHughes) and Richard Wiggins (@RichardWiggins).

The evening kicked off with Dave Addey (@DaveAddey), of UK Train Times app fame, talking about what makes a good mobile app. Starting out from a 2010 Sunday Times article about the app gold rush, Dave explained why few people become smartphone app millionaires, and set out three questions to ask of any idea:

  • Is your mobile app idea really a good idea? (i.e. is it universal, is it international, and does it have lasting appeal – or, put bluntly, will you sell enough copies to make it worthwhile?)
  • Is it suitable to become a mobile app? (will it fill “dead time”, does it know where you go and use that to add value, is it “always there”, does it have ongoing use)
  • And how should you make it? (cross platform framework, native app, HTML, or hybrid?)

Dave’s talk warrants a blog post of its own – and hopefully I’ll return to the subject one day – but, for now, those are the highlights.

Next up were the 5 minute talks, with Matt Clements (@MattClementsUK) talking about empowering business with APIs to:

  1. Increase sales by driving traffic.
  2. Improve your brand awareness by working with others.
  3. Increase innovation, by allowing others to interface with your platform.
  4. Create partnerships, with symbiotic relationships to develop complementary products.
  5. Create satisfied customers, by focusing on the part you’re good at and letting others build on it with their expertise.

Then Adam Onishi (@OnishiWeb) gave a personal, and honest, talk about burnout, its effects, recognising the problem, and learning to deal with it.

And Jo Lankester (@JoSnow) talked about real-world responsive design and the lessons she has learned:

  1. Improve the process – collaborate from the outset.
  2. Don’t forget who you’re designing for – consider the users, in which context they will use a feature, and how they will use it.
  3. Learn to let go – not everything can be perfect.

Then, there were the usual one-minute slots from sponsors and others with a quick message, before the second keynote – from Aral Balkan (@Aral), talking about the high cost of free.

In an entertaining talk, loaded with sarcasm, profanity (used to good effect) but, most of all, intelligent insight, Aral explained the various business models we follow in the world of consumer technology:

  • Free – with consequential loss of privacy.
  • Paid – with consequential loss of audience (i.e. niche) and user experience.
  • Open – with consequential loss of good user experience, and a propensity to allow OEMs and operators to mess things up.

This was another talk that warrants a blog post of its own (although I’m told the session audio was recorded – so hopefully I’ll be able to put up a link soon) but Aral moved on to talk about a real alternative with mainstream consumer appeal that happens to be open. To achieve this, Aral says we need a revolution in open source culture in that open source and great user experience do not have to be mutually exclusive. We must bring design thinking to open source. Design-led open source.  Without this, Aral says, we don’t have an alternative to Twitter, Facebook, whatever-the-next-big-platform-is doing what they want to with our data. And that alternative needs to be open. Because if it’s just free, the cost is too high.

The next MK Geek Night will be on 21 March, and the date is already in my diary (just waiting for the Eventbrite notice!)

Photo credit: David Hughes, on Flickr. Used with permission.


[Amazon's] Reference architecture for utility computing

Earlier this week, I attended an Amazon Web Services (AWS) 101 briefing, delivered by Amazon UK’s Ryan Shuttleworth (@RyanAWS).  Although I’ve been watching the “Journey into the AWS cloud” series of webcasts too, it was a really worthwhile session and, when the videos are released to the web, well worth watching for an introduction to the AWS cloud.

One thing I particularly appreciate about Ryan’s presentations is that he approaches things from an architectural view. It’s a refreshing change from the evangelists I’ve met at other companies who generally market software by talking about features (maybe even with some design considerations/best practice or coding snippets) but rarely seem to mention reference architectures or architectural patterns.

During his presentation, Ryan presented a reference architecture for utility computing and, even though this version relates to AWS services, it’s a pretty good model for re-use (in fact, the beauty of such a reference architecture is that the contents of each box could be swapped out for other components without affecting the overall approach – maybe I should revisit this post and slot in the Windows Azure components!).

So, what’s in each of these boxes?

  • AWS global infrastructure: consists of regions to collate facilities, with availability zones that are physically separated, and edge locations (e.g. for content distribution).
  • Networking: Amazon provides Direct Connect (dedicated connection to AWS cloud) to integrate with existing assets over VPN Connections and Virtual Private Clouds (your own slice of networking inside EC2), together with Route 53 (a highly available and scalable global DNS service).
  • Compute: Amazon’s Elastic Compute Cloud (EC2) allows for the creation of instances (Linux or Windows) to use as you like, based on a range of instance types with different pricing – scaling up and down, even auto-scaling. Elastic Load Balancing allows the distribution of EC2 workloads across instances in multiple availability zones (see the code sketch after this list).
  • Storage: Simple Storage Service (S3) is the main storage service (Dropbox, Spotify and others run on this) – designed for write once, read many applications. Elastic Block Store (EBS) can be used to provide persistent storage behind an EC2 instance (e.g. a boot volume) and supports snapshotting, replicated within an availability zone (so no need to RAID). There’s also Glacier for long-term archival of data, AWS Import/Export for bulk uploads/downloads to/from AWS, and the AWS Storage Gateway to connect on-premises and cloud-based storage.
  • Databases: Amazon’s Relational Database Service (RDS) provides database as a service capabilities (MySQL, Oracle, or Microsoft SQL Server). There’s also DynamoDB – a provisioned throughput NoSQL database for fast, predictable performance (fully distributed and fault tolerant) and SimpleDB for smaller NoSQL datasets.
  • Application services: Simple Queue Service (SQS) for reliable, scalable message queuing for application decoupling; Simple Workflow Service (SWF) to coordinate processing steps across applications and to integrate AWS and non-AWS resources, managing distributed state in complex systems; CloudSearch – an elastic search engine based on Amazon’s A9 technology to provide auto-scaling and a sophisticated feature set (equivalent to SOLR); and CloudFront, a worldwide content delivery network (CDN) for easily distributing content to end users with a single DNS CNAME.
  • Deployment and admin: Elastic Beanstalk allows one-click deployment from Eclipse, Visual Studio and Git for rapid deployment of applications with all AWS resources auto-created; CloudFormation is a scripting framework for AWS resource creation that automates stack creation in a repeatable way. There’s also Identity and Access Management (IAM), software development kits, Simple Email Service (SES), Simple Notification Service (SNS), ElastiCache, Elastic MapReduce, and the CloudWatch monitoring framework.
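None of this was shown as code in the session but, to make the compute and storage pieces a little more concrete, here’s a minimal sketch using the boto3 Python SDK (a later SDK than anything Ryan demonstrated, and entirely my own illustration): launch an EC2 instance, then drop an object into S3. The AMI ID, key pair and bucket name are hypothetical placeholders, and it assumes AWS credentials are already configured locally.

```python
# Minimal sketch, not a deployment recipe. The AMI ID, key pair and
# bucket name are hypothetical placeholders; assumes AWS credentials
# are already configured (e.g. via environment variables).
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
s3 = boto3.client("s3", region_name="eu-west-1")

# Compute: launch a single on-demand EC2 instance
response = ec2.run_instances(
    ImageId="ami-12345678",   # placeholder machine image
    InstanceType="t2.micro",  # instance type determines pricing
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",     # placeholder key pair for SSH access
)
print("Launched", response["Instances"][0]["InstanceId"])

# Storage: put an object into S3 ("write once, read many")
s3.put_object(
    Bucket="my-example-bucket",  # placeholder bucket
    Key="hello.txt",
    Body=b"Hello from the AWS reference architecture",
)
```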

I suppose if I were to re-draw Ryan’s reference architecture, I’d include support (AWS Support) as well as some payment/billing services (after all, this doesn’t come for free) and the AWS Marketplace to find and start using software applications on the AWS cloud.

One more point: security and compliance (security and service management are not shown, as they are effectively layers that run through all of the components in the architecture). If you implement this model in the cloud, who is responsible? Well, if you contract with Amazon, they are responsible for the AWS global infrastructure and the foundation services (compute, storage, database, networking). Everything on top of that (the customisable parts) is up to the customer to secure. Other providers may take a different approach.


What-as-a-service?

I’ve written previously about the “cloud stack” of -as-a-service models but I recently saw Microsoft’s Steve Plank (@plankytronixx) give a great description of the differences between on-premise,  infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS).

Of course, this is a Microsoft view of the cloud computing landscape and I’ve had other discussions recently where people have argued the boundaries for IaaS or PaaS and confused things further by adding traditional web hosting services into the mix*.  Even so, I think the Microsoft description is a good starting point and it lines up well with the major cloud services offerings from competitors like Amazon and Google.

Not everyone will be familiar with this so I thought it was worth repeating Steve’s description here:

In an on-premise deployment, the owning organisation is responsible for (and has control over) the entire technology stack.

With infrastructure as a service, the cloud service provider manages the infrastructure elements: network, storage, servers and virtualisation. The consumer of the IaaS service will typically have some control over the configuration (e.g. creation of virtual networks, creating virtual machines and storage) but they are all managed by the cloud service provider.  The consumer does, however, still need to manage everything from the operating system upwards, including applying patches and other software updates.

Platform as a service includes the infrastructure elements, plus operating system, middleware and runtime elements. Consumers provide an application, configuration and data and the cloud service provider will run it, managing all of the IT operations including the creation and removal of resources. The consumer can determine when to scale the application up or out but is not concerned with how those instances are operated.

Software as a service provides a “full-stack” service, delivering application capabilities to the consumer, who only has to be concerned about their data.
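Steve’s layer-by-layer description lends itself to a tiny illustration. Here’s a small Python sketch (entirely my own, purely illustrative – the layer names follow the description above, not any official Microsoft taxonomy) that encodes who manages which layer under each model:

```python
# Illustrative only: encodes the provider/consumer split described above.
STACK = [
    "networking", "storage", "servers", "virtualisation",  # infrastructure
    "operating system", "middleware", "runtime",           # platform
    "application", "data",                                 # software
]

# How far up the stack the cloud service provider's responsibility reaches.
PROVIDER_MANAGES_UP_TO = {
    "on-premise": 0,  # the owning organisation manages everything
    "iaas": 4,        # provider: network, storage, servers, virtualisation
    "paas": 7,        # provider also manages OS, middleware and runtime
    "saas": 8,        # consumer is only really concerned with their data
}

def responsibilities(model: str) -> dict:
    """Return which layers the provider and the consumer each manage."""
    split = PROVIDER_MANAGES_UP_TO[model]
    return {"provider": STACK[:split], "consumer": STACK[split:]}

print(responsibilities("paas")["consumer"])  # ['application', 'data']
print(responsibilities("saas")["consumer"])  # ['data']
```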

Of course, each approach has its advantages and disadvantages:

  • IaaS allows for rapid migrations, as long as the infrastructure being moved to the cloud doesn’t rely on other components that surround it on-premise (even then, there may be opportunities to provide virtual networks and extend the on-premise infrastructure to the cloud). The downside is that many of the management issues persist as a large part of the stack is still managed by the consumer.
  • PaaS allows developers to concentrate on writing and packaging applications, creating a service model and leaving the underlying components to the cloud services provider. The main disadvantage is that the applications are written for a particular platform, so moving an application “between clouds” may require code modification.
  • SaaS can be advantageous because it allows for on-demand, subscription-based application use; however, consumers need to be sure that their data is not “locked in” and can be migrated to another service if required later.

Some organisations go further – for example, in the White Book of Cloud Adoption, Fujitsu wrote about Data as a Service (DaaS) and Business Process as a Service (BPaaS) – but IaaS, PaaS and SaaS are the commonly used models.  There are also many other considerations around data residency and other issues but they are outside the scope of this post. Hopefully though, it does go some way towards describing clear distinctions between the various -as-a-service models.

* Incidentally, I’d argue that traditional web hosting is not really a cloud service as the application delivery model is only part of the picture. If a web app is just running on a remote server it’s not really conforming with the broadly accepted NIST definition of cloud computing characteristics. There is a fine line though – and many hosting providers only need to make a few changes to their business model to start offering cloud services. I guess that would be an interesting discussion with the likes of Rackspace…


Tech.Days Online 2012: Day 1 (#TechDays2012)

For the last couple of years, I’ve been concentrating on IT strategy but I miss the hands-on technology. I’ve kind of lost touch with what’s been happening in my former world of Microsoft infrastructure and don’t even get the chance to write about what’s coming up in new releases, as the powers that be have decided my little blog is not on their radar (to be honest, I always suspected they had me mixed up with another Mark Wilson, who writes at Gizmodo!).

Anyway, I decided to dip into the pool again and see what Microsoft is up to in its latest releases, with two day-long virtual events under the Microsoft Tech.Days Online banner.

Presented by members of the UK evangelist team, Simon May (@simonster), Andrew Fryer (@DeepFat) and Steve Plank (@plankytronixx), day 1 focused on Windows Server and Azure, whilst day 2 will be about Windows 8 and System Center.

So, what did I learn?  Far too much for a single blog post, but here are the highlights from day 1…

Windows Server 2012

Windows Server 2012 looks to be a significant step forward from 2008 R2. The full list of what’s new is extensive but the main focus is on Microsoft’s “next generation” file server, management, virtualisation and networking:

  • “Next generation” file server. Ignore the next generation part – after all, it’s just marketing speak to make a file server sound interesting (some of us remember the early battles between Novell NetWare and Windows NT!) – but there are some significant improvements in Windows Server’s file capabilities.
  • When it comes to management:
    • Windows can be used to manage non-Windows environments and vice versa.  The details were pretty sketchy in yesterday’s event, but apparently Microsoft now understands that we all run heterogeneous environments!
    • Automation continues to be at the heart of the management story, with both DISM and PowerShell.
    • There’s a new version of PowerShell (v3), which promises to be more intuitive as a result of the Integrated Scripting Environment with IntelliSense, as well as adding robust sessions that persist across connection dropouts and even reboots, together with simple creation of parallel workflows. The good news (although you wouldn’t know it from yesterday’s session) is that PowerShell 3 is also available for Windows 7 and Server 2008 (SP2 or later).
    • Remote management is enabled by default.
    • Server Core is still there, but MinShell is another attempt to reduce the attack surface of Windows Server, providing GUI management tools, without a GUI, as described by Mitch Garvis.
  • Virtual machine mobility provides new scenarios for migrating resources around the enterprise:
    • Using shared storage with live migration now supporting VMs on non-clustered hosts (just on an SMB share).
    • By live migrating storage between hosts, moving the virtual disks attached to a running virtual machine from one location to another.
    • With shared-nothing live migration.
    • Using new Hyper-V Replica functionality to replicate virtual machines between sites, e.g. in a disaster recovery scenario.
    • There’s also a new VHDX format for larger virtual disks, released as an open specification.
  • Enhanced networking:
    • Windows Server now has built-in NIC teaming (load balancing/failover, or LBFO), described by Don Stanwyck in Yegal Edery’s post.
    • Network virtualisation allows the creation of a multi-tenant virtual network environment on top of the existing infrastructure, decoupling network and server configuration.

Windows Server 2012 is already on general release, with an evaluation edition available as an ISO or a VHD.

Windows Azure

Windows Azure has been around for a while, but back in my days as an MVP (and when running the Windows Server User Group with Mark Parris), I struggled to get someone at Microsoft to talk about it from an IT Pro perspective (lots of developer stuff, but nothing for the infrastructure guys). That changed when Steve Plank spent an entire afternoon on the topic yesterday.

In summary:

  • Windows Azure has always provided PaaS but it now has IaaS capabilities (although they don’t sound to be as mature as Amazon’s offerings, they might better suit some organisations).
  • When deploying to the cloud, the datacentre or affinity group is selected. Azure services are available in eight datacentres around the world: four in the US, two in Europe and two in Asia.
  • Applications are deployed to Azure using an XML service model.
  • Virtual machines in Azure differ from the cloud platform services in that they still require management (patching, etc.) at the operating system level. They may be deployed using a REST API, scripted (e.g. using PowerShell), or created inside a management portal (a minimal REST sketch follows this list).
  • Virtual hard disks may be uploaded to Azure (they are converted to BLOB storage), or new virtual machines may be created from a library, and it’s possible to capture virtual machines (while they are not running) as images for future deployment. Virtual machine images may also be copied from the cloud for on-premise deployment.
  • If two virtual machines are connected inside Azure, both are on the same network, which means they can connect to the same load balancer.
  • Virtual networks may be used to connect on premise networks to Windows Azure, or completely standalone Azure networks can be created (e.g. with their own DNS, Active Directory, etc.)
  • When using a virtual network inside Azure, there is no traditional DHCP service; instead, DIPs (dynamic IPs) are provided, although the operating system must still be configured to obtain an address via DHCP. Each service has a single IP address to connect to the Internet, with port forwarding used to access multiple hosts.
  • Inside Azure, operating system disks are cached (for performance) but data disks are not (for integrity). Consequently, when installing data-driven workloads (such as Active Directory), make sure the database is on a data disk.
  • Applications on Azure may be federated with on-premise infrastructure (e.g. Active Directory). Alternatively, a new service is currently in developer preview called the Windows Azure Active Directory. This differs significantly from the normal Active Directory role in Windows Server (which may also be deployed to a virtual machine on Azure) in that: it has a REST API (the Graph API), not an LDAP one; it does not use Kerberos; and it is accessed as an endpoint – i.e. individual instances are not exposed. Windows Azure Active Directory is related to the Office 365 Directory (indeed, logging on to the Windows Azure Active Directory preview shows me my Office 365 details).  Single sign on with Windows Azure Active Directory is described in detail in a post by Vittorio Bertocci.
  • Microsoft provides service level agreements for Azure availability, not for performance. These are based around fault domains and update domains.
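For the infrastructure guys who, like me, want to see what that REST API looks like in practice, here’s a minimal Python sketch of a call to the Azure Service Management API, listing the hosted services in a subscription. The subscription ID and certificate path are placeholders of my own; it assumes a management certificate has already been uploaded to the subscription, and I’m sketching from the public API documentation rather than anything shown in the session.

```python
# Minimal sketch, assuming a management certificate is already uploaded
# to the subscription. The subscription ID and certificate path are
# hypothetical placeholders.
import requests

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

response = requests.get(
    f"https://management.core.windows.net/{SUBSCRIPTION_ID}"
    "/services/hostedservices",
    headers={"x-ms-version": "2012-03-01"},  # Service Management API version
    cert="management-cert.pem",              # client certificate + key (PEM)
)
response.raise_for_status()
print(response.text)  # XML listing of hosted services in the subscription
```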

A Windows Azure pricing calculator is available, as is a 90-day free trial.

Photograph of Steve Plank taken from the TechNet UK Facebook page.


Journey through the Amazon Web Services cloud

Working for a large system integrator, I tend to find myself focused on our own offerings and somewhat isolated from what’s going on in the outside world. It’s always good to understand the competitive landscape though, and I’ve spent some time recently brushing up my knowledge of Amazon Web Services (AWS), which may come in useful as I’m thinking of moving some of my computing workloads to AWS. Amazon’s EMEA team are running a series of “Journey to the Cloud” webcasts at the moment and I’ve sat in on the first two sessions.

The next webcast in the series is focused on Storage and Archiving and it takes place next week (23 October). Based on the content of the first two, it should be worth an hour of my time, and maybe yours too?

 


Personal cloud: call it what you want, ignore it at your peril!

For about 18 months, one of the items on my “to do” list has been to write a paper about something called the “personal cloud”. It’s been slipping due to a number of other priorities but now, partly due to corporate marketing departments abusing the term to make it mean something entirely different, I’ve started to witness some revolt against what some see as yet another attempt at cloudwashing.

On the face of it, critics may have a point – after all, isn’t this just another example of someone making something up and making sure the name includes “cloud”? Well, when you look at what some vendors are doing, dressing up remote access solutions and adding a “cloud” moniker, then yes, personal cloud is nonsensical – but the whole point about a personal cloud is that it is not a one vendor solution – indeed a personal cloud is not even something that you can go out and buy.

I was chatting about this with a colleague, David Gentle (@davegentle), earlier and I think he explains the personal cloud concept really simply. Fundamentally, there are two principles:

  1. The personal cloud is the equivalent of what we might once have called personal productivity – the consumption of office applications, file storage and collaboration tools in a cloud-like manner. It’s more of a B2C concept than B2B but it is, perhaps, the B2C equivalent of an organisation consuming SaaS or IaaS services.
  2. Personal clouds become really important when you work with multiple devices. We’re all fine when we work on one device (e.g. a corporate laptop) but, once we add a smartphone, a tablet, etc. the experience of interacting and sharing between devices has real value. To give an example, Dropbox is a good method for sharing large files but it has a lot more value once it is used across several devices and the value is the user experience, rather than any one device-specific solution.

I expect to see the personal cloud rising above the (BYO) mobile device story as a major element of IT consumerisation (see my post from this time last year, based on Joe Baguley’s talk about the consumerisation of IT being nothing to do with iPads) because point solutions (like Dropbox, Microsoft OneNote and SkyDrive, Apple iCloud) are just the tip of the iceberg – the personal cloud has huge implications for IT service delivery. At some point, we will ask why we need some of our enterprise IT services – what do they actually do for us that a personal cloud, providing access to all of our data and services, doesn’t? (I seem to recall Joe exclaiming something similar – that corporate IT provides systems for timesheeting, expenses and free printing in the office!)

As for the “personal cloud” name – another colleague, Vin Hughes, did some research into the first reference to the term and found something remarkably similar (although not called the personal cloud) dating back to 1945 – Vannevar Bush’s “Memex”. If that’s stretching the point a little, how about when the BBC reported in 2002 on Microsoft’s plans for a personal online life archive? So, when was the “personal cloud” term coined? It would seem to be around 2008 – an MIT Technology Review post from December 2007 talks about how cloud computing services have the potential to alter the digital world (in a consumer context) but it doesn’t use the personal cloud term. One month later, however, a comment on a blog post about SaaS refers to “personal cloud computing”, albeit talking about provisioning personal servers, rather than consuming application and platform services as we do today (all that this represents is a move up the cloud stack, as we think less about hardware and operating systems and more about accessing data). So it seems that the “personal cloud” is not something that was dreamed up particularly recently…

So, why haven’t IT vendors been talking about this? Well, could it be that this is potentially a massive threat (maybe the largest) to many IT vendors’ businesses – the personal cloud is a very big disruptive trend in the enterprise space and, as Dave put it:

“@ Personal Cloud. Call it what you want, ignore it at your peril!”
– David Gentle (@davegentle)

 


Raspberry Pi: a case study for using cloud infrastructure?

In common with many thousands of geeks up and down the country, I set my alarm for just before 06:00 today for the big Raspberry Pi “announcement”.

The team at Raspberry Pi had done a great job of keeping the community informed – and I was really impressed that they gave everyone a chance to hear where to buy their miniature computers at the same time. Unfortunately the Raspberry Pi announcement didn’t quite have the intended result as it effectively “slashdotted” the websites for both of the distributors (RS Components and Farnell).

Whilst Raspberry Pi had moved their site to a static page in anticipation, the electronics retailers probably aren’t used to their products being in such demand and both buckled under the load.  Which left me wondering… I know Raspberry Pi’s goal is to support the UK electronics industry (hence the choice of distributors and not simply selling via Amazon.co.uk or similar) but surely this is a case study for how a cloud-based solution could have scaled to cope with demand? Perhaps by redirecting Raspberry Pi purchasers to a site that could scale (e.g. on Amazon Web Services), still fulfilled by RS and Farnell?
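To illustrate what I mean, here’s roughly what “a site that could scale” looks like in AWS terms, sketched with the boto3 Python SDK (my own illustration – the names, AMI and sizes are hypothetical placeholders): an Auto Scaling group that grows the web tier from two servers towards fifty as demand spikes.

```python
# Illustrative sketch only: names, AMI and sizes are hypothetical
# placeholders. Assumes AWS credentials are configured and the AMI
# contains the shop-front web server.
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

# Template describing the web servers to launch
autoscaling.create_launch_configuration(
    LaunchConfigurationName="shop-front-lc",
    ImageId="ami-12345678",  # placeholder web server image
    InstanceType="m1.small",
)

# Group that scales between 2 and 50 instances across two zones
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="shop-front-asg",
    LaunchConfigurationName="shop-front-lc",
    MinSize=2,
    MaxSize=50,
    AvailabilityZones=["eu-west-1a", "eu-west-1b"],
)

# Simple policy: add an instance each time the scale-out alarm fires
autoscaling.put_scaling_policy(
    AutoScalingGroupName="shop-front-asg",
    PolicyName="scale-out-on-demand",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
)
```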

“Grumpy with RS and Farrell for not taking #raspberrypi traffic warnings seriously. @ must be gutted, but stoked by interest!”
– Ed Gillett (@edgillett)

It didn’t help that the links given were to the main pages (not deep links). I got in during the first 5 minutes at RS and followed the instructions (“Search for Raspberry Pi, and then follow the normal shopping and checkout process.”) only to find that there was a “register your interest” page but no purchase option. A few minutes later, Raspberry Pi said on Twitter that was the wrong page, and I couldn’t find the right one from a site search. Later in the morning, reports on Twitter suggested that RS are not putting Raspberry Pi on sale until the end of the week…

“RS *not* sold out of Raspberry Pis - not opening sales until the end of the week. Hope they beef up their servers before then!”
– Gareth Halfacree (@ghalfacree)

[It now seems that this doesn’t fit with RS Components’ Raspberry Pi press release]

With the mainstream news sites and even breakfast TV now running Raspberry Pi stories, the interest will be phenomenal and I’m sure Raspberry Pi can sell many more devices than they can manufacture, but I can’t help feeling they’ve been badly let down by distributors who didn’t take their warnings seriously.

“We’re so frustrated about the DDOS effect - and apparently some of you are VERY ANGRY. We’re really sorry; it’s out of our hands.”
– Raspberry Pi (@Raspberry_Pi)

Or, as one open source advocate saw it:

“Absolutely fantastic! @ sale brings down #RS & #Farnell. Shows there’s massive demand for computers for learning. #opensource”

With any luck, my success at registering interest from RS at about 06:03 this morning will have worked. Failing that, I guess I’ll have to wait a little longer…

[Update 07:49 - added link to RS Components press release]


Is there such a thing as private cloud?

I had an interesting discussion with a colleague today, who was arguing that there is no such thing as private cloud – it’s just virtualisation, rebranded.

Whilst I agree with his sentiment (many organisations claiming to have implemented private clouds have really just virtualised their server estate), I do think that private clouds can exist.

Cloud is a new business model, but the difference between traditional hosting and cloud computing is more than just commercial. The NIST definition of cloud computing is becoming more and more widely accepted and it defines five essential characteristics, three service models and four deployment models.

The essential characteristics are:

  • “On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.
  • Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).
  • Resource pooling. The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, and network bandwidth.
  • Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.
  • Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.”

and NIST’s private cloud definition is:

“Private cloud. The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.”

If anything, the NIST definition is incomplete (it doesn’t recognise any service models beyond infrastructure-, platform- and software-as-a-service – I’d add business process as a service too) but the rest is pretty spot on.

Looking at each of the characteristics and comparing them to a simple virtualisation of existing IT:

  • On demand self service: virtualisation alone doesn’t cover this – so private clouds need to include another technology layer to enable this functionality.
  • Broad network access: nothing controversial there, I think.
  • Resource pooling: I agree, standard virtualisation functionality.
  • Rapid elasticity: this is where private cloud struggles against public (bursting to public via a hybrid solution might help, if feasible from a governance/security perspective) but, with suitable capacity management in place, private virtualised infrastructure deployments can be elastic.
  • Measured service: again, an additional layer of technology is required in order to provide this functionality – more than just a standard virtualised solution (a toy sketch of these extra layers follows this list).
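To make that concrete, here’s a deliberately toy Python sketch of the sort of layers that sit between bare virtualisation and a private cloud: an on-demand self-service request path with quota control, plus a usage meter. The platform class is a hypothetical stub of my own, not any real hypervisor API.

```python
# Toy illustration: the virtualisation platform below is a hypothetical
# stub, not a real hypervisor API. The point is the extra layers
# (self-service and metering) that sit on top of resource pooling.
import time


class VirtualisationPlatform:
    """Stand-in for the hypervisor layer (the resource pool)."""

    def __init__(self):
        self._next_id = 0

    def create_vm(self, cpus: int, ram_gb: int) -> str:
        self._next_id += 1
        return f"vm-{self._next_id}"


class PrivateCloudPortal:
    """On-demand self-service plus measured service, on top of the pool."""

    def __init__(self, platform: VirtualisationPlatform, quota_vms: int):
        self.platform = platform
        self.quota_vms = quota_vms  # per business unit
        self.usage_log = []  # (timestamp, business_unit, vm_id)

    def request_vm(self, business_unit: str, cpus: int, ram_gb: int) -> str:
        owned = [unit for _, unit, _ in self.usage_log if unit == business_unit]
        if len(owned) >= self.quota_vms:
            raise RuntimeError(f"{business_unit} has exhausted its quota")
        # No human in the loop: provisioning happens on demand
        vm_id = self.platform.create_vm(cpus, ram_gb)
        self.usage_log.append((time.time(), business_unit, vm_id))
        return vm_id


portal = PrivateCloudPortal(VirtualisationPlatform(), quota_vms=2)
print(portal.request_vm("finance", cpus=2, ram_gb=4))  # vm-1
print(portal.request_vm("finance", cpus=2, ram_gb=4))  # vm-2
# A third request from "finance" would now raise: quota exhausted.
```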

All of this is possible to achieve internally (i.e. privately), and it’s important to note that it’s no good just porting existing applications to a virtualised infrastructure – they need to be re-architected to take advantage of these characteristics. But I’m pretty sure there is more to private cloud than just virtualisation with a new name…

As for whether there is a long-term place for private cloud… that’s an entirely separate question!


Is technology at the heart of business, or is it simply an enabler?

I saw a video from Cisco this morning, and found it quite inspirational. The fact that it’s from Cisco isn’t really relevant (indeed, if I showed it without the last few seconds you wouldn’t know) but it’s a great example of how IT is shaping the world that we live in – or, more precisely, how the world is shaping the direction that IT is taking:

In case you can’t see the video above, here are some of the key statistics it contains:

  • Humans created more data in 2009 alone than in all previous years combined.
  • Over the last 15 years, network speeds have increased 18 million times.
  • Information is moving to the cloud; 8/10 IT Managers plan to use cloud computing within the next 3 years.
  • By 2015, tools and automation will eliminate 25% of IT labour hours.
  • We’re using multiple devices: by 2015 there will be nearly one mobile-connected device for every person on earth.
  • 2/3 of employees believe they should be able to access information using company-issued devices at any time, at any location.
  • 60% believe they don’t need to be in an office to be productive.
  • This is creating entirely new forms of collaboration.
  • “The real impact of the information revolution isn’t about information management but on relationships; the ability to allow not dozens, or hundreds, but thousands of people to meaningfully interact” [Dr Michael Schrage, MIT].
  • By 2015 companies will generate 50% of web sales via their social presence and mobile applications.
  • Social business software will become a $5bn business by 2013.
  • Who sits at the centre of all this? Who is managing these exponential shifts? The CIO.

Some impressive numbers here – and we might expect to see many of these figures cited by a company selling social collaboration software and networking equipment but they are a good indication of the way things are heading.  I would place more emphasis on empowered employees and customers redefining IT provisioning (BYO, for example); on everything as a service (XaaS) changing the IT delivery model; on the need for a new architecture to manage the “app Internet”; and on big data – which will be a key theme for the next few years.

Whatever the technologies underpinning the solution – the overall direction is for IT to provide business services that add value and enhance business agility rather than simply being part of “the cost of doing business”.

I think Cisco’s video does a rather good job of illustrating the change that is occurring but the real benefits come when we are able to use technology as an enabler for business services that create new opportunities, rather than responding to existing pressures.

I’d love to hear what our customers, partners and competitors think – is technology at the heart of the digital revolution, or is it simply an enabler for new business services?

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog and was written with assistance from Ian Mitchell.]


Cloudwashing

Cloud this, cloud that: frankly I’m tired of hearing about “the cloud” and, judging from the debate I’ve had on Twitter this afternoon, I’m not alone.

The trouble is that the term “cloud” has been abused and has become a buzzword (gamification is another – big data could be next…).

I don’t doubt the advantages of cloud computing – far from it – it’s a fantastically powerful business model and it’s incredibly disruptive in our industry. And, like so many disruptive innovations, organisations are faced with a choice – to adopt the disruptive technology or to try and move up the value chain. (Although, in this case, why not both? Adopt the disruptive tech and move up the value chain?)

My problem with cloud marketing is not so much the over-use of the term as the mis-use of it. And that’s confused the marketplace. There is a pretty good definition of cloud from the American National Institute of Standards and Technology (NIST) but it’s missing some key service models (data as a service, business process as a service) so vendors feel the need to define their own “extensions”.

My point is that cloud is about the business model, about how the service is provided, about some of the essential characteristics that provide flexibility in IT operation. That flexibility allows the business to become more responsive to change and, in turn, the CIO may more quickly deliver the services that the CEO asks of them.

It’s natural that business to business (B2B) service providers include cloud as a major theme in their marketing (indeed, in their continued existence as a business).  That’s because delivery of business services and the mechanisms used to ensure that the service is responsive to business needs (on demand self-service, broad network access, resource pooling, rapid elasticity, and measured service) are crucial. Unfortunately, “the cloud” has now crossed the divide into the business to consumer (B2C) space and that’s where it all starts to turn bad.

At the point where “the cloud” is marketed to consumers it is watered down to be meaningless (ignoring the fact that “the cloud” is actually many “clouds”, from multiple providers). So often “the cloud” is really just a service offered via the Internet. Consumers don’t care about “the cloud” – they just want their stuff, when they want it, where they want it, for as little financial outlay as possible. To use an analogy from Joe Baguley, Chief Cloud Technologist, EMEA at VMware – “you don’t market the electricity grid, you market the electricity and the service, not the infra[structure]”.

I’d like to suggest that marketing cloud to consumers is pointless and, ultimately, it’s diluting the real message: that cloud is a way of doing business, not about a particular technology. What do you think?
