Tag: Cloud computing

  • Why “cloud” represents disruptive innovation – and the changes at HP are just the tip of the iceberg

    Yesterday, I wrote a post about disruptive innovation, based on a book I’d been reading: The Innovator’s Dilemma, by Clayton M Christensen.

    In that post, I asked whether cloud computing is sustaining or disruptive – and I said I’d come back and explain my thoughts.

    In some ways, it was a trick question: cloud computing is not a technology; it’s a business model for computing. On that basis, cloud cannot be a sustaining technology. Even so, some of the technologies that are encompassed in providing cloud services are sustaining innovations – for example many of the improvements in datacentre and server technologies.

    If I consider the fact that cloud is creating a new value network, it’s certainly disruptive (and it’s got almost every established IT player running around trying to find a new angle). What’s different about the cloud is that retrenching and moving up-market will only help so much – the incumbents need to switch tracks successfully (or face oblivion).

    Some traditional software companies (e.g. Microsoft) are attempting to move towards the cloud but have struggled to move customers from one-off licensing to a subscription model. Meanwhile, new entrants (e.g. Amazon) have come from nowhere and taken the market for inexpensive infrastructure as a service by storm. As a consequence, the market has stratified into infrastructure-, platform- and software-as-a-service (with data- and business-process-as-a-service too). Established IT outsourcers can see the threat that cloud offers, know that they need to be there, and are aggressively restructuring their businesses to achieve the low margins that are required to compete.

    We only have to look at what’s happened at HP recently to see evidence of this need for change. Faced with two quarters of disappointing results, their new CEO had little choice but to make sweeping changes. He announced an exit from the device space and an acquisition of a leading UK software company. Crucially, that company will retain its autonomy, and not just in name (sorry, I couldn’t resist the pun) – allowing Autonomy to manage its own customers and grow within its own value network.

    Only time will tell if HP’s bet on selling a profitable, market-leading hardware business in order to turn the company around in the face of cloud computing turns out to be a mistake. I can see why they are getting out of the device market – Lenovo may have announced an increase in profits but we should remember that Lenovo is IBM’s divested PC division, thriving in its own market, freed from the shackles of its previous owner and its high-margin values. Michael Dell may joke about naming HP’s spin-off “Compaq” but Dell needs to watch out too. PCs are not dying, but the market is not growing either. Apple makes more money from tablets and smartphones than from PCs (Macs). What seems strange to me is that HP didn’t find a buyer for its personal systems group before announcing its intended exit.

    [blackbirdpie url="https://twitter.com/#!/MichaelDell/status/104266609316732928"]

    So, back to the point. Cloud computing is disruptive and established players have a right to be scared. Those providing technology for the cloud have less to worry about (notice that HP is retaining its enterprise servers and storage) but those of us in the managed services business could be in for a rough ride…

  • Cloud adoption “notes from the field”: people, politics and price

    I’ve written about a few of the talks from the recent Unvirtual unconference, including Tim Moreton’s look at big data in the cloud and Justin McCormack’s ideas for re-architecting cloud applications. I’ve also written about an earlier Joe Baguley talk on why the consumerisation of IT is nothing to do with iPads. The last lightning talk that I want to plagiarise (actually, I did ask all of the authors before writing about their talks!) is Simon Gallagher (@vinf_net)’s “notes from the field” talk on his experiences of implementing cloud infrastructure.

    Understanding cloud isn’t about Amazon or Google

    There is a lot happening in the private cloud space, and hybrid clouds are a useful model too: not everybody is a web 2.0 start-up with a green-field approach, and enterprises still want some of the capabilities offered by cloud technologies.

    Simon suggests that private cloud is really just traditional infrastructure, with added automation/chargeback… for now. He sees technology from the public cloud filtering down to the private space (and hybrid is the reality for the short/medium term for any sizeable organisation with a legacy application set).

    The real cloud wins for the enterprise are self-service, automation and chargeback, not burst/flex models.

    There are three types of people that want cloud… and the one who doesn’t

    Simon’s three types are:

    1. The boss from the Dilbert cartoons who has read a few too many analyst reports (…and is it done yet?)
    2. Smart techies who see cloud as a springboard to the next stage of their career (something new and interesting)
    3. Those who want to make an inherited mess somebody else’s problem

    There is another type of person who doesn’t welcome cloud computing – people whose jobs become commoditised. I’ve been there – most of us have – but commoditisation is a fact of life and it’s important to act like the smart techies in the list above: embracing change, learning and developing new skills, rather than viewing the end of the world as nigh.

    Then there are the politics

    The first way to cast doubt on a cloud project is to tell everyone it’s insecure, right?

    But:

    • We trust our WAN provider’s MPLS cloud.
    • We trust our mail malware gateways (e.g. MessageLabs).
    • We trust our managed service providers’ staff.
    • We trust the third party tape collection services.
    • We trust our VPNs over the Internet.
    • We may already share datacentres with competitors.

    We trust these services because we have technical and audit controls… the same goes for cloud services.

    So, I just buy cloud products and start using them, yeah?

    Cloud infrastructure is not about boxed products. There is no one-size-fits-all “Next, Next, Next, Finish” wizard; instead, there are complex issues of people, process, technology, integration and operations.

    It’s about applications, not infrastructure

    Applications will evolve to leverage PaaS models and next-generation cloud architectures. Legacy applications will remain legacy – they can be contained by the cloud but not improved. Simple provisioning needs automation, coding, APIs. Meanwhile, we can allow self-service but it’s important to maintain control (we need to make sure that services are de-provisioned too).
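
    To make the de-provisioning point concrete, here’s a minimal, purely illustrative Python sketch (mine, not Simon’s) of a self-service model in which every request carries an owner and a lease, so that an automated “reaper” can reclaim anything that outlives it. All of the names are hypothetical – in practice the stubbed calls would hit your cloud platform’s provisioning API.

    ```python
    from datetime import datetime, timedelta

    class SelfServiceCatalogue:
        """Toy model of a self-service portal: every VM request is leased, not granted forever."""

        def __init__(self):
            self.vms = {}       # vm name -> {"owner": ..., "expires": ...}
            self.next_id = 1

        def request_vm(self, owner, lease_days=30):
            """Provision a VM (stubbed here) and record who owns it and when its lease ends."""
            name = f"vm-{self.next_id:04d}"
            self.next_id += 1
            expires = datetime.utcnow() + timedelta(days=lease_days)
            self.vms[name] = {"owner": owner, "expires": expires}
            print(f"Provisioned {name} for {owner}; lease ends {expires:%Y-%m-%d}")
            return name

        def reap_expired(self, now=None):
            """The control half of self-service: de-provision anything whose lease has lapsed."""
            now = now or datetime.utcnow()
            for name, record in list(self.vms.items()):
                if record["expires"] < now:
                    print(f"De-provisioning {name} (owner: {record['owner']}) - lease expired")
                    del self.vms[name]  # in real life: call the platform's delete/teardown API

    if __name__ == "__main__":
        portal = SelfServiceCatalogue()
        portal.request_vm("alice", lease_days=7)
        portal.request_vm("bob", lease_days=90)
        # Simulate the reaper running a month later: alice's VM goes, bob's stays.
        portal.reap_expired(now=datetime.utcnow() + timedelta(days=30))
    ```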

    Amazon is really inexpensive – and you want how much?

    If you think you can build a private cloud (or get someone else to build you a bespoke cloud) for the prices charged by Amazon et al, you’re in for a shock. Off the shelf means economies of scale and, conversely, bespoke does not come cheap. Ultimately, large cloud providers diversify their risks (not everyone is using the service fully at any one time) and somebody is paying.

    Opexification

    There’s a lot of talk about the move from capital expenditure to operating expenditure (OpEx-ification) but accountants don’t like variable costs. And cloud pricing is a rate card – it’s certainly not open book!

    Meanwhile, the sales model is based on the purchase of commercial software (enterprises still don’t use open source extensively) and, whilst the public cloud implies the ability to flex up/down, private cloud can’t do this (you can’t take your servers back and say “we don’t need them this month”). It’s the same for software, with sales teams concentrating on licence sales rather than subscriptions.

    In summary

    Simon wrapped up by highlighting that, whilst the public cloud has its advantages, private and hybrid clouds are big opportunities today.

    Successful implementation relies on:

    • Motivated people
    • A pragmatic approach to politics
    • Understanding what you want (and how much you can pay for it)

    Above all, Simon’s conclusion was that your mileage may vary!

  • Can we process “Big Data” in the cloud?

    I wrote last week about one of the presentations I saw at the recent Unvirtual conference and this post highlights another one of the lightning talks – this time on a subject that was truly new to me: Big Data.

    Tim Moreton (@timmoreton), from Acunu, spoke about using big data in the cloud: making it “elastic and sticky” and I’m going to try and get the key points across in this post. Let’s hope I get it right!

    Essentially, “big data” is about collecting, analysing and serving massive volumes of data. As the Internet of things becomes a reality, we’ll hear more and more about big data (being generated by all those sensors) but Tim made the point that it often arrives suddenly: all at once, you have a lot of users, generating a lot of data.

    Tim explained that key ingredients for managing big data are storage and compute resources but it’s actually about more than that: it’s not just any storage or compute resource because we need high scalability, high performance, and low unit costs.

    Compute needs to be elastic so that we can fire up (virtual) cloud instances at will to provide additional resources for the underlying platform (e.g. Hadoop). Spot pricing, such as that provided by Amazon, allows a maximum price to be set, to process the data at times when there is surplus capacity.
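
    As an aside (this is my illustration, not part of Tim’s talk), a spot request really does boil down to naming your maximum price. A minimal sketch with Amazon’s boto3 SDK, using placeholder AMI and instance-type values, might look like this:

    ```python
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    # Bid for a batch of workers that only run while the spot price stays under our ceiling.
    response = ec2.request_spot_instances(
        SpotPrice="0.05",        # maximum price (USD per instance-hour) we are prepared to pay
        InstanceCount=10,        # enough workers for the batch-processing job
        Type="one-time",         # let the request lapse rather than persist when capacity vanishes
        LaunchSpecification={
            "ImageId": "ami-12345678",   # placeholder worker image
            "InstanceType": "m5.large",  # placeholder instance type
        },
    )

    for request in response["SpotInstanceRequests"]:
        print(request["SpotInstanceRequestId"], request["State"])
    ```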

    The trouble with big data and the cloud is virtualisation. Virtualisation is about splitting units of hardware to increase utilisation, with some overhead incurred (generally CPU or I/O) – essentially, many workloads are consolidated onto shared compute resources. Processing big data necessitates the opposite: combining machines for massive parallelisation – and that doesn’t sit too well with cloud computing; at least, I’m not aware of too many non-virtualised elastic clouds!

    Then, there’s the fact that data is decidedly sticky.  It’s fairly simple to change compute providers but how do you pull large data sets out of one cloud and into another? Amazon’s import/export involves shipping disks in the post!

    Tim concluded by saying that there is a balance to be struck. Cloud computing and big data are not mutually exclusive but it is necessary to account for the costs of storing, processing and moving the data. His advice was to consider the value (and the lock-in) associated with historical data, to process data close to its source, and to look for solutions that are built to span multiple datacentres.

    [Update: for more information on “Big Data”, see Acunu’s Big Data Insights microsite]

  • IT and the law – is misalignment an inevitable consequence of progress?

    Yesterday evening, I had the pleasure of presenting on behalf of the Office of the CTO to the Society for Computers and Law (SCL)‘s Junior Lawyers Group. It was a slightly unusual presentation in that David [Smith] often speaks to CIOs and business leaders, or to aspiring young people who will become the next generation of IT leaders. Meanwhile, I was given the somewhat daunting challenge of pitching a presentation to a room full of practising lawyers – all of whom work in the field of IT law but who had signed up for the event because they wanted to know more about the technology terms that they come across in their legal work. Because this was the SCL’s Junior Lawyers Group, I considered that most of the people in the room had grown up in a world of IT, so finding a level which was neither too technical nor too basic was my biggest issue.

    My approach was to spend some time talking about the way we design solutions: introducing the basic concepts of business, application and technology architectures; talking about the need for clear and stated requirements (particularly non-functionals); the role of service management; and introducing concepts such as cloud computing and virtualisation.

    Part way through, I dumped the PowerPoint (Dilbert fans may be aware of the danger that is “PowerPoint poisoning”) and went back to a flip chart to sketch out a view of why we have redundancy in our servers, networks, datacentres, etc. and to talk about thin clients, virtual desktops and other such terms that may come up in IT conversations.

    Then, back to the deck to talk about where we see things heading in the near future before my slot ended and the event switched to an exercise in prioritising legal terms in an IT context.

    I’m not sure how it went (it will be interesting to see the consolidated feedback from the society) but there was plenty of verbal feedback to suggest the talk was well received: I took some questions (always good to get some audience participation) and, judging from the frantic scribbling of notes at one table, I must have been saying something that someone found useful!

    The main reason for this blog post is to highlight some of the additional material in the deck that I didn’t present last night.  There are many places where IT and the law are not as closely aligned as we might expect. Examples include:

    These items could have been a whole presentation in themselves but I’m interested to hear what the readers of this blog think – are these really as significant as I suggest they are? Or is this just an inevitable consequence of  fast-paced business and technology change rubbing up against a legal profession that’s steeped in tradition and takes time to evolve?

    [This post originally appeared on the Fujitsu UK and Ireland CTO Blog.]

  • Re-architecting for the cloud and lower costs

    One of the presentations I saw at the recent London Cloud Camp (and again at  Unvirtual) was Justin McCormack (@justinmccormack)’s lightning talk on “re-architecting for the ‘green’ cloud and lower costs” (is re-architecting a word? I don’t think so but re-designing doesn’t mean the same in this context!).

    Justin has published his slides but, in short, he’s looking at ways to increase the scalability of our existing cloud applications. One idea is to build out parallel computing systems with many power-efficient CPUs (e.g. ARM chips), but Amdahl’s law kicks in so there is no real performance boost from building out – in fact, the line is almost linear, so there is no compelling argument.
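
    For context (my addition, not Justin’s): Amdahl’s law says that if a fraction s of a workload is inherently serial, the best possible speed-up from parallelising the rest on n processors is 1/(s + (1 − s)/n), which can never exceed 1/s. A quick calculation shows how fast the returns diminish:

    ```python
    def amdahl_speedup(serial_fraction, n_processors):
        """Amdahl's law: overall speed-up with n processors when part of the work stays serial."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

    # Even with only 5% serial work, piling on more (low-power) cores soon stops paying off:
    # the ceiling is 1 / 0.05 = 20x, however many processors we add.
    for n in (2, 8, 32, 128, 1024):
        print(f"{n:>5} processors -> speed-up {amdahl_speedup(0.05, n):5.2f}")
    ```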

    Instead, Justin argues that we currently write cloud applications that use a lot of memory (Facebook is understood to have around 200TB of memory cache). That’s because memory is fast and disk is slow. But with the advent of solid state devices we have something in between (that’s also low-power).

    Instead of writing apps to live in huge RAM caches, we can use less memory, and more flash drives. The model is not going to be suitable for all applications but it’s fine for “quite big data” – i.e. normal, medium-latency applications. A low-power cloud is potentially a low-cost middle ground with huge cost-saving potential, if we can write cloud applications accordingly.

    Justin plans to write more on the subject soon – keep up with his thoughts on the  Technology of Content blog.

  • Will commoditisation drive us all to the public cloud (eventually)?

    Tomorrow night, it’s CloudCamp London, which has prompted me to write a post based on one of the presentations from the last event in March.  I already wrote up Joe Baguley’s talk on why the consumerisation of IT is nothing to do with iPads but I also wanted to mention Simon Wardley (from the CSC Leading Edge Forum)’s introduction to CloudCamp.

    As it happens, Simon already wrote a blog post that looks at the topic he covered (private vs. enterprise clouds); the key points from his CloudCamp introduction were as follows:

    • The basic principle is that, eventually, services trend towards utility services/commodities. There are some barriers to overcome along the way but commoditisation will always come.
    • One interesting phenomenon to note is the Jevons Paradox, whereby, as technology progresses and efficiency of resource usage rises, so does the rate of consumption. So, that kills off the theory that the move to cloud will decrease IT budgets!
    • For cloud purists, only a public cloud is really “cloud computing” but Simon talked about a continuum from legacy datacentres to “the cloud”. Hybrid clouds have a place in mitigating transitional risk.
    • Our legacy architectures leave us with a (legacy) problem. First came N+1 resilience, but then we got better hardware; then we scaled out and designed for failure (e.g. API calls to rebuild virtual machines – see the sketch after this list), using software and “good enough” components.
    • Using cloud architectures and resilient virtual machines we invented “the enterprise cloud”, sitting somewhere between a traditional datacentre and the public cloud.
    • But we need to achieve greater efficiencies – to do more, faster (even if the overall budget doesn’t increase due to the Jevons Paradox). To drive down the costs of providing each virtual machine (i.e. each unit of scale) we trade disruption and risk against operational efficiency. That drives us towards the public cloud.
    • In summary, Simon suggests that public utility markets are the future, with hybrid environments as a transition strategy. Enterprise clouds should be expected to trend towards niche roles (e.g. to deliver demanding service level agreements or to meet specific security requirements) whilst increasing portability between clouds makes competing public cloud offerings more attractive.
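
    Picking up the “design for failure” point from the list above: rather than buying ever more resilient hardware, the tooling simply detects a dead instance and rebuilds it through the provider’s API. A minimal sketch (my own illustration, using boto3 against EC2; the image and instance type are placeholders) might look like this:

    ```python
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    # Placeholder values for illustration only.
    WORKER_IMAGE = "ami-12345678"
    WORKER_TYPE = "t3.small"

    def rebuild_if_unhealthy(instance_id):
        """Design for failure: if an instance is impaired, throw it away and launch a replacement."""
        statuses = ec2.describe_instance_status(
            InstanceIds=[instance_id], IncludeAllInstances=True
        )["InstanceStatuses"]

        healthy = bool(statuses) and statuses[0]["InstanceStatus"]["Status"] == "ok"
        if healthy:
            return instance_id

        ec2.terminate_instances(InstanceIds=[instance_id])
        replacement = ec2.run_instances(
            ImageId=WORKER_IMAGE, InstanceType=WORKER_TYPE, MinCount=1, MaxCount=1
        )["Instances"][0]["InstanceId"]
        return replacement
    ```
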
  • An alternative enterprise desktop

    Earlier this week, I participated in an online event that looked at the use of Linux (specifically Ubuntu) as an alternative to Microsoft Windows on the enterprise desktop.

    It seems that every year is touted as the year of Linux on the desktop – so why hasn’t it happened yet? Or maybe 2011 really is the year of Linux on the desktop and we’ll all be using Google Chrome OS soon. Somehow I don’t think so.

    You see, the trouble with any of the “operating system wars” arguments is that they miss the point entirely. There is a trinity of people, process and technology at stake – and the operating system is just one small part of one of those elements. It’s the same when people start to compare desktop delivery methods – thick, thin, virtualised, whatever – it’s how you manage the desktop that counts.

    From an end user perspective, many users don’t really care whether their PC runs Windows, Linux, or whatever-the-next-great-thing-is. What they require (and what the business requires – because salaries are expensive) is a system that is “usable”. Usability is in itself a subjective term, but that generally includes a large degree of familiarity – familiarity with the systems that they use outside work. Just look at the resistance to major user interface changes like the Microsoft Office “ribbon” – now think what happens when you change everything that users know about using a PC. End users also want something that works with everything else they use (i.e. an integrated experience, rather than jumping between disparate systems). And those who are motivated by the technology don’t want to feel that there is a two-tier system whereby some people get a fully-featured desktop experience and others get an old, cascaded PC with a “light” operating system on it.

    From an IT management standpoint, we want to reduce costs. Not just hardware and software costs but the costs of support (people, process and technology). A “free” desktop operating system is just a very small part of the mix; supporting old hardware gets expensive; and the people costs associated with major infrastructure deployments (whether that’s a virtual desktop or a change of operating system) can be huge. Then there’s application compatibility – probably the most significant headache in any transformation. Yes, there is room for a solution that is “fit for purpose” and that may not be the same solution for everyone – but it does still need to be manageable – and it needs to meet all of the organisation’s requirements from a governance, risk and compliance perspective.

    Even so, the days of allocating a Windows PC to everyone in an effort to standardise every single desktop device are starting to draw to a close. IT consumerisation is bringing new pressures to the enterprise – not just new device classes but also a proliferation of operating system environments. Cloud services (for example consuming software as a service) are a potential enabler – helping to get over the hurdles of application compatibility by boiling everything down to the lowest common denominator (a browser). The cloud is undoubtedly here to stay and will certainly evolve but even SaaS is not as simple as it sounds with multiple browser choices, extensions, plug-ins, etc. It seems that, time and time again, it’s the same old legacy applications (generally specified by business IT functions, not corporate IT) that make life difficult and prevent the CIO from achieving the utopia that they seek.

    2011 won’t be the year of Linux on the desktop – but it might just be the year when we stopped worrying about standardisation so much; the year when we accepted that one size might not fit all; and the year when we finally started to think about applications and data, rather than devices and operating systems.

    [This post originally appeared on the Fujitsu UK and Ireland CTO Blog.]

  • Azure Connect – the missing link between on-premise and cloud

    Azure Connect offers a way to connect on-premise infrastructure with Windows Azure but it’s lacking functionality that may hinder adoption.

    While Microsoft is one of the most dominant players in client-server computing, until recently its position in the cloud seemed uncertain. Now we’ve seen Microsoft lay out its stall, with both Software as a Service (SaaS) products including Office 365 and Platform as a Service (PaaS) offerings such as Windows Azure joining its traditional portfolio of on-premise products for consumers, small businesses and enterprise customers alike.

    Whereas Amazon’s Elastic Compute Cloud (EC2) and Simple Storage Service (S3) offer virtualised Infrastructure as a Service (IaaS) and Salesforce.com is about consumption of Software as a Service (SaaS), Windows Azure fits somewhere in between. Azure offers compute and storage services, so that an organisation can take an existing application, wrap a service model around it and specify how many instances to run, how to persist data, etc.

    Microsoft also provides middleware to support claims based authentication and an application fabric that allows simplified connectivity between web services endpoints, negotiating firewalls using outbound connections and standard Internet protocols. In addition, there is a relational database component (SQL Azure), which exposes relational database services for cloud consumption, in addition to the standard Azure table storage.

    It all sounds great – but so far everything I’ve discussed runs on a public cloud service and not all applications can be moved in their entirety to the cloud.

    Sometimes it makes sense to move compute operations to the cloud and keep the data on-premise (more on that in a moment). Sometimes, it’s appropriate to build a data hub, with multiple business partners connecting to a data source in the cloud but with application components in a variety of locations.

    For European CIOs, information security, in particular data residency, is a real issue. I should highlight that I’m not a legal expert, but CIO Magazine recently reported how the Patriot Act potentially gives the United States authorities access to data hosted with US-based service providers – and selecting a European data centre won’t help.  That might make CIOs nervous about placing certain types of data in the cloud although they might consider a hybrid cloud solution.

    Azure already provides federated security, application layer connectivity (via AppFabric) and some options for SQL Azure data synchronisation (currently limited to synchronisation between Microsoft data centres, expanding later this year to include synchronisation with on-premise SQL Server) but the missing component has been the ability to connect Windows Azure with on-premise infrastructure and applications. Windows Azure Connect provides this missing piece of the jigsaw.

    Azure Connect is a new component for Windows Azure that provides secure network communications between compute instances in Azure and servers on-premise (i.e. behind the corporate firewall). Using standard IP protocols (both TCP and UDP) it’s possible to take a web front end to the cloud and leave the SQL Server data on site, communicating over a virtual private network, secured with IPSec. In another scenario, a compute instance can be joined to an on-premise Active Directory domain so a cloud-based application can take advantage of single sign-on functionality. IT departments can also use Azure Connect for remote administration and troubleshooting of cloud-based computing instances.

    Azure Connect is currently in pre-release form, with Microsoft planning to make it generally available during the first half of 2011. Whilst setup is relatively simple and requires no coding, Azure Connect is reliant on an agent running on the connected infrastructure (i.e. on each server that connects to Azure resources) in order to establish IPSec connectivity (a future version of Azure Connect will be able to take advantage of other VPN solutions). Once the agent is installed, the server automatically registers itself with the Azure Connect relay in the cloud and network policies are defined to manage connectivity. All that an administrator has to do is to enable Windows Azure roles for external connectivity via the service model; enable local computers to initiate an IPSec connection by installing the Azure Connect agent; define network policies and, in some circumstances, define appropriate outbound firewall rules on servers.

    The emphasis on simplicity is definitely an advantage: many Azure operations seem to require developer knowledge, whereas this is clearly targeted at Windows administrators. Along with automatic IPSec provisioning (so no need for certificate servers), Azure Connect makes use of DNS so that there is no requirement to change application code (the same server names can be used when roles move between the on-premise infrastructure and Azure).
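
    To illustrate the “no code changes” point (a hypothetical example of mine, not Microsoft’s): a web role that used to run on-premise can keep addressing the database server by its existing name once Azure Connect is in place and, if the role has been joined to the on-premise Active Directory domain, it can even keep using integrated Windows authentication. In Python with pyodbc, that might look something like this (the server, database and table names are placeholders):

    ```python
    import pyodbc

    # The same server name works on-premise and from an Azure Connect-enabled role,
    # because Azure Connect resolves it over the IPSec tunnel.
    CONNECTION_STRING = (
        "DRIVER={SQL Server};"
        "SERVER=onpremsql01;"        # hypothetical on-premise SQL Server host name
        "DATABASE=Orders;"           # hypothetical database
        "Trusted_Connection=yes;"    # integrated auth, possible once the role is domain-joined
    )

    connection = pyodbc.connect(CONNECTION_STRING)
    cursor = connection.cursor()
    cursor.execute("SELECT TOP 5 OrderId, OrderDate FROM dbo.Orders ORDER BY OrderDate DESC")
    for order_id, order_date in cursor.fetchall():
        print(order_id, order_date)
    connection.close()
    ```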

    For some organisations though, the presence of the Azure Connect agent may be seen as a security issue – after all, how many database servers are even Internet-connected? That’s not insurmountable but it’s not the only issue with Azure Connect.

    For example, connected servers need to run Windows Vista, 7, Server 2008, or Server 2008 R2 [a previous version of this story erroneously suggested that only Windows Server 2008 R2 was supported] and many organisations will be running their applications on older operating system releases. This means that there may be server upgrade costs to consider when integrating with the cloud – and it certainly rules out any heterogeneous environments.

    There’s an issue with storage, too. Windows Azure’s basic compute and storage services can make use of table-based storage. Whilst SQL Azure is available for applications that require a relational database, not all applications have this requirement – and SQL Azure presents additional licensing costs as well as imposing additional architectural complexity. A significant number of cloud-based applications make use of table storage, or a combination of table storage and SQL Server – for them, creating a hybrid model for customers that rely on on-premise data storage may not be possible.

    For many enterprises, Azure Connect will be a useful tool in moving applications (or parts of applications) to the cloud. If Microsoft can overcome the product’s limitations, it could represent a huge step forward for Microsoft’s cloud services in that it provides a real option for developing hybrid cloud solutions on the Microsoft stack, but there is still some way to go.

    [This post was originally written as an article for Cloud Pro.]

  • Why the consumerisation of IT is nothing to do with iPads

    Last week, I wrote a post on the Fujitsu UK and Ireland CTO Blog about the need to adapt and evolve, or face extinction (in an IT context). IT consumerisation was a key theme of that post and, the next evening, at my first London Cloud Camp, I found myself watching Joe Baguley (EMEA CTO at Quest Software) giving a superb 5-minute presentation on “How the public cloud is exciting CEOs and scaring CIOs; IT consumerisation is here to stay” – and I’ve taken the liberty (actually, I did ask first) of reciting the key points in this post.

    Joe started out by highlighting that, despite what you might read elsewhere (and I have to admit I’ve concentrated a little too heavily on this) the consumerisation of IT is not about iPads, iPhones or other such devices – it’s a lot bigger than that.

    In the “old days” (pre-1995) companies had entities called “users” and, from an IT perspective, those users did as they were told – making use of the hardware and software that the IT department provided. Anything outside this tended to fall foul of the “culture of no”, as it was generally either too expensive or against security policy.

    Today, things have moved along and those same users are now “consumers”. They have stepped outside the organisation and the IT department is a provider of “stuff”, just like Dropbox, GMail, Facebook, Twitter, Betfair and their bank.

    Dropbox is a great example – it’s tremendously easy to use to share files with other people, especially when compared with a file server or SharePoint site with their various security restrictions, browser complexities and plugins.

    If you’re not convinced about the number of systems we use, think back to the early 1990s, when we each had credentials for just a handful of systems; now we use password managers (I use LastPass) to manage our logons for systems that may be for work, or not. For many of us, the most useful services that the company provides are email, calendaring, and free printing when we’re in the office!

    So, how does a CIO cope with this? Soon there will be no more corporate LANs – and where does that leave the internal IT department? Sure, we can all cite cloud security issues but, as Joe highlighted in his talk, if Dropbox had a security breach it would be all over Twitter in a few minutes and they would be left with a dead business model – so actually it’s the external providers that have the most to lose.

    CIOs have to compete with external providers. Effectively they have a choice: to embrace cloud applications; or to build their own internal services (with the main advantage being that, when they break, you can get people in a room and work to get them fixed).

    Ultimately, CIOs just want platforms upon which to build services. And that’s why we need to stop worrying about infrastructure, and work out how we can adopt Platform as a Service (PaaS) models to best suit the needs of our users. Ah yes, users, which brings me back to where I started.

  • Adapt, evolve, innovate – or face extinction

    I’ve written before (some might say too often) about the impact of tablet computers (and smartphones) on enterprise IT. This morning, Andy Mulholland, Global CTO at Capgemini, wrote a blog post that grabbed my attention, when he posited that tablets and smartphones are the disruptive change lever that is required to drive a new business technology wave.

    In the post, he highlighted the incredible increase in smartphone and tablet sales (also the subject of an article in The Economist which looks at how Dell and HP are reinventing themselves in an age of mobile devices, cloud computing and “verticalisation”), that Forrester sees 2011 as the year of the tablet (further driving IT consumerisation), and that this current phase of disruption is not dissimilar to the disruption brought about by the PC in the 1980s.

    Andy then goes on to cite a resistance to user-driven adoption of [devices such as] tablets and XaaS [something-as-a-service] but it seems to me that it’s not CIOs that are blocking either tablets/smartphones or XaaS.

    CIOs may have legitimate concerns about security, business case, or unproven technology – i.e. where is the benefit? And for which end-user roles? – but many CIOs have the imagination to transform the business; they just have other programmes that are taking priority.

    With regards to tablets, I don’t believe it’s the threat to traditional client-server IT that’s the issue, more that the current tranche of tablet devices are not yet suitable to replace PCs. As for XaaS (effectively cloud computing), somewhat ironically, it’s some of the IT service providers who have the most to lose from the shift to the cloud: firstly, there’s the issue of “robbing Peter to pay Paul” – eroding existing markets to participate in this brave new world of cloud computing; secondly it forces a move from a model that provides a guaranteed revenue stream to an on-demand model, one that involves prediction – and uncertainty.

    Ultimately it’s about evolution – as an industry we all have to evolve (and innovate), to avoid becoming irrelevant, especially as other revenue streams trend towards commoditisation.

    Meanwhile, both customers and IT service providers need to work together on innovative approaches that allow us to adapt and use technologies (of which tablets and XaaS are just examples) to disrupt the status quo and drive through business change.

    [This post originally appeared on the Fujitsu UK and Ireland CTO Blog.]