Waffle and randomness

Miscellaneous painting and decorating tips

This week, I’m between jobs (technically, I’m on holiday from Fujitsu, but I’ve already worked my last day there). I was going to spend time sorting out the myriad things that never get done in my home office but, unfortunately, some decorating (ahead of a replacement bathroom) has got in the way – and that got me thinking about some decorating tips I’ve picked up over the years:

  • Cheap paints can be a false economy. Almost every wall I’ve ever painted with a DIY (B&Q, Homebase, etc.) paint has looked tired after a while, whilst walls painted with branded paints seem to have kept their finish for longer.  I don’t know how much difference it makes but I always buy Dulux Trade paints from my local decorators’ merchant (Brewers) rather than the consumer Dulux paints from a DIY shed.
  • Having said that it’s sometimes worth buying branded paints, there are jobs where it’s just not worth the expense. All of our ceilings are painted with white emulsion (Albany Supercover), and the high-traffic rooms with magnolia walls (like our halls/stairs/landing) also use decorators’ merchant paints. I’ve just accepted that they need painting more often anyway!
  • There’s no such thing as “one coat” paint – it’s pure marketing! Some paints are thicker than others though and you might get away with fewer coats (for example two coats rather than three when trying to cover a bold colour).
  • Kitchen and bathroom paint is not just a way to sell a more expensive product – it really is moisture-resistant (and I really do wish I’d used it in our en-suite…) but there is an alternative. For my current job, in order to have the finish that I want (matt ceiling, soft sheen walls), I used the same paints as normal but added some VC175 mould-killer to the paint before applying it (as recommended by the Manager at my local decorators’ merchant).
  • Brushes and rollers can be wrapped in cling-film overnight; trays of paint can be placed in a bin liner (folded over to keep the air out). This saves a lot of paint wastage between coats, when you need to come back to the job the next day anyway! Of course, brushes, rollers, etc. should always be properly cleaned when the job is finished.
  • Most decorating jobs will need some holes filling, or minor repairs to plasterwork. After years of fighting with (and losing to) consumer-marketed products like Polyfilla (from Polycell), I found a product that’s really easy to work with and does a great job – unfortunately it only comes in large bags! It’s called Gyproc Easi-Fill and it’s made by British Gypsum. (This tip came via a professional plasterer and was recently reiterated by our bathroom fitter.) Even though I’ve only used a tiny amount of our huge bag of Easi-Fill over the years, it doesn’t seem to have “gone off” and is still working well – I’ve also used it as plaster for modelling purposes.
  • Baby wipes are great for cleaning up – like if you didn’t mask a door handle (because you weren’t painting the door) but it got splattered with specks of emulsion from the roller… actually, baby wipes are great for cleaning all sorts of things!

Just bear in mind that I wouldn’t take IT advice from a professional decorator – so those who paint people’s houses for a living might not entirely agree with my decorating advice!


Replacement PSU for an LCD monitor? If only these things were standardised…

Sod’s law says that, a few hours after I handed in all of Fujitsu’s kit in preparation for leaving the company, my own monitor would stop working…

I had spares, but only old 15″ 4:3 flat panels with VGA connections – the broken monitor was the only one I have that will take an HDMI/DVI signal from my Mac Mini or my Raspberry Pi, so VGA was no good to me here.

As it happened, further investigation showed it wasn’t the monitor itself (although it is 9 years old now) but the power “brick”. I’m sure there are websites that specialise in selling universal power supplies for laptops but I haven’t found one yet for LCD monitors (I needed a 60W/12V/5A supply with a 2.5mm centre-positive tip).

Thankfully, my local Maplin store had something that would do the trick – a little expensive at £37.99 but far cheaper than a new monitor…

It does raise the question though – all mobile phones (except Apple iPhones) come with a standard USB charging cable, so why don’t all TVs/monitors/laptops have similarly standardised power supplies?


Working with the Exchange 2013 Server Role Requirements Calculator

For the last couple of years, I’ve had the privilege of leading a team of talented Exchange and Lync subject matter experts (either directly or more recently as a virtual team).  I’ve tried to keep up my technical skills but inevitably I’m not at the level of detail I once was and, as I switch to a more technical role, I’m expecting to have to re-learn a lot.

Thankfully, I won’t be alone – the team I’m joining is also full of talented subject matter experts – but I will need to stand on my own two feet.

That’s why I thought I’d write this post, with a few notes that are based on a recent conversation when my former colleague Mark Bodley checked over the Exchange calculations I’d used to create a guide price for a customer solution… [I’ve since made a couple of extra edits based on advice from Fujitsu’s resident Exchange Master, Nick Parlow]


To get started, some educated guesses can be made to adjust the defaults:

  • Exchange environment configuration:
    • 64-bit GCs, multi-role servers, non-virtualised, HA deployment (in line with our design principles).
    • Set number of servers per DAG and number of DAGs according to size of solution. Start DAG count low to keep hardware down – revise up later if required based on guidance elsewhere in the tool.
  • Site resilience configuration:
    • Single active/active resilient DAG (i.e. no passive servers).
    • Watch the RPO value – it won’t affect the server count but could be a factor in inter-datacentre bandwidth calculations.
  • Mailbox database copy configuration:
    • Increased HA database copies to 4; decreased lagged copies to 0 (lagged copies require more storage as transaction logs are retained for longer).
    • Increased number of HA database copies in secondary datacentre to account for non-lagged copies (in this case I had a 50/50 split between DCs).
  • Lagged database copy configuration: not used.
  • Exchange data configuration: left at defaults.
  • Database configuration: left at defaults.
  • Exchange I/O configuration: left at defaults.
  • Transport configuration:
    • Set Safety Net expiration to 2 days (unless using lagged copies).
  • User mailbox configuration:
    • One tier for each logical group of users.
    • Days in work week set to 5 – unless there are significant groups of people working 7 days a week (affects log replication).
    • Mailbox configuration values will depend on organisation but the advice given for a starting point was to set:
      • Total send/receive capacity per mailbox per day to 100 messages (down from 200).
      • Average message size to 150KB (affects storage).
      • Mailbox size limits are used for capacity growth planning – split between main and personal archive should not affect the calculations.
      • Deleted item retention window may need to be increased to meet requirements (14 days is low if not using backups).
      • Multiplication factors come into play with mobile devices.
      • Desktop search engine value is more significant in Citrix environments than with Outlook cached mode clients.
  • Backup configuration:
    • Assumed software VSS via SCDPM, daily full backups.
    • Calculator will take highest value of backup/truncation failure tolerance and network failure tolerance (so only one of these really matters!).
  • Storage options:
    • JBOD with multiple databases per disk (in line with our design principles).
    • Main thing to watch here is the number of auto reseed volumes per server – align to failure rate of the disks.
  • Disk configuration: will depend on servers in use. For reference, with our Fujitsu PRIMERGY servers, the available options led us to:
    • 900GB system disks (10K RPM SAS 2.5″).
    • 4000GB database plus log disks (7.2K RPM SATA 3.5″).
    • 4000GB restore volume (7.2K RPM SATA 3.5″).
  • Processor configuration:
    • Core/server is of secondary interest but helps the calculator to advise on Global Catalog cores.
    • SPECint2006 value set to the lowest that will allow a suitable server utilisation number. In practice there will be a balance between servers/processor options and price. It’s useful to have a lookup of SPECint2006 values for a range of servers (e.g. Fujitsu PRIMERGY RX300) or you can search the SPECint site and take the value from the result column.
  • Log replication configuration:
    • Will vary according to environment (and international spread of users) but the rule of thumb used, by hour of the day, was as follows (the values should add up to 100%):
      • 1: 0.33%.
      • 2: 0.24%.
      • 3: 0.24%.
      • 4: 0.2%.
      • 5: 0.82% (I’m not sure why we have a night-time peak – automated emails sent overnight? Or this could be spread across others…)
      • 6: 0.31%.
      • 7: 0.34%.
      • 8: 1.46%.
      • 9: 6.46%.
      • 10: 10.08%.
      • 11: 10.55%.
      • 12: 11.06%.
      • 13: 9.48%.
      • 14: 9.61%.
      • 15: 10.52%.
      • 16: 10.41%.
      • 17: 8.31%.
      • 18: 5.07%.
      • 19: 1.94%.
      • 20: 0.95%.
      • 21: 0.84% (I might be tempted to up this slightly for the evening email check in many organisations…).
      • 22: 0.36%.
      • 23: 0.21%.
      • 24: 0.21%.
  • Network configuration: latency will depend on available datacentre connectivity.
  • Environment customisation: only used to generate naming configuration.
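Since it’s easy to mistype one of those hourly values, a quick sanity check (here sketched in Python, using the rule-of-thumb figures from the list above) confirms that they do indeed sum to 100%:

```python
# Hourly log replication percentages (hours 1-24), as entered in the
# calculator's log replication configuration worksheet.
hourly_pct = [
    0.33, 0.24, 0.24, 0.20, 0.82, 0.31, 0.34, 1.46,
    6.46, 10.08, 10.55, 11.06, 9.48, 9.61, 10.52, 10.41,
    8.31, 5.07, 1.94, 0.95, 0.84, 0.36, 0.21, 0.21,
]

# Round to avoid floating-point noise before comparing.
total = round(sum(hourly_pct), 2)
print(f"Total: {total}%")  # Total: 100.0%
assert total == 100.0, "Hourly percentages must sum to 100%"
```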

Check the results

With the main inputs in place, some fine tuning is probably required:

  • Role requirements:
    • Watch out for errors (e.g. over-utilised servers).  This is where the server count may need to be adjusted, or a higher-specification server used (SPECint2006 values). Ideally, servers should be close to 80% utilised [in a failure scenario] but not over. Getting the right CPU is more critical than memory as RAM can normally be added later! [also bear in mind the impact of other software on utilisation – for example file level antivirus, and any message hygiene software running on the box.]
    • Also check the recommended transport database location (we’ll come back to that in a moment).
  • Volume requirements:
    • Watch out for disk sizes/configurations that can’t be met!  If the solution is storage-bound, then it may be necessary to add additional servers (we use direct attached storage as a design principle) or, if available, then larger commodity disks would help. One clue to a potential issue is if the DB and log volume design/server table has DBx as a database copy name, rather than DB1-DB4, etc.
  • Storage design: confirms (or otherwise) that the solution can be deployed in a JBOD configuration (as per our design principles, although RAID is also an option).
    • This worksheet will also give the number of disks required in each server. Note that this is the count for database and log disks, restore volumes and auto reseed volumes – don’t forget the system disks for operating system/application binaries… and then think about the transport database.
    • In my solution, the recommended transport database location (from the role requirements) was the system disk but we were using 900GB 10K RPM SAS 3.5″ disks. With OS (150GB), Anchor LUN (2GB), Exchange binaries (50GB), and consideration for Exchange Logs (maybe 600GB) those disks are already pretty full, so it’s worth considering an extra RAID 1 volume for the transport database (over-ruling the calculator) and possibly hot spares for the RAID volumes too (depending on your attitude to risk).
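As a back-of-the-envelope check (simply restating the figures quoted above, all in GB), those allocations leave very little headroom on a 900GB system disk:

```python
# System disk allocations from the worked example above (GB).
disk_size = 900
allocations = {
    "Operating system": 150,
    "Anchor LUN": 2,
    "Exchange binaries": 50,
    "Exchange logs (allowance)": 600,
}

used = sum(allocations.values())
print(f"Used: {used}GB, free: {disk_size - used}GB")  # Used: 802GB, free: 98GB
```

Less than 100GB free, before the transport database is even considered – hence the suggestion of a separate RAID 1 volume.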

A couple of other points to consider: public folders and unified messaging. I didn’t need public folders but don’t overlook them in your plans if they are in use.  As for unified messaging, it’s no longer a server role so is included in the calculations, up to a point – UM is likely to hit the CPU load, so it might be prudent to factor in some additional headroom with the SPECint2006 of the servers (to keep the utilisation well below the 80% mark).


Of course, these are just my notes on some of the things I checked to fit our reference design and they are not exhaustive. As the saying goes, your mileage may vary. For reference, I was using Exchange 2013 Server Role Requirements Calculator v6.6 but you should always use the latest available version – and there is a very useful readme file which should be referenced when working with the calculator. You should also factor in this guidance on sizing Exchange 2013 deployments and consider the Exchange 2013 performance recommendations too.


[Edited 16 May 2015 to clarify comments re: server utilisation]


Replacing an all-in-one OfficeJet with a colour laser printer and some free software

One downside of moving jobs is that I’ve had to give back all of the kit I was using that belongs to Fujitsu*. The car went back last month at the end of its lease but yesterday I returned a pile of technology to the office, including a mobile phone, laptop, monitor and printer.

Hang on. Printer. I’m not the only user of that particular device…

I never liked it anyway – I’ve had a succession of OfficeJet all-in-one devices since I swapped out my trusty old LaserJet for a company-supplied printer and I’ve found inkjet devices to be expensive in consumables (non-OEM cartridges gunking up; OEM cartridges running out even when they say they have ink in them) – and the HP OfficeJet 4620 that I’ve used for the last couple of years was particularly unreliable from a software perspective too. So I decided to pick up a small-office colour laser printer instead: the Samsung SL-C410W, at just £130.

Of course, some will say that if I think ink cartridges are expensive, I should wait until I have to buy toner and the other items that the new printer will need – but we’re talking in thousands of pages here… for someone who gets through about a box of paper (2500 sheets) every 2 years or so (and half of that has been taken by the kids for drawing)!

Anyway, back to the point. The SL-C410W was available at a great price direct from Samsung (£20 cheaper than John Lewis or PC World – and Staples were way off the mark), with free next-day delivery. Setup was simple, following the supplied instructions to get connected to my Wi-Fi network (although I did install the software on a PC and use the supplied USB cable to make things easy).  There were a couple of points that it might have been useful to know though:

  • Setting a static IP address needed a connection to the printer’s SyncThru web service – either using the supplied software to find the device on the network or using the DHCP logs to work out which IP address it was using and going to http://ipaddress/sws/index.html.
  • Once in SyncThru, login is required to make changes – default username is admin and password is sec00000.

With the password and IP address changed and discovery services configured, our family PC (running Windows 8.1) automatically found and connected to the printer, whilst the Windows 7 PCs only needed me to walk through a wizard (printer and driver location was automatic).

That just left the issue of copying – a feature on the OfficeJet that we do use sometimes. Here, some open source software called iCopy came to the rescue.  It does exactly what it says on the tin – provides a “free photocopier” by linking a scanner and a printer – nothing that can’t be done manually but a single button was helpful for family members who use this feature.

The only slight problem was locating Windows Image Acquisition (WIA) drivers for my elderly CanoScan N650U/N656U, with Canon not offering anything for Windows 7 and the Internet seemingly littered with dead links. Luckily, Tom Heath has posted a link to the drivers and these worked a treat.

Only time will tell whether the SL-C410W was a wise buy or not – but at least my family have a means to print homework, my wife has a printer (and copier) again for her work, and I have something that should be reasonably reliable and hassle-free…


* There are lots of upsides too – including that my new “laptop” will be a Surface Pro 3, and that I’ll be using modern software to help me in my work.


Moving on…

Just under ten years ago, I wrote a blog post to say I was leaving Conchango, and (re-)joining Fujitsu (it was ICL when I left).  Since then, I’ve moved through a succession of roles (technical, IT strategy and governance, management and pre-sales), worked with some extremely talented people and I’ve had some good times (as well as some less good) but one of the highlights has to be when I was given a Fujitsu Distinguished Engineer award last year.

Receiving a Fujitsu Distinguished Engineer award from Michael Keegan (Head of UK and Ireland region) and Jon Wrennall (CTO), in October 2014

Now, that time has come to an end, because today’s my last day at Fujitsu before I take up a new role in just over a week’s time.

For those who didn’t see my tweet last month, I’ll be returning to technical consultancy, joining the unified communications team at risual.

risual is a dedicated, UK-based, globally recognised IT services organisation delivering business-aligned consultancy, solutions and services based solely on the Microsoft platform. Along with several thousand others, I first came across risual when their corporate video was launched at Microsoft Future Decoded last year – and what a refreshing change it made! Digging a little deeper told me they have a great reputation – and that’s capped off by appearing in The Sunday Times’ top 100 best small companies to work for list.

I have to admit I am a little anxious about the move – but really excited too and looking forward to joining the risual “family” and getting stuck in. And, if ever there was proof of what a small industry we work in, I’ve already found that I’m linked to quite a few of my new colleagues through Twitter or this blog!


Public key infrastructure explained

Last week, I was attending a presentation skills course where we had to give an impromptu presentation (well, we had an hour to prepare) on a topic of our choice.  One of my colleagues, Richard Butler, gave his talk on public key infrastructure (PKI) and Richard was the first person who has explained PKI to me in a way that made me go “ah! got it!” because he used a great analogy.

So, I’m going to attempt to repeat it here (with Richard’s permission)… and hopefully I’ll get it right!

Richard’s first point was that PKI is thought of as a security tool, some technology, or something that’s needed to make the network secure. Actually, he suggests, there’s more to it than that…

The first example Richard gives is one of a server certificate (used to ensure that a service can be trusted and that confidentiality is maintained), illustrated by way of border control.

An airline passenger approaches a border (e.g. an immigration desk at the airport):

  1. The border is where the passenger expects it to be.
  2. A border guard wears a uniform, with an insignia (badge).
  3. The passenger recognises the insignia and trusts it as genuine.
  4. The passenger interacts with the border guard to negotiate entry to the country.

A server certificate is similar because it’s presented to prove that the server is what it says it is and can be trusted by users accessing its services. The certificate is issued by a certificate authority, just as the border guard’s badge is issued by a government agency.

In Richard’s second example, a certificate is used to provide confidence that you are who you say you are – supporting integrity and non-repudiation.

  1. As a citizen of a country, I request a passport from my government.
  2. The government validates my request.
  3. If my request is valid, a passport is issued.
  4. When visiting a foreign country, I present my passport at the border.
  5. The government of the foreign country trusts the government that issued the passport to have carried out the necessary background checks that confirm I am who I say I am.
  6. I’m authorised to enter the country.

In this case:

  • The issuing government’s passport authority can be thought of as a certificate authority (CA) or issuing authority (IA) – it’s trusted by other countries to authorise passports.
  • The passport can be thought of as a validated “client” certificate – it is trusted, because the passport authority is trusted (i.e. there is a chain of trust).
  • The government in the foreign country can also be thought of as a certificate authority – it is trusted and authorises the immigration control.
  • As described in the first example, the border guard’s insignia can be thought of as a “server” certificate – it is trusted as the foreign country is trusted to issue certificates.
  • Humans apply logic to the approach and automatically make the appropriate assumptions and associations.

In a public key infrastructure, there’s a hierarchy of certificate authorities:

  • The offline root CA signs requests for subordinate servers and holds the private key for the certificate root.
  • A networked, subordinate CA signs requests for clients, and holds its own private key.
  • A certificate distribution point stores the public keys for the root CA and the subordinate CA (used to validate requests). It also holds information about certificate revocation (to use the passport analogy, this might be where a citizen has been denied the right to travel, for example due to a pending prosecution).

Using this PKI, a number of interactions take place:

  1. A device creates a signing request and sends it to a certificate authority.
  2. The CA receives the signing request, validates the request, and issues a certificate signed with its private key.
  3. The original device receives the signed certificate and stores it for future use as a client/server certificate.
  4. When a connection to a service is attempted, the connecting device receives a copy of the certificate and validates the name and the signing CA using the CA’s public key. This validates the certificate chain and proves the certificate to be valid.
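That chain-walking step can be sketched as a toy model in a few lines of Python. To be clear, this is an analogy in the spirit of Richard’s passport example – there’s no real cryptography here, and the CA and server names are made up for illustration:

```python
# Toy illustration of certificate chain validation -- analogy only,
# no real signatures or key maths.
from dataclasses import dataclass

@dataclass(frozen=True)
class Certificate:
    subject: str
    issuer: str  # name of the CA that signed this certificate

# The relying party's trust store holds only the root CA (like a
# government whose passports other governments recognise).
TRUSTED_ROOTS = {"Root CA"}

# Who signed whom: the offline root signs the subordinate CA, which
# signs end-entity (client/server) certificates. Names are hypothetical.
ISSUED_BY = {
    "Subordinate CA": "Root CA",
    "mail.example.com": "Subordinate CA",
}

def validate_chain(cert: Certificate) -> bool:
    """Walk the chain of issuers until a trusted root is reached."""
    issuer = cert.issuer
    seen = set()
    while issuer not in TRUSTED_ROOTS:
        if issuer in seen or issuer not in ISSUED_BY:
            return False  # broken, unknown or circular chain
        seen.add(issuer)
        issuer = ISSUED_BY[issuer]
    return True

server_cert = Certificate("mail.example.com", "Subordinate CA")
print(validate_chain(server_cert))  # True: chain ends at the trusted root

rogue_cert = Certificate("mail.example.com", "Unknown CA")
print(validate_chain(rogue_cert))   # False: issuer is not trusted
```

Real implementations also check signatures, validity dates and revocation status, of course – which is where the certificate distribution point described above comes in.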

At the outset of this description, Richard explained that there is more to PKI than just a security tool, or some technology services.  There’s actually a hierarchy of deployment considerations:

  • Private key protection. Private keys are critical to the ability to sign certificates and therefore crucial to the integrity of the chain of trust.
    • A chain is only as strong as its weakest link.
  • Management procedures:
    • Validation of requests (stopping fraudulent certificates from being issued).
    • Management of certificates (issuing, revocation, etc.)
  • Deployment procedures:
    • Deploying and managing the PKI infrastructure itself.
  • Technology choices:
    • Whose PKI infrastructure will be used?

Drawn as a hierarchy (similar to Maslow’s hierarchy of needs), technology choices are at the top and are actually the least significant consideration. Whilst having a secure technical solution is important, having the procedures to manage it is more so.

Richard wrapped up his presentation by summarising that:

  • PKI is 10% technology and 90% process.
  • Deployment is 10% of the solution and management is 90%.
  • PKI needs management from day one.

If you do still want to know more about the technology (including seeing some diagrams that might have helped to illustrate this post if I’d had the time), there’s a Microsoft blog post series on designing and implementing PKI, written by the Active Directory Domain Services team. Other PKI solutions exist but, as many organisations have an Active Directory, looking at the Microsoft implementation is as good a place as any to start to understand the various technologies that are involved.


Short takes: Lync/Skype and browsers; BitLocker without TPM; OS X Finder preferences; and MyFitnessPal streaks

A few more short mini-posts from the items that have been cluttering my browser tabs this week…

Lync/Skype for Business meetings start in the Web App

A few days ago, a colleague highlighted to me that, whenever she joined a Lync meeting from our company, it opened in Lync Web App, rather than using the full client. Yesterday I noticed the same – I tried to join a call hosted by Microsoft and the Skype for Business Web App launched, rather than the Lync client installed on my PC. It turns out that this behaviour is driven by the default browser: mine is Chrome and my colleague was also using something that’s not IE. Quite why I’d not seen this before, I don’t know (unless it’s related to a recent update) but for internal Lync meetings I do tend to use the Join Online button in the meeting reminder – that doesn’t seem to appear for external meetings. Of course, you can also control which client is used by editing the meeting URL.

Using BitLocker on drives without TPM

When my wife asked me to encrypt the hard drive on her PC, I was pleased to be able to say “no need to buy anything, we can use BitLocker – it’s built into Windows”. Unfortunately, when I tried to enable it, I found that her PC doesn’t have a trusted platform module (TPM) chip. I was pretty sure I’d worked around that in the past, with a netbook that I used to run Windows 7 on and, sure enough, found a How To Geek article on How To Use BitLocker on Drives without TPM. It’s been a while since I had to dive into the Local Computer Policy but a simple tweak to the “Require additional authentication at startup” item under Computer Configuration\Administrative Templates\Windows Components\BitLocker Drive Encryption\Operating System Drives was all it took to let Windows encrypt the drive.

Finding my files in Finder

One of the challenges I have with the Mac I bought a few months ago is that modern versions of OS X seem to want to hide things from me. I’m a “browse the hard drive to find my files” kind of guy, and it took a tweak to the Finder preferences to show my Hard Disk and bring back the shortcut to Pictures.

MyFitnessPal streak ends – counter reset

Last weekend, some connectivity issues, combined with staying away with friends, meant I missed the cut-off for logging my food/exercise with MyFitnessPal and my “streak” (i.e. the login counter) was reset. Knowing that I’ve been logging activity for a certain number of days is a surprisingly motivational piece of information but it turns out you can get it reset using the counter reset tool (which even predicted how many days the value should be – 81 in my case).


Office Remote for Windows Phone

Over the next couple of days, I’ll be attending a “presentation masterclass”.  My last formal training in this area was twenty years ago, as a graduate trainee at ICL, so I’m hoping things will have moved on considerably since then in terms of the techniques and advice on offer!

Anyway, attending the course reminded me to blog about something I was introduced to last year by my colleague, Warren Jenkins.  Those of us with Windows or Android phones can use the Office Remote app to control PowerPoint – no need for a “clicker” – just a phone (running Windows Phone 8.x with the Office Remote app – or  Android 4.0.3 or later with the Office Remote for Android app) and a Bluetooth connection to a Windows PC (Windows 7 or 8.x), with the Office Remote PC plug-in for Office 2013.

Once Office Remote PC is installed, and the PC is connected to the phone, open the Office file that you would like to present and, on the Office Remote tab, select Office Remote > Turn On.

Then, go to the phone and make sure it’s running the Office Remote app and, if all is working well, you’ll see a list of open Office files and you can pick the one to present.  For example, in PowerPoint you can see speaker notes and control the presentation, with options to view in slide sorter mode or to use a virtual laser pointer to highlight points on the slide. You can also control other Office applications (e.g. interacting with data and switching between worksheets in Excel, or jumping around between headings or up/down a document in Word), but I’ve only used it in anger with PowerPoint.

More details are available on the online help page.


Adding a pause when dialling a number from a softphone or mobile phone

Back in the days when Nokia phones had monochrome screens and batteries lasted for days, a colleague explained to me how to include a p in a number to make the phone pause before dialling the next few digits – for example when entering a PIN for voicemail (I think w also worked for a wait). More recently, another colleague was asking me how to do this with our CUCILync softphones when dialling into a conference call (as described for Microsoft Lync).

Well, it seems the modern equivalent of a p for a soft pause is inserting a comma (at least it is on a Lumia, on an iPhone and on Android) and a semi-colon is a hard pause/wait (on an iPhone). Unfortunately the CUCILync client we use strips out , and ; (and, even worse, it replaces p with 7). I guess it could be an error in the dial-plan but it’s inconvenient…


Amazon AWS Summit highlights (#AWSSummit)

I spent a chunk of time at Amazon’s AWS Summit in London earlier this week. It was interesting to be back at the ExCeL exhibition and convention centre as the last time I was there was another big vendor event – Microsoft Future Decoded – and on the topic of organisation, let’s just say that whilst Amazon had the registration sorted (Microsoft had some issues with that last year), Amazon’s schedule meant we couldn’t get to the technical tracks because of the queues for the escalator (there were no stairs that I could find!).  It seems that ExCeL’s ICC has some logistical challenges when dealing with a few thousand people moving from one level to another!

I was interested to see which partners were exhibiting in the Expo though.

There seem to be plenty of opportunities around cloud services but one set of partners who were noticeably absent was the big systems integrators. As we moved to the keynote, I couldn’t tweet the highlights as connectivity became an issue (standard conference issues around Wi-Fi and mobile phone access – plenty of signal but too many people sharing it) but I settled into Dr Werner Vogels’ (@Werner) presentation and found it interesting on a number of levels:

  • Whilst there were announcements about Amazon releases, many of the points made were equally applicable to other clouds (e.g. I could apply the same issues and learning to Microsoft Azure):
    • Start-ups have no legacy/dependencies, a low cost structure and can move quickly to disrupt long-standing industries.
    • Not that long ago millions of dollars were required to start an Internet business. It was “indirect funding of the American server industry” – but not any more: cloud infrastructure and platform services have a low barrier to entry and no up-front costs or vendor tie-in.
    • The cloud’s not just for start-ups: enterprises can benefit too (although I’d argue that variable costs are a challenge for some finance departments and new features arriving daily are not always a good thing for end users).
    • Automation is key [in IaaS/PaaS]: whether it’s for testing, building, deployment or infrastructure creation.
    • Real-world workloads come in all shapes and sizes – there’s no need to standardise on the lowest common denominator when you can change the services you use to fit. With cloud you can make mistakes (fail fast and move on) that would be expensive using physical servers or even a virtual infrastructure platform.
    • AWS is a platform and an ecosystem – some organisations are creating their own platforms on top of AWS (e.g. Omnifone’s B2B media platform).
    • Invention is continuous, with new services, and a movement towards micro-services based on smaller blocks (Dr Vogels used a Tetris analogy), containerisation, event-driven computing, etc.
    • Security is a shared responsibility – Amazon provide the tools (and a workbook for compliance with local laws) and customers need to use them correctly.
    • There may be compliance benefits from the cloud, for example: if you store customer data then you become a data controller – and if you process it on AWS, Amazon becomes a data processor; by signing the AWS data processing agreement organisations comply with EU data processing requirements known as article 29, providing assurances that are not available with on-premises services.
  • The stats involved (and Amazon’s growth) are enormous:
    • 102% year on year increase in data in/out of S3.
    • 93% year on year increase in usage of EC2.
    • Over 1 million active customers (every month – excluding amazon.com) – from start-ups to enterprise, in many vertical markets.
    • Five times the compute capacity of the other 14 providers in the Gartner Magic Quadrant for cloud infrastructure as a service combined. 
    • AWS’ pace of innovation is such that 516 major new features and services were launched in 2014 – almost double the previous year (and that’s a pattern with: 24 in 2008; 48 in 2009; 61 in 2010; 82 in 2011; 159 in 2012; 280 in 2013; 516 in 2014) – it’s actually really hard to keep up (and I’d say the same for Azure too!).
  • Whilst Amazon is leading this sector, they did not come across as arrogant – indeed Dr Vogels highlighted that the Amazon motto of being “the earth’s most customer-centric company” applies to AWS too. Customers are in charge, not providers and, if Amazon’s not providing what they need, customers will walk away – perhaps not literally (there’s no lock-in but moving workloads is non-trivial) but they will use another cloud for their next project/programme.
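The “automation is key” point above – particularly for infrastructure creation – is usually realised as infrastructure-as-code, for example a CloudFormation template describing the resources to build. As a minimal illustrative sketch (the resource name, AMI ID and instance type below are placeholders, not anything Amazon announced), a template for a single EC2 instance could be generated like this:

```python
import json


def ec2_template(instance_type="t2.micro", image_id="ami-12345678"):
    """Build a minimal CloudFormation template describing one EC2 instance.

    The AMI ID is a placeholder -- real templates reference a
    region-specific image.
    """
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Sketch: one EC2 instance, created and destroyed as code",
        "Resources": {
            # "WebServer" is an arbitrary logical resource name
            "WebServer": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "InstanceType": instance_type,
                    "ImageId": image_id,
                },
            }
        },
    }


if __name__ == "__main__":
    # The same template can be version-controlled and handed to
    # CloudFormation, so environments are built, tested and torn down
    # repeatably rather than by hand.
    print(json.dumps(ec2_template(), indent=2))
```

Because the whole environment is just a document, the “fail fast” point follows naturally: a mistake is fixed by editing the template and rebuilding, rather than by reworking physical kit.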

One message that I found difficult to swallow was that “hybrid IT is part of the journey, not the destination”. Maybe that’s true in the long term but, in my world of Microsoft cloud services, I see real use cases for hybrid clouds (I’ll be writing more on that soon).

Maybe it’s because I’m coming from a SaaS angle though, rather than IaaS and PaaS – I did attend a hybrid cloud deep dive session at the AWS Summit but missed the first 10 minutes as I was delayed by the need for micro-escalator-services or the “escalator of things” (credit to Vitor Domingos @VD for that thought).

I can see why, when running your own applications, there is little need for long term usage of on-premises infrastructure or so-called “private cloud” platforms once you’ve taken advantage of the efficiencies that cloud IaaS/PaaS provides. So, creating a hybrid cloud may be a bridge to the future state, rather than a two-way street; but maybe there are limitations on the data that you want to store in the cloud? Perhaps relying on the cloud for business continuity and disaster recovery (BC/DR) is still too risky (although maybe you could consider multiple clouds)? Or connectivity challenges might exist that rule out exclusive cloud usage? Or perhaps I’m just a Luddite!

One thing’s for sure – applications shouldn’t be moved to the cloud “as is” – cloud migration is an opportunity to rethink how things work – and Amazon has an AWS Cloud Adoption Framework that might help with that.

Unfortunately, I couldn’t stay for the whole day – there were many more technical tracks I wanted to attend (and huge queues to get into the sessions) but I had an appointment elsewhere. The AWS Summit was certainly a worthwhile investment of my time though (even as a Microsoft consultant) – I’ll be watching out for it again next year.

In conclusion, I’ll make just the one observation: systems integrators need to find new opportunities to deliver value in an increasingly agile and commoditised world – for example as cloud integrators – and they need to move quickly. Incidentally, this is not news – but for a while now we’ve been able to say “cloud is not for everyone”. Increasingly though, those barriers to cloud adoption are being broken down.

Further reading
