Bulk renaming digital photos for easier identification

This content is 7 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Managing digital content can be a pain sometimes. Managing my own photos is bad enough (and I have applications like Adobe Lightroom to help with my Digital Asset Management) but when other family members want help whilst sorting through thousands of photos from multiple cameras to make a calendar or a yearbook it can get very messy.

For a long time now, I’ve used a Mac app called Renamer to bulk rename files – for example, batches of photos after importing them from my camera.  The exception to this is my iPhone pictures, which Dropbox rather usefully renames for me on import using the date and time, suffixing as necessary to deal with HDRs, etc. I only have a couple of gigabytes on Dropbox, so I move the renamed files to OneDrive (where I have over a terabyte of space…). The problem with this is that there needs to be enough space on Dropbox for the initial import – which means I have to use a particular PC which recognises my iPhone and remembers which photos have previously been imported. If I use another PC it will try and re-import them all (and fail due to a lack of available space)…

My wife has a different system. She also uses OneDrive for storage but has some files that have been renamed by Dropbox (to yyyy-mm-dd hh.mm.ss.jpg), some that have been renamed on import by something else (to yyyymmdd_hhmmsssss_iOS.jpg) and some that are just copied directly from the iPhone storage as IMGxxxx.jpg. My task? To sort this lot out!

Multiple images with the same time stamp

We decided that we liked the Dropbox name format. So that became the target. I used Renamer to rename files to Year-Month-Day Hour.Minutes.Seconds.jpg (based on EXIF file data) but the presence of HDR images etc. meant there were duplicates with the same time where a whole second wasn’t fine-grained enough. We needed those fractions of a second (or a system to handle duplicates) and Renamer wasn’t cutting it.

The fallback was to use the original filename as a tie-break. It’s not pretty, but it works – Year-Month-Day Hour.Minutes.Seconds (Filename).jpg gave my wife the date/time-based filename that she needed and the presence of the original filename was a minor annoyance. I saved that as a preset in Renamer so that when I need to do this again in a few months, I can!

Renaming digital photos in Renamer using a preset

No EXIF data

Then the files with no EXIF data (.MOVs and .PNGs) were renamed using a similar preset, this time using the modification date (probably less reliable than EXIF data but good enough if the files haven’t been edited).

Thousandths of seconds

Finally, the files with the odd format. Mostly these were dealt with in the same way as the IMGxxxx.jpg files, but there were still some potential duplicates with the same EXIF timestamp. For these, I used progressive find and replace actions in Renamer to strip away all but the time: a RegEx replacing ...... with nothing allowed me to remove all but the last three characters (I originally tried .{3}$ but that removed the three characters from the tail end that I actually wanted – the ones representing thousandths of seconds). One final rename using the EXIF data to Year-Month-Day Hour.Minutes.Seconds.Filename.jpg gave me yyyy-mm-dd hh.mm.sssss.jpg – close enough to the desired outcome, and there were no more duplicates.
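Incidentally, the progressive find-and-replace actions could be collapsed into a single regular expression with a capture group. A minimal sketch in Python, assuming the yyyymmdd_hhmmsssss_iOS.jpg format described above (the function name is my own):

```python
import re

def fractional_part(filename):
    """Pull the sub-second digits out of a yyyymmdd_hhmmsssss_iOS.jpg
    style name - i.e. the last three digits of the timestamp portion."""
    match = re.match(r"\d{8}_\d{6}(\d{3})_iOS\.jpg$", filename, re.IGNORECASE)
    return match.group(1) if match else None

print(fractional_part("20161220_135542123_iOS.jpg"))  # → 123
```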

What’s the point? There must be a better way!

Now, after reading this, you’re probably asking “Why?” and that’s a good question. After all, Windows Explorer has the capability to provide image previews, the ability to sort by date, etc. but it’s not up to me to question why, I just need an answer to the end-user’s question!

Using Renamer relies on my Mac – though there are options for Windows too, like NameExif and Stamp. I haven’t used these, but it appears they will have the same issues as Renamer when it comes to duplicate timestamps. There’s also a batch file option that handles duplicate timestamps, but it doesn’t use the EXIF data.

Meanwhile, if anyone has a script that matches the Dropbox file rename functionality (including handling HDRs etc. which have identical timestamps), I’d be pleased to hear from you!
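In the absence of such a script, the duplicate-handling part at least is easy to sketch. A hedged example in Python – reading EXIF needs a third-party tool such as ExifTool, so this only covers the naming logic, and the -1/-2 suffix scheme is my assumption about how Dropbox disambiguates:

```python
from datetime import datetime

def dropbox_style_name(taken, used, ext=".jpg"):
    """Build a 'yyyy-mm-dd hh.mm.ss' filename from a capture time,
    appending -1, -2, ... when that timestamp is already in use
    (e.g. the HDR pairs mentioned above). 'used' is the set of
    names already assigned; the suffix format is an assumption."""
    base = taken.strftime("%Y-%m-%d %H.%M.%S")
    candidate = base + ext
    counter = 1
    while candidate in used:
        candidate = f"{base}-{counter}{ext}"
        counter += 1
    used.add(candidate)
    return candidate

used = set()
shot = datetime(2016, 12, 20, 10, 15, 30)
print(dropbox_style_name(shot, used))  # → 2016-12-20 10.15.30.jpg
print(dropbox_style_name(shot, used))  # → 2016-12-20 10.15.30-1.jpg
```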

[Update 1 January 2017: These Python scripts look like they would fit the bill (thanks Tim Biller/@timbo_baggins) and James O’Neill/@jamesoneill reminded me of ExifTool, which I wrote about a few years ago]

Recovering data when the Zwift iOS app crashes whilst saving an activity


One of the most popular posts on the blog at the moment is about recovering data when a Garmin cycle computer crashes whilst saving an activity. It’s great to know that my experience has helped others to recover their ride data and I’m hoping that this post will continue in the same vein… but this time it’s about Zwift.

You see, earlier this week, I decided to try out the new Zwift app for iOS. It’s much easier to use my iPhone than to take a PC out to the garage and use a mobile app as a bridge between the turbo trainer and the Wi-Fi network. Instead, it’s all taken care of in the app.

Unfortunately, after an hour on the trainer, I went to end my ride and Zwift told me it couldn’t log me in (and refused to let me in on the iPhone until I forcibly closed the app). Logging in on another device told me that partial ride data had been captured for the first 10 minutes, but that was it. I wasn’t happy and my usual petulant self resorted to a whinge on Twitter – and I was really surprised to get a reply from the team at Zwift:

A few minutes later I logged a support call and was directed to some advice that helped me recover the .FIT file created on my device by Zwift:

If you’re riding on iOS, you can reach your .fit file through iTunes.
1. Plug your device into your computer and open up iTunes.
2. Click on your device in iTunes, then click “Apps” and scroll down to the “File Sharing” section.
3. You should see Zwift listed, and it should have a “Zwift” folder. Click that, and then click “Save To” and save it to a location of your choice.
4. Find the saved Zwift folder, and copy the fit file out of the Zwift/Activities folder.

After this, I could upload the .FIT file to Strava (though not to Zwift itself… apparently this is “a highly requested feature” and “as such, [Zwift are] exploring adding it in the future”):

My first attempt at home automation: Alexa-controlled Christmas tree lights!


I’ve wanted to play with some home automation tech for a while now but the absence of clear standards and the relatively high cost of entry (with many of the popular systems requiring a hub) put me off. We also have the relatively unusual situation that there is nearly always someone in our house, so even the idea of a smart thermostat to turn down the heating when we’re out (e.g. Nest, Hive, etc.) is not a good fit for our lifestyle.

Then, a few weeks ago, I bought myself an Amazon Echo Dot. My business case was weak. After all, voice controlling my Spotify playlists is a bit of a novelty (and not all of the “skills” that are available in the US are available over here in the UK) but I could see there was potential, especially with IFTTT support coming to the UK (which it finally did, last week).

Over the weekend, I decided to take things a little further and to see if I could control some lights. I wasn’t getting far with the fixed infrastructure in the house (due to challenges with a lack of a hub and B22 vs. E27 light fittings meaning I’ll need to convert some of the light pendants too…) but my colleague Richard Knight (@rmknight) suggested a TP-Link Smart Plug and I picked up a TP-Link HS110 on Amazon. Then it was a case of deciding which lights to control – and I went for the Christmas Tree lights…

After a fairly simple setup process (using TP-Link’s Kasa mobile app) I had the switch working from my phone but then I took things a step further… voice control with Alexa. Kasa and Alexa are integrated so after enabling the skill in Alexa and linking to Kasa, I was able to control the lights with my voice:

Everything I’ve described so far was just a few minutes’ effort – remarkably straightforward out-of-the-box stuff. Being a geek, I started to think about what else I could use to control the Smart Plug, and I found an excellent post from George Georgovassilis (@ggeorgovassilis) on controlling a TP-Link HS100 Wi-Fi Smart Plug. In the post, George sniffs the network to work out exactly how the smart plug is controlled and provides a shell script; others have since chipped in with C# and Go options. I had to find which IP address my Smart Plug was using (looking at the DHCP server leases told me that, then a MAC address lookup confirmed that the unknown device starting 50:c7:bf was indeed a TP-Link one). Using the compiled C# code I successfully controlled my HS110, switching the lights on/off from the command line on a PC.
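George’s write-up shows that the plug accepts JSON commands over TCP port 9999, obfuscated with a simple XOR “autokey” cipher whose key starts at 171. A minimal Python sketch of just that scrambling, based on my reading of the post (the set_relay_state command shown is the on/off switch):

```python
def encrypt(command):
    """TP-Link smart plug 'autokey' XOR scrambling: each plaintext
    byte is XORed with the previous ciphertext byte (key starts at 171)."""
    key = 171
    out = bytearray()
    for byte in command.encode():
        key ^= byte          # the new key is this ciphertext byte
        out.append(key)
    return bytes(out)

def decrypt(payload):
    """Reverse of encrypt(): XOR each byte with the previous ciphertext byte."""
    key = 171
    out = bytearray()
    for byte in payload:
        out.append(key ^ byte)
        key = byte           # the next key is the ciphertext byte just read
    return bytes(out)

cmd = '{"system":{"set_relay_state":{"state":1}}}'  # switch the plug on
assert decrypt(encrypt(cmd)).decode() == cmd        # round-trip check
```

As I understand it from the same write-up, the TCP payload is also prefixed with its length as a four-byte big-endian integer before sending.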

So, last year saw a miniature Raspberry Pi-powered Christmas Tree (featuring my terrible soldering) – this year I’m controlling the full tree lights using mainstream components – I wonder what Christmas 2017 will bring! And my second Echo Dot is arriving today…

Typography and information (Matthew Standage at #MKGN)


I’ve often written about Milton Keynes Geek Night on this blog – there was even a time when I used to write up the whole night’s talks when I got home. Now I need my zeds too much and I never seem to get around to the writing part! Even so, I saw Matthew Standage (@mstandage) give a great talk at last week’s MKGN and I thought I really should share some of what he talked about here.

Matthew spoke about the importance of typography to user experience and his key point was that, far from being just the choice of typeface (font), typography is the primary medium by which we communicate information to our users on the web.

95% of the web is typography (or more accurately written language) and it’s the way we interpret, divide and organise information.

When designing a website, hierarchy is not everything. Instead, consider what’s the most important information to the reader. It’s not always the page title.

“Really?”, you might ask – so consider the UK Bank Holidays page at gov.uk. Here the Level 1 heading of “UK bank holidays” is less important than the big green box that tells me when I next get a statutory day off work:

Gov.UK website UK Bank Holidays page

Next, Matthew explained, we need to think about proximity – which objects are placed together, how groups work, the use of white space. For this, read Mark Boulton’s Whitespace article on the A List Apart site (the article has been around for a while but is still valid today). Whitespace can help to identify different types of information: headings; links; authors; image captions; etc.

In general, users won’t read text word by word – but the typography helps readers to scan the page. Jakob Nielsen describes this in his article about the F-shaped pattern for reading web content. Though, if that’s true, you won’t have read this far anyway, as you’ll have pretty much stopped after the second paragraph…

Matthew’s slides from his talk are on speakerdeck and I’m sure the audio will appear on the MKGN SoundCloud feed in due course:

IT transformation: why timing is crucial


In my work, I regularly find myself discussing transformation with customers who are thinking of moving some or all of their IT services to “the cloud”.  Previously, I’ve talked about a project where a phased approach was taken because of a hard deadline that was driving the whole programme:

  1. Lift and shift to infrastructure-as-a-service (IaaS) and software-as-a-service (SaaS).
  2. Look for service enhancements (transform) – for example re-architect using platform-as-a-service (PaaS).
  3. Iterate/align with sector-wide strategy for the vertical market.

The trouble with this approach is that, once phase 1 is over, the impetus to execute on later phases is less apparent. Organisations change, people move on, priorities shift. And that’s one reason why I now firmly believe that transformation has to happen throughout the project, in parallel with any migration to the cloud – not at the end.

My colleague Colin Hughes (@colinp_hughes) represented this in diagrammatical form in a recent presentation (unfortunately I can’t reproduce it on my personal blog) but it was interesting to listen to episode 6 of Matt Ballantine and Chris Weston’s WB-40 podcast when they were discussing a very similar topic.

In the podcast, Matt and Chris reinforced my view that just moving to the cloud is unlikely to save costs (independently of course – they’re probably not at all bothered about whether I agree or not!). Even if on the surface it appears that there are some savings, the costs may just have been moved elsewhere. Of course, there may be other advantages – like a better service, improved resilience, or other benefits (like reduced technical debt) – but just moving to IaaS is unlikely to be significantly less expensive.

Sure, we can move commodity services (email, etc.) to services like Office 365 but there’s limited advantage to be gained from just moving file servers, web servers, application servers, database servers, etc. from one datacentre to another (virtual) datacentre!

Instead, take the time to think about what applications need; how they could work differently; what the impact of using platform services would be; whether to make use of a microservices-based approach*; whether you could even go further and re-architect to use so-called “serverless” computing* (e.g. Azure Functions or AWS Lambda).

But perhaps the most important point: digital transformation is not just about the IT – we need to re-design the business processes too if we’re really going to make a difference!


* I plan to explore these concepts in more detail in future blog posts.

Preparation notes for ITIL Foundation exam: Part 2 (service strategy)


A few days ago I published the first in a series of preparation notes as I study for my IT Infrastructure Library (ITIL®) Foundation certification. Part 1 was an overview/introduction. This post continues by looking at the topic of the first stage in the ITIL service lifecycle: service strategy.

Service Strategy

“The purpose of the service strategy stage is to define the perspective, position, plans and patterns that a service provider needs to be able to execute in order to meet an organisation’s business outcomes”

“Strategy is a complex set of planning activities in which an organisation seeks to move from one situation to another in response to a number of internal and external variables.”

Henry Mintzberg defined 5 Ps of strategy. ITIL uses 4:

  • Perspective – view of selves, vision and direction.
  • Position – where we are in relation to the rest of the market.
  • Plans – details for supporting and enhancing perspective and position.
  • Patterns – define the conditions and actions that need to be in place, and need to be repeatable to meet the objectives of the organization.
  • Ploy is the 5th P, not used in ITIL.

Customer perception and preferences drive expectations. Quality of product, ambience, etc. affect service value.

Utility + Warranty = Value

  • Utility is about “fit for purpose” – support performance, remove constraints.
  • Warranty is about “fit for use” – availability, continuity, capacity, security.

ITIL defines the following Service Strategy processes, which are expanded upon in the rest of this post:

    • Service portfolio management.
    • Business relationship management.
    • Financial management.

There are two more processes that are not discussed at ITIL foundation level:

    • Demand management.
    • IT strategy.

Service Packages, Service Level Packages and Service Assets

Service packages are the things that we offer to our customers:

  • The Core Service Package provides what the customer desires – the basic outcome.
  • Enabling services allow the core service to be provided.
  • Enhancing services are the extras that aren’t necessary to deliver the core service but make the core service more exciting or enticing.

Sometimes customers want more warranty or more utility – Service Level Packages provide choice:

  • Warranty:
    • Availability levels.
    • Continuity levels.
    • Capacity levels.
    • Security levels.
  • Utility:
    • Features of the service.
    • Support of the service.

Levels will relate to price.

Service Assets:

“A resource or capability used in the provision of services in order to create value in the forms of goods and services for the customer.”

  • Resources are tangible (e.g. inputs to a process) – boxes, method of taking payment, etc.
  • Capabilities are intangible – people with knowledge, behaviours, processes, etc.

Service Portfolio Management

Service Portfolio Management is about managing the service portfolio – not the services, processes, assets, etc.:

“To ensure that the service provider has the right mix of services to balance the investment in IT with the ability to meet business outcomes.”

Service portfolio is a database of services managed by a service provider – the complete set of services, including:

  • Service Pipeline – IT services of the future; being developed, not available to customers (yet).
  • Service Catalogue – database or document of current IT services – customer-facing or support services (internal IT).
  • Retired services – all IT services that are no longer useful for customers or the internal organisation.

When looking at the service portfolio, we may need to consider service investments:

  • Transform the business (TTB): new market, ventures, high risk.
  • Grow the business (GTB): gain more customers, increase value, capture more market space (moderate risk).
  • Run the business (RTB): status quo, focus on core services (low risk).

Service portfolio management is about establishing or including services – 4 phases:

  • Define – document and understand existing and new services to make sure each has a documented business case.
  • Analyse – indicate if the service will optimise value and to understand supply and demand.
  • Approve – make sure that we have enough money to offer this service (retain, rebuild, refactor, renew or retire).
  • Charter – authorise the work to begin on a project.

Financial Management

Financial management balances the requirements of IT and the business: the cost of a service against its quality, looking for the highest quality at the lowest cost.

  • Budgeting – about the future (for things in pipeline, and for current services).
  • IT Accounting – make sure we know where we are spending our money. Group costs to see which part of service provision is costing most, and rebalance/adjust as necessary: “the process responsible for identifying the actual cost of delivering the IT services compared with the costs in managing variance”. EV – AC = CV (earned value – actual cost = cost variance).
  • Chargeback – charge customers for use of IT services. Notional accounting where no money changes hands (e.g. internal showback).

Service valuation is about charging an appropriate price for a service based upon costs.

Business case to justify costs, benefits, risks, etc. with methods, assumptions, business impact, etc. (or a business impact analysis – BIA – i.e. what is the implication if we don’t offer the service).

Business Relationship Management

“To establish and maintain a business relationship between the service provider and the customer based on understanding the customer and their business needs.”

“To identify customer needs and ensure that the service provider is able to meet these needs as business needs change over time and between circumstances. BRM ensures that the service provider understands the changing needs. BRM also assists the business in articulating the value of a service. Put another way, BRM ensures that customer expectations do not exceed what they are willing to pay for and that the service provider is able to meet the customer’s expectations before agreeing to deliver the service.”

Service Level Management deals with service level agreements (SLAs), whereas BRM is concerned with understanding business needs – the goal of both is customer satisfaction, but it is measured in different ways: for BRM, it’s measured in the improvement or recommendation of services. BRM is about a positive relationship and defines the service catalogue.

Different types of relationship:

  1. Internal service provider.
  2. Shared service provider – to more than one business unit (still internal).
  3. External service provider – to external customers. The Business Relationship Manager talks to clients.

Business Relationship Manager will use:

  • Customer portfolio – database of information about customers and potential customers who will consume a service (commitments, investments, etc.).
  • Customer agreement portfolio – database of current contracts with clients (services being delivered, with SLAs).

Activities include:

  • Marketing, selling and delivery.
  • Working with Service Portfolio Management to properly respond to customer requirements.


The next post in this series will follow soon, looking at service design.

These notes were written and published prior to sitting the exam (so this post doesn’t breach any NDA). They are intended as an aid and no guarantee is given or implied as to their suitability for others hoping to pass the exam.

ITIL® is a registered trademark of AXELOS Limited.

Why are the Microsoft Azure datacentre regions in Europe named as they are?


I’m often asked why Microsoft’s datacentre regions for Azure in Europe are named as they are:

  • The Netherlands is “West Europe”
  • Ireland is “North Europe”
  • (then there are country-specific regions in the UK and Germany too…)

But Ireland and the Netherlands are (sort of) on the same latitude. So how does that work?

MVP Martina Grom explains it like this:

I suspect the backstory behind the naming is more down to the way that the United Nations divides Europe up for statistical purposes [source: Wikipedia article on Northern Europe]:

Europe subregion map UN geoscheme

On this map, North Europe is the dark blue area (including the UK and Ireland), whilst West Europe is the cyan area from France across to Austria and Germany (including the Benelux countries).

It makes more sense when you see it like this!

Image credit: Kolja21 via Wikipedia (used under a Creative Commons Attribution 3.0 licence).

A diagrammatical approach to modelling technology strategy


Last week, I wrote about the architectural work my team is doing to standardise the way we deploy technology. But not everything I do for my customers is a “standard” solution. I also get involved in strategic consulting, stepping away from the solution and looking at the business challenges an organisation is facing, then looking at how Microsoft technologies can be applied to address those issues. Sometimes that’s in the form of one of our core transformational propositions and sometimes I’m asked to deliver something more ad-hoc.

One such occasion was a few weeks ago when I worked with a customer to augment their IT team, which doesn’t include an architecture capability. They are preparing for an IT transformation programme, triggered by an office move but also by a need for flexibility in a rapidly growing business. As they’ve matured their IT Service Management approach, they’ve begun to think about technology strategy too, and they wanted some help to create a document, based on a template that had been provided by another consultant.

I like this kind of work – and I’m pretty pleased with the outcome too. What we came up with was a pictorial representation of the IT landscape with a current state (“as-is”) view on the left, a future state (“to be”) view on the right, and a transformational view of priorities and dependencies (arranged into “swim-lanes”) in the centre.

I’m sure many organisations have similar approaches, but this really was a case of a picture being worth a thousand words. In this case, the period covered was short – just two years – but I also suggested we should add another view on the right, showing the target state further out, to give a view of the likely medium-term position (e.g. 5 years).

Each representation of the IT landscape has a number of domains/services (which will eventually relate to service catalogue items, once defined), and within each service are the main components, colour coded as follows:

  • Red (Retire): components that exist but which should be retired (for example the technologies used have reached the end of their lifecycle). These must not be used in new solutions.
  • Amber (Tolerate): components that exist but for which the supporting technologies are reaching the end of their lifecycle or are not strategic. These may only be used in new solutions with approval from senior IT management (i.e. the Head of IT – or the Chief Technology Officer, if there is one).
  • Green (Mainstream): These are the core building blocks for new solutions that are put in place.
  • Blue (Emerging): These are the components and technologies that are being considered, being implemented, or are expected to become part of the landscape within the period being modelled.

It’s important to recognise that this view of the technology strategy is just a point-in-time snapshot. It can’t be left as a static document and needs to be reviewed periodically. Even so, it gives some guidance from which to generate a plan of activities, so that the target vision can become reality.

I’m sure it’s not new, but it would be good to know the origin of this approach if anyone has used something similar in the past!

Migrating Azure virtual machines from ASM (Classic) to ARM (Resource Manager)


I was recently discussing Azure infrastructure services with a customer who has implemented a solution based on Azure Service Management (ASM – also known as classic mode) but is now looking to move to Azure Resource Manager (ARM).

Moving to ARM has some significant benefits. For a start, we move to declarative, template-driven deployment (infrastructure as code). Under ASM we had programmatic infrastructure deployment, where we wrote scripts to say “Dear Azure, here’s a list of everything I want you to do, in excruciating detail” and deployment ran serially. With ARM we say “Dear Azure, here’s what I want my environment to look like – go and make it happen” and, because Azure knows the dependencies (they are defined in the template), it can deploy resources in parallel:

  • If a resource is not present, it will be created.
  • If a resource is present but has a different configuration, it will be adjusted.
  • If a resource is present and correctly configured, it will be used.
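To give a flavour of the declarative approach described above, here’s a minimal template sketch that asks for a single storage account (the resource name, API version and SKU are purely illustrative):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "examplestorage001",
      "apiVersion": "2016-01-01",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "Storage",
      "properties": {}
    }
  ]
}
```

Deploying this template repeatedly is idempotent: Azure compares the declared state with what exists and only creates or adjusts what is needed.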

ASM is not deprecated, but new features are coming to ARM and they won’t be back-ported. Even Azure AD now runs under ARM (one of the last services to come across), so there really is very little reason to use ASM.

But what if you already have an ASM infrastructure – how do you move to ARM? Christos Matskas (@christosmatskas) has a great post on options for migrating Azure VMs from ASM (v1) to ARM (v2) which talks about four methods, each with its own pros and cons:

  • ASM2ARM script (only one VM at a time; requires downtime)
  • Azure PowerShell and/or CLI (can be scripted and can roll-back; caveats and limitations around migrating whole vNets)
  • MigAz tool (comprehensive option that exports JSON templates too; some downtime required)
  • Azure Site Recovery (straightforward, good management; vs. setup time and downtime to migrate)

Full details are in Christos’ post, which is a great starting point for planning Azure VM migrations.

Preparation notes for ITIL Foundation exam: Part 1 (overview/introduction)


Being able to “talk service” is an important part of the role of an IT architect so I’m currently studying for my IT Infrastructure Library (ITIL®) Foundation certification.

At the time of writing, I haven’t yet sat the exam (so this post doesn’t breach any NDA) but the notes that follow were taken as I studied.

About ITIL

There are various ITIL introductions on the Internet – this video is the one from Axelos, the joint venture between Capita and the UK Cabinet Office that administers ITIL (and PRINCE2) – and it’s really just an advert…


For a better understanding of what ITIL is, try this:


ITIL certification is available at several levels (foundation, practitioner, intermediate, expert and master) with foundation being the entry level. The ITIL Foundation syllabus details the information that candidates are expected to demonstrate to be successful in the exam.

Axelos has published three “top tips” articles around the ITIL Foundation certification:

There are also sample exam papers available, as well as an Official ITIL Exam app for testing your knowledge.

ITIL Overview

IT Service Management is:

“The implementation and management of quality IT services that meet the needs of the business. IT service management is performed by IT service providers, through an appropriate mix of people, process and information technology.”

ITIL is an IT Management framework showing best practices for IT Service Management. It’s a library of 5 books (and other resources):

  • Service Strategy.
  • Service Design.
  • Service Transition.
  • Service Operation.
  • Continual Service Improvement (CSI).

There is also complementary guidance (e.g. release control and validation) – broken out by industry vertical (e.g. health, government) or by technology architecture (e.g. cloud, networking, software).

Basic terminology

  • Baselines – starting/reference points, used to look back or get back:
    • ITSM Baseline – used to measure a service improvement plan (SIP).
    • Configuration Baseline – used for remediation/back-out from a change.
    • Performance Baseline – response times before a service improvement was made.
  • Business case – justification for spending money (planning tool):
    • Costs.
    • Benefits.
    • Risk.
    • Potential problems.
  • Capabilities – the ability to carry out activities (functions and processes).
  • Functions – a team of people and the tools they use to carry out activities:
    • Who will do something – e.g.:
      • Service Desk.
      • Technical Management.
      • Application Management.
      • IT Operations Management.
  • IT Service Management – capabilities providing value to customers in the form of services.
  • Process – co-ordinated activities to produce an outcome that provides value:
    • How to do something.
  • Process Owner vs. Manager
    • Owner is responsible and accountable for making sure process does what it should.
    • Manager is responsible for operational management of the process (reports to the owner).
  • Resources:
    • IT infrastructure.
    • People.
    • Money.
    • Tangible assets (used to deliver service, cf. capabilities which are intangible).
  • Service – a means of delivering value:
    • Manage costs on the provider side whilst delivering value to the customer.
    • Each service will have a service strategy.
  • Service owner – responsible for delivering the service; also responsible for CSI.

ITSM and Services

  • The organisation has strategic goals and objectives.
  • Core business processes are the activities that produce results for the organisation (and rely on that vision).
  • The IT service organisation exists to execute the core business processes.
  • IT service management provides repeatable, managed and controlled processes to deliver services.
  • IT technical components – computers, networking, etc.

Each layer supports the levels above.


ITIL defines a service as:

“A means of delivering value to customers by facilitating outcomes customers want to achieve without the ownership of specific costs or risks”

Processes and Functions


ITIL defines a process as:

“A structured set of activities designed to accomplish a specific objective. A process takes one or more defined inputs and turns them into defined outputs.”

A process has:

  • A trigger.
  • An activity.
  • A dependency.
  • A sequence.


All processes:

  1. Are measured.
  2. Have specific results.
  3. Meet expectations.
  4. Trigger change.

Processes have practitioners, managers and owners (accountable for making sure the process is fit for purpose, including definition of the process).


ITIL defines a function as a:

“Grouping of roles that are responsible for performing a defined process or activity.”

  • Service Desk.
  • Technical Management.
  • Applications Management.
  • Facilities Management.
  • IT Operations Control.

Functions interact and have dependencies.

Responsibility Assignment Matrix (RAM chart) – e.g. RACI:

  • Responsible.
  • Accountable.
  • Consulted.
  • Informed.

A RACI chart maps processes (and their activities) to roles.
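The RACI idea above can be sketched as a simple data structure. This is a minimal illustration only – the activity and role names are hypothetical examples, not taken from the ITIL syllabus:

```python
# Illustrative sketch (assumed example, not from the syllabus): a RACI chart
# modelled as a mapping from activities to {role: letter} assignments.
raci = {
    "Log incident": {"Service Desk": "R", "Incident Manager": "A", "User": "I"},
    "Approve change": {"Change Manager": "A", "CAB": "C", "Service Desk": "I"},
}

def accountable(activity: str) -> str:
    """Return the single role marked Accountable (A) for an activity."""
    roles = [role for role, letter in raci[activity].items() if letter == "A"]
    if len(roles) != 1:
        # A well-formed RACI chart has exactly one Accountable role per activity.
        raise ValueError("each activity needs exactly one Accountable role")
    return roles[0]

print(accountable("Approve change"))  # Change Manager
```

The check for exactly one “A” per activity reflects the usual RACI convention that accountability cannot be shared.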

ITIL Service Lifecycle

Earlier versions of ITIL focused on the processes; version 3 focuses on why those processes are necessary. At foundation level, candidates need to know the objectives rather than the detail.

ITIL v3 lifecycle

The idea for a service is conceptualised, then the service is designed, transitioned into production, and maintained through operation – always looking for ways to improve a process or service for customers (and deliver more value).

Services are born and will eventually be retired.

The service catalogue details the things on offer, together with service levels.

The lists below show the processes for each stage of the ITIL service lifecycle:

Service strategy:

  • Business relationship management.
  • Service portfolio management.
  • Financial management.
  • Demand management*.
  • IT strategy*.

* Not required at foundation level

Service design:

  • Design coordination.
  • Supplier management.
  • Information security management.
  • Service catalogue management.
  • IT service continuity management.
  • Availability management.
  • Capacity management.
  • Service level management.

Service transition:

  • Transition planning.
  • Knowledge management.
  • Release and deployment management.
  • Service asset and configuration management.
  • Change management.
  • Change evaluation*.
  • Service validation and testing*.

* Not discussed in detail at foundation level

Service operation (processes):

  • Event management.
  • Request fulfilment.
  • Access management.
  • Problem management.
  • Incident management.

Service operation (functions):

  • Service desk.
  • IT operations.
  • Application management.
  • Technical management.

CSI:

  • Seven-step improvement process.

The next post in this series will follow soon, looking at service strategy.

These notes were written and published prior to sitting the exam (so this post doesn’t breach any NDA). They are intended as an aid and no guarantee is given or implied as to their suitability for others hoping to pass the exam.

ITIL® is a registered trademark of AXELOS Limited.