I was recently discussing Azure infrastructure services with a customer who has implemented a solution based on Azure Service Manager (ASM – also known as classic mode) but is now looking to move to Azure Resource Manager (ARM).
Moving to ARM has some significant benefits. For a start, we move to declarative, template-driven deployment (infrastructure as code). Under ASM we had programmatic infrastructure deployment, where we wrote scripts to say “Dear Azure, here’s a list of everything I want you to do, in excruciating detail” and deployment ran serially. With ARM we say “Dear Azure, here’s what I want my environment to look like – go and make it happen” and, because Azure knows the dependencies (they are defined in the template – see the sketch after this list), it can deploy resources in parallel. Deployments are also idempotent:
If a resource is not present, it will be created.
If a resource is present but has a different configuration, it will be adjusted.
If a resource is present and correctly configured, it will be used.
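To illustrate, here’s a minimal, hypothetical template sketch (the resource names are mine and it’s trimmed right down, rather than being production-ready). The dependsOn element is how Azure knows to finish creating the virtual network before the network interface that references it; resources without dependencies can be deployed in parallel:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Network/virtualNetworks",
      "apiVersion": "2016-03-30",
      "name": "example-vnet",
      "location": "[resourceGroup().location]",
      "properties": {
        "addressSpace": { "addressPrefixes": [ "10.0.0.0/16" ] },
        "subnets": [
          { "name": "default", "properties": { "addressPrefix": "10.0.0.0/24" } }
        ]
      }
    },
    {
      "type": "Microsoft.Network/networkInterfaces",
      "apiVersion": "2016-03-30",
      "name": "example-nic",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[resourceId('Microsoft.Network/virtualNetworks', 'example-vnet')]"
      ],
      "properties": {
        "ipConfigurations": [
          {
            "name": "ipconfig1",
            "properties": {
              "privateIPAllocationMethod": "Dynamic",
              "subnet": {
                "id": "[concat(resourceId('Microsoft.Network/virtualNetworks', 'example-vnet'), '/subnets/default')]"
              }
            }
          }
        ]
      }
    }
  ]
}
```

A template like this is pushed to a resource group with a single command (e.g. New-AzureRmResourceGroupDeployment in PowerShell) and re-running the same deployment simply brings the environment back into line with the template, per the list above.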
ASM is not deprecated, but new features are coming to ARM and they won’t be back-ported. Even Azure AD now runs under ARM (one of the last services to come across), so there really is very little reason to use ASM.
Certification
ITIL certification is available at several levels (foundation, practitioner, intermediate, expert and master), with foundation being the entry level. The ITIL Foundation syllabus details the knowledge that candidates are expected to demonstrate to be successful in the exam.
Axelos has published three “top tips” articles around the ITIL Foundation certification.
What is ITIL?
For a better understanding of what ITIL is, try this definition of IT service management:
“The implementation and management of quality IT services that meet the needs of the business. IT service management is performed by IT service providers, through an appropriate mix of people, process and information technology.”
ITIL is an IT Management framework showing best practices for IT Service Management. It’s a library of 5 books (and other resources):
Service Strategy.
Service Design.
Service Transition.
Service Operation.
Continual Service Improvement (CSI).
There is also complementary guidance (e.g. release control and validation) – broken out by industry vertical (e.g. health, government) or by technology architecture (e.g. cloud, networking, software).
Basic terminology
Baselines – starting points/reference points, used to look back or to get back:
ITSM Baseline – used to measure a service improvement plan (SIP).
Configuration Baseline – used for remediation/back-out from a change.
Performance Baseline – response/performance measured before a service improvement is made.
Business case – justification for spending money (planning tool):
Costs.
Benefits.
Risk.
Potential problems.
Capabilities – ability to carry out activities (functions and processes).
Functions – team of people and tools carrying out activities:
Who will do something – e.g.:
Service Desk.
Technical Management.
Application Management.
IT Operations Management.
IT Service Management – capabilities providing value to customers in the form of services.
Process – co-ordinated activities to produce an outcome that provides value:
How to do something.
Process Owner vs. Manager
Owner is responsible and accountable for making sure the process does what it should.
Manager is responsible for operational management of the process (reports to the owner).
Resources:
IT infrastructure.
People.
Money.
Tangible assets (used to deliver service, cf. capabilities which are intangible).
Service – a means to deliver value:
Manage costs on the provider side whilst delivering value to the customer.
A service will have a service strategy.
Service owner – responsible for delivering the service. Also responsible for CSI.
ITSM and Services
The organisation has strategic goals and objectives.
Core business processes are the activities that produce results for the organisation (reliant on the vision).
IT service organisation – exists to execute on the core business processes.
IT service management – repeatable, managed and controlled processes to deliver services.
IT technical – computers, networking, etc.
Each layer supports the layers above.
Services:
“A means of delivering value to customers by facilitating outcomes customers want to achieve without the ownership of specific costs or risks”
Processes and Functions
Processes:
“A structured set of activities designed to accomplish a specific objective. A process takes one or more defined inputs and turns them into defined outputs.”
Trigger.
Activity.
Dependency.
Sequence.
Processes:
Are measured.
Have specific results.
Meet expectations.
Trigger change.
Processes have practitioners, managers and owners (accountable for making sure the process is fit for purpose, including definition of the process).
Functions:
“Grouping of roles that are responsible for performing a defined process or activity.”
Service Desk.
Technical Management.
Applications Management.
Facilities Management.
IT Operations Control.
Functions interact and have dependencies.
Responsibility Assignment Matrix (RAM chart) – e.g. RACI:
Responsible.
Accountable.
Consulted.
Informed.
Used to map processes to roles – e.g. for a change, the implementer might be Responsible, the change manager Accountable, technical experts Consulted and the service desk Informed.
ITIL Service Lifecycle
Earlier versions of ITIL focused on the processes; version 3 focuses on why those processes are necessary. For foundation level, candidates need to know the objectives, rather than the detail.
The idea for a service is conceptualised, then the service is designed, transitioned into production and maintained through operation – always looking for ways to improve a process or service for customers (and deliver more value).
Services are born and eventually retired.
The service catalogue details the things on offer, together with service levels.
The lists below show the processes for each stage within the ITIL service lifecycle:
Service strategy:
Business relationship management.
Service portfolio management.
Financial management.
Demand management*.
IT strategy*.
(* Not required at foundation level.)
Service design:
Design coordination.
Supplier management.
Information security management.
Service catalogue management.
IT service continuity management.
Availability management.
Capacity management.
Service level management.
Service transition:
Transition planning and support.
Knowledge management.
Release and deployment management.
Service asset and configuration management.
Change management.
Change evaluation*.
Service validation and testing*.
(* Not discussed in detail at foundation level.)
Service operation – processes:
Event management.
Request fulfilment.
Access management.
Problem management.
Incident management.
Service operation – functions:
Service desk.
IT operations.
Application management.
Technical management.
CSI:
7-step improvement process.
The next post in this series will follow soon, looking at service strategy.
These notes were written and published prior to sitting the exam (so this post doesn’t breach any NDA). They are intended as an aid and no guarantee is given or implied as to their suitability for others hoping to pass the exam.
ITIL® is a registered trademark of Axelos Limited.
I’m not massively into collecting and curating digital video content – I have some family movies, and I stream content from BBC iPlayer, Amazon Video, etc. – pretty normal stuff. Even so, there are times that I think I could use the tech available to me in a better way – and there are times when I find I can do something that I didn’t previously know about!
Today was one of those days. Whilst studying for an exam, I wanted to watch some videos – in the comfort of my living room rather than on a PC – and I was sure there must be a way. I had copies on my Synology NAS but, somewhat frustratingly, the Plex media server wasn’t picking them up (and I wanted to be watching the videos, not playing with Plex!).
Then, when I right-clicked on a video file in Windows Explorer, I spotted an option to “Cast to Device”, which included my Samsung TV and my Bose speakers – though I think the choices will depend on the Digital Living Network Alliance (DLNA) devices available on the local network. I selected the TV and found I could create a playlist of videos to watch from the comfort of my sofa – and, even better, the TV remote can be used to pause/resume playback (the PC was in a different room).
Now I’m studying in comfort (well, maybe not – I gave up the sofa and lay on the floor with another PC to take notes!) and streaming media across the home network using Windows and DLNA.
I got a bit of a surprise in my email recently, when I saw that someone had nominated this blog for the UK Blog Awards 2017. That’s a nice touch – after 14 years and well over 2000 posts (some that even I now regard as drivel and some that people find useful), it’s exactly the kind of feedback that keeps me going!
The site has no marketing team (just me) and no social media campaign (just my website and my Twitter feed @markwilsonit) – and now it’s down to the public vote, where I’m up against all of the other entrants in the Digital and Technology category, vying for a place on the shortlist of 8 blogs.
IT architecture is a funny old game… you see, no-one does it the same way. Sure, we have frameworks and there’s a lot of discussion about “how” to “architect” (is that even a verb?) but there is no defined process that I’m aware of and that’s broadly adopted.
A few years ago, whilst working for a large systems integrator, I was responsible for parts of a technology standardisation programme that was intended to use architecture to drive consistency in the definition, design and delivery of solutions. We had a complicated system of offerings, a technology strategy, policies, architectural principles, a taxonomy, patterns, architecture advice notes, “best practice”, and a governance process with committees. It will probably come as no surprise that there was a fair amount of politics involved – some “not invented here” and some skunkworks projects with divisions defining their own approach because the one from our CTO Office “ivory tower” didn’t fit well.
I’m not writing this to bad-mouth a previous employer – that would be terribly bad form – but I honestly don’t believe that the scenario I’ve described would be significantly different in any large organisation. Politics is a fact of life when working in a large enterprise (and some smaller ones too!). And what we created was, at its heart, sound. I might have preferred a different technical solution to manage it (rather than a clunky portfolio application based on SharePoint lists and workflow) but I still think the principles were solid.
Fast-forward to 2016 and I’m working in a much smaller but rapidly-growing company and I’m, once again, trying to drive standardisation in our solutions (working with my peers in the Architecture Practice). This time I’m taking a much more lightweight approach and, I hope, bringing key stakeholders in our business on the journey too.
We have:
Standards: levels of quality or attainment used as a measure or model. These are what we consider as “normal”.
Principles: fundamental truths or propositions that serve as a foundation for a system or behaviour. These are the rules when designing or architecting a system – our commandments.
We’ve kept these simple – there are a handful of standards and around a dozen principles – but they seem to be serving us well so far.
Then, there’s our reference architecture. The team has defined three levels:
An overall reference model that provides a high level structure with domains around which we can build a set of architecture patterns.
The technical architecture – with an “architecture pattern” per domain. At this point, the patterns are still technology-agnostic – for example a domain called “Datacentre Services” might include “Compute”, “Storage”, “Location”, “Scalability” and so on. Although our business is purely built around the Microsoft platform, any number of products could theoretically be aligned to what is really a taxonomy of solution components – the core building blocks for our solutions.
“Design patterns” – this is where products come into play, describing the approach we take to implementing each component, with details of what it is, why it would be used, some examples, one or more diagrams with a pattern for implementing the solution component and some descriptive text including details such as dependencies, options and lifecycle considerations. These patterns adhere to our architectural standards and principles, bringing the whole thing full-circle.
It’s fair to say that what we’ve done so far is about technology solutions – there’s still more to be done to include business processes and to move on towards Enterprise Architecture – but we’re heading in the right direction.
I can’t blog the details here – this is my personal blog and our reference architecture is confidential – but I’m pleased with what we’ve created. Defining all of the design patterns is laborious but will be worthwhile. The next stage is to make sure that all of the consulting teams are on board and aligned (during which I’m sure there will be some changes made to reflect the views of the guys who live and breathe technology every day – rather than just “arm waving” and “colouring in” as I do!) – but I’m determined to make this work in a collaborative manner.
Our work will never be complete – there’s a balance to strike between “standardisation” and “innovation” (an often mis-used word, hence the quotation marks). Patterns don’t have to be static – and we have to drive forward and adopt new technologies as they come on stream – not allowing ourselves to stagnate in the comfort of “but that’s how we’ve always done it”. Nevertheless, I’m sure that this approach has merit – if only through reduced risk and improved consistency of delivery.
Where is the hash key? (essential for tweeting!) – Ctrl+Alt+3 creates a lovely # (no, it’s not a “pound” – a pound is £, unless we’re talking about weight, when it’s lb).
And, talking of currency, a Euro symbol (€) is Right Alt+4 (just as in Windows), not on the 2 key (as printed on my keyboard).
The trainer I’ve gone for is a Tacx Vortex and it’s proved pretty easy to set up. Ideally I’d use a spare wheel with a trainer tyre but I don’t have one and my tyres are already looking a bit worn – I may change them when the bike comes out again in the spring, meaning I can afford to wear them out on the trainer first! All I had to do was swap out my quick release skewer for the one that comes with the trainer and my bike was easily mounted.
Calibration was a simple case of using the Tacx Utility app on my iPhone – which finds the trainer via Bluetooth and can also be used for firmware upgrades (it’s available for Android too). All you have to do is cycle up to a given speed and away you go!
I found that the Tacx Utility would always locate my trainer but the Tacx Cycling app was less reliable. Ultimately that’s not a problem because I use the Zwift virtual cycling platform (more on that in a moment) and the Zwift Mobile Link app will allow the PC to find my trainer via Wi-Fi and Bluetooth. There is one gotcha though – the second time I used the trainer, I spent considerable time trying to get things working with Zwift. In the end I found that:
The Tacx apps couldn’t be running at the same time as Zwift Mobile Link.
My phone had a tendency to roam onto another Wi-Fi network (the phone and the PC have to be on the same network for the mobile link to work).
My Bose Soundlink Mini II speakers were also interfering with the Bluetooth connection so if I wanted to listen to music whilst cycling then a cable was needed!
I’m guessing that none of this would be an issue if I switched to ANT+ – as my Garmin Edge 810 does. The trick when using the Garmin is to go into the bike profile and look for sensors. Just remember to turn GPS off when using it on a stationary bike (or else no distance is recorded). Also, remember that:
“[…] when doing an indoor activity or when using devices that do not have GPS capability, the [Garmin Speed and Cadence Sensor] will need to be calibrated manually by entering a custom wheel size within the bike profile to provide accurate speed and distance.” [Garmin Support]
And, talking of ANT+, one thing I couldn’t work out before I bought my trainer was whether I needed to buy an ANT+ dongle for Zwift. Well, the answer is “no”, as the Zwift Mobile Link app works beautifully as a bridge to my trainer – it’s worth checking out the Zwift website to see which trainers work with the platform though (and any other gear that may be required).
I’ll probably write another post about Zwift but, for now, check out:
In the meantime, it’s worth mentioning that I started out riding on a 14 day/50km trial. I was about to switch to a paid subscription but I found out Strava Premium members get 2 months’ Zwift free* and, as that’s half the price of Zwift, I’ve upgraded my Strava for a couple of months instead!
So, with the trainer set up in the garage (though it’s easy to pop the bike off it if we do have some winter sunshine), I can keep my miles up through the winter, which should make the training much, much easier in the spring – that’s the idea anyway!
*It now looks as though the Strava Premium-Zwift offer has been limited to just November and December 2016 – though I’m sure it will come around again!
18 months ago, I joined the team at risual – and I’ve been meaning to write this blog post since, well, since just about the time I joined the team at risual!
In previous roles, I’ve used a mish-mash of communications systems that, despite the claims of Cisco et al., have been anything but unified. For example, using Lync for IM and presence, with a hokey client bridge called CUCILync to pass through presence information etc. from the Cisco Call Manager PBX; then a separate WebEx system for conferencing – but with no ability to host PSTN and VoIP callers in the same conference (it had to be set up as either VoIP or PSTN). Quite simply, there was too much friction.
Working for a Microsoft-only consultancy means that I use tools from one vendor – Microsoft. That means Skype for Business (formerly Lync) is my one-stop-shop for instant messaging, presence, desktop sharing, web conferencing, etc. and that it’s integrated with the PSTN and with Exchange (for voicemail and missed call notifications).
The one fly in the ointment is that my company mobile phone is on the EE network. EE’s 4G coverage is fast in urban areas but the GPRS signal for calls can be pretty poor and the frequencies used mean that the signals don’t pass through walls very well for indoor coverage. One of the worst locations is my home, which is my primary work location when I’m not required to be consulting on a customer site.
Here, the flexibility of my Skype for Business setup helps out:
By setting a divert on my company mobile phone to a DDI that routes the call to work, I can let Skype for Business simultaneously call any Skype for Business applications running on my devices and the PSTN number for my personal phone (which, unsurprisingly, is on a network that works at home!). Then, I can answer as a PSTN call or as a VoIP call (depending on whether I have my headset connected!). If I can’t take the call, then a missed call notification or voicemail ends up in my Inbox, including contact details if the caller was in the company directory.
Ever-so-occasionally, the call is diverted to my personal voicemail, rather than transferring back to work and into Exchange, but that only seems to happen if I’m already engaged on the mobile. 90% of the time, my missed calls and voicemails end up in my Inbox – where they are processed in line with my email.
With new options for hosting Skype for Business Online (in the Microsoft cloud, as part of Office 365), it’s shaping up to be a credible alternative to more expensive systems from Cisco, Avaya, etc. and as an end user I’m impressed at the flexibility it offers me. What’s more, based on the conversations I have with my clients, it seems that the “but is Lync really a serious PBX replacement?” stigma is wearing off…
For as long as I can remember, I’ve had a selection of PCs (Windows, Mac or Linux) in the house running a variety of operating systems. The Windows machines come and go – they are mostly laptops provided for work (either mine or my wife’s) – although we also have a Lenovo Flex 15 as “the family PC” (in reality, it’s difficult to get near it most of the time as the kids are using it!). Linux is normally for me to do something geeky on – whether that’s one of the Raspberry Pis or an old netbook running Ubuntu to easily update an Arduino, etc. The Mac purchases require a bit more consideration – their premium price means that it’s not something to go into without a great deal of thought and, although I still regret selling my Bondi Blue G3 iMac (one of the originals), I have 2006 and 2012 Mac Minis, and a late-2007 MacBook.
2006 Mac Mini running Windows 10!
Earlier this year, I brought the 2006 Mac Mini back to life with an SSD upgrade and, although it’s not “supported”, I managed to install Windows 10 on it (actually, I installed Windows 7 via Boot Camp, then upgraded). It’s working a treat and, although it only has 2GB of RAM, it’s fine for a bit of web browsing, social media, scanning documents, etc. The only thing I haven’t been able to get Windows to recognise is my external iSight camera – which is a great device but has long since been discontinued. I had some challenges along the way (and I can’t find all of the details for the process I used now) but some of the links I found useful include:
I also found that my aluminium Apple keyboard (wired) wouldn’t work for startup options; however, when I plugged in an older Apple White Pro keyboard, the startup options worked! I later found a forum post (when I was writing this blog post, but not when I originally had the issue) which suggests that a firmware update will fix the issue with the aluminium keyboard.
Once Windows 7 was installed on the Mac, it was just a case of following the Windows 10 upgrade process (back when Windows 10 was still a free upgrade).
Late 2007 MacBook destined for the scrap heap
The MacBook has been less successful. Not only has the keyboard rest broken yet again (for the third time), and the replacement battery (with only around 90 charges on it) turned out to be completely dead after a couple of years of disuse, but it also seems the latest supported Mac OS X version is 10.7.5 (Lion). I had hoped to bring it out of hibernation for use in the garage with Zwift but that needs at least OS X 10.8, leaving me waiting for an iOS app for Zwift (it’s on the way), or borrowing the family PC from the kids when I jump on the turbo trainer. Regardless, with no battery and an ancient OS, it looks like this MacBook is about to go to PC heaven…
2012 Mac Mini going strong but watch the updates…
The 2012 Mac Mini running OS X 10.10 (Yosemite) is still supported and I’m considering installing macOS 10.12 (Sierra) on it. I say considering, because that looks likely to force me to spend money on a Lightroom 6 upgrade (with Lightroom 7 just around the corner, based on the fact that we’re up to 6.7 now). I also skipped OS X 10.11 (El Capitan), which I now regret, because that means it’s not in my purchase history and I can’t download it if I ever need an older macOS version.
I recently wanted to use a graphic of all the current Microsoft datacentre regions for a customer presentation (to be delivered in Microsoft’s offices, in partnership with Microsoft, so copyright was not a concern). The Azure website has a suitable graphic but there are elements that hide some of the content and saving the picture just saves the overlay with the locations, not the map behind.
I’d seen my sons using developer tools in a browser to change the colours on the page – and that gave me an idea… what if I used the developer tools in my browser to turn off elements, one by one, until I got back to the underlying graphic?
So, by right-clicking on one of the elements I wanted to remove and choosing Inspect Element, I was able to view the associated code, delete a <div> or two and peel back the layers.
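If there are lots of layers to remove, the console in the developer tools can speed things up. This is just an illustrative sketch – the div.overlay selector is hypothetical and would need to match whatever elements Inspect Element shows are actually in the way:

```javascript
// Hypothetical selector: replace 'div.overlay' with the actual class/id of the
// elements hiding the map, as identified with Inspect Element.
document.querySelectorAll('div.overlay').forEach(function (el) {
  el.parentNode.removeChild(el); // same effect as deleting the <div> by hand
});
```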
After that, I was a copy-and-paste away from the graphic I needed to add to my presentation.