Weeknote 19/2020: Azure exam study, remote working, and novelty video conference backgrounds

Another week in the furloughed fun house…

Studying

I still have a couple of exams I’d like to complete this month. I’ve been procrastinating about whether to take the Microsoft Azure Architect exams in their current form (AZ-300/301) or to wait for the replacements (AZ-303/304). As those replacements have been postponed from late April until the end of June (at least), I’ve booked AZ-300/301 and am cramming in lots of learning, based on free training from the Microsoft Learn website.

I’m sure it’s deeper (technically) than I need for an Architect exam, but it’s good knowledge… I just hope I can get through it all before the first exam appointment next Thursday evening…

Thoughts on remote working during the current crisis

I’ve seen this doing the rounds a couple of times on Twitter and I don’t know the original source, but it’s spot on. Words to live by in these times:

  1. You’re not “working from home”. You’re “At your home, during a crisis, trying to work” [whilst parenting, schooling, helping vulnerable people, etc.].
  2. Your personal physical, mental and emotional health is far more important than anything else right now.
  3. You should not try to compensate for lost productivity by working longer hours.
  4. You will be kind to yourself and not judge how you are coping based on how you see others coping.
  5. You will be kind to others and not judge how they are coping based on how you are coping.
  6. Your success will not [should not] be measured in the same way it was when things were normal.

This animation may also help…

Also, forget the 9-5:

As for returning to the office?

Video conference backgrounds

Novelty backgrounds for video conferences are a big thing right now. Here are a couple of collections that came to my attention this week:

Upgrades to the Zwift bike

My old road bike has been “retired” for a year now, living out its life connected to an indoor trainer and used for Zwifting. It’s needed some upgrades recently though…

I also realised why I struggled to do 90km on the road today… that was my fifth ride this week, on top of another 100km which was mostly off-road!

Weeknote 16/2020: new certifications, electronic bicycle gears, and a new geek TV series

Another week, another post with some of the things I encountered this week that might be useful/of interest to others…

Fundamentally certified

Last week, I mentioned I had passed the Microsoft Power Platform Fundamentals exam (and I passed the Microsoft Azure Fundamentals and Microsoft 365 Fundamentals exams several months ago). This week, I added Dynamics 365 Fundamentals to that list, giving me the complete set of Microsoft Fundamentals certifications.

That’s 3 exams in 7 “working” days since I was furloughed, so I think next week I’ll give the exams a bit of a rest, knock out some blog posts around the things I’ve learned and maybe play with some tech too…

Website move

Easter Monday also saw this website move to a new server. The move was a bit rushed (I missed some communications from my hosting provider) and had some DNS challenges, but we took the opportunity to force HTTPS and it seems a little more responsive to me too (though I haven’t run any tests). For a long time, I’ve been considering moving to Azure App Service – if only for reasons of geek curiosity – but the support I receive from my current provider means I’m pretty sure it will be staying put for the time being.

The intersection of cycling and technology

Those who follow me on Twitter are probably aware that for large parts of the year, I’m “Cyclist’s Dad”. At weekends in the autumn, I can usually be found in a muddy field somewhere (or driving to/from one), acting as pit crew, principal sponsor and Directeur Sportif for my eldest son – who loves to race his bike, with cyclo-cross as his favourite discipline.

This weekend, we should have been at Battle on the Beach (not technically cyclo-cross but still an off-road race) but that’s been postponed until the Autumn, for obvious reasons.

Instead, we’ve been having fun as my son upgraded his CX bike to electronic gears, using a Shimano Ultegra/GRX Di2 mix.

It’s all been his work – except a little help from Olney Bikes to swap over the bottom bracket (as I lack the tools for changing press-fit BBs) – and the end result is pretty spectacular (thanks also to Corley Cycles/@CorleyCycles for their help sourcing some brake hose inserts at short notice). I’ve never had the good fortune (or budget) for electronic shifting on my bikes, but having ridden his yesterday (long story involving a mid-ride puncture on my bike), I was blown away by the smooth shifting and by the difference made by all the components he’s swapped to save weight. Oh yes, and it’s finished with a gold chain. I mean, who doesn’t need a gold chain on their bike?

Electronic shifting has its critics but first impressions, based on a couple of off-road rides this weekend, are very positive. Maybe I need to get a couple of newspaper delivery rounds to start saving for upgrades on my bikes…

TV

Right, it’s getting late now and Sunday night is a “school night” (especially true since my Furlough Leave is being spent focusing on learning and development). I’m off to watch an episode of the BBC’s new drama, “Devs”, before bed. I’m 4 episodes in now and it’s a bit weird but it’s got me hooked…

Weeknote 15/2020: a cancelled holiday, some new certifications and video conferencing fatigue

Continuing the series of weekly blog posts, providing a brief summary of notable things from my week.

Cancelled holiday #1

I should have been in Snowdonia this week – taking a break with my family. Obviously that didn’t happen, with the UK’s social distancing in full effect, but at least we were able to defer our accommodation booking.

It has been interesting though: being forced to stay at home has helped me to learn to relax a little… there’s still a never-ending list of things that need to be done, but they can wait a while.

Learning and development

Last week, I mentioned studying for the AWS Cloud Practitioner Essentials Exam and this week saw me completing that training before attempting the exam.

It was my first online-proctored exam and I had some concerns about finding a suitable space. Even in a relatively large home (by UK standards), with a family of four (plus a dog) all at home, it can be difficult to find a room with a guarantee of not being disturbed. I’ve heard of people using the bathroom (and I thought about using my car). In the end, and thanks to some advice from colleagues – principally Steve Rush (@MrSteveRush) and Natalie Dellar (@NatalieDellar) – as well as some help from Twitter, I managed to cover the TV and some boxes in my loft room, banish the family, and successfully pass the test.

With exam 1 under my belt (I’m now an AWS Certified Cloud Practitioner), I decided to squeeze another in before the Easter break and successfully studied for, and passed, the Microsoft Power Platform Fundamentals exam, despite losing half a day to some internal sales training.

In both cases, I used the official study materials from Amazon/Microsoft and, although they were not everything that was needed to pass the exams, the combination of these and my experience from elsewhere helped (for example having already passed the Microsoft Azure Fundamentals exam meant that many of the concepts in the AWS exam were already familiar).

Thoughts on the current remote working situation

These should probably have been in last week’s weeknote (when it wasn’t yet the school holidays, so we were trying to educate our children too) but recently it’s become particularly apparent to me that we are not living in times of “working from home” – this is “at home, during a crisis, trying to work”, which is very different:

Some other key points I’ve picked up include that:

  • Personal, physical and mental health is more important than anything else right now. (I was disappointed to find that even the local Police are referring to mythical time limits on allowed exercise here in the UK – and I’m really lucky to be able to get out to cycle/walk in open countryside from my home, unlike so many.)
  • We should not be trying to make up for lost productivity by working more hours. (This is particularly important for those who are not used to remote working.)
  • And, if you’re furloughed, use the time wisely. (See above re: learning and development!)

Video conference fatigue

Inspired by Matt Ballantine’s virally-successful flowchart of a few years ago, I tried sketching something. It didn’t catch on in quite the same way, but it does seem to resonate with people.

In spite of my feelings on social video conferencing, I still took part in two virtual pub quizzes this week (James May’s was awful whilst Nick’s Pub Quiz continues to be fun) together with trans-Atlantic family Zooming over the Easter weekend…

Podcast backlog

Not driving and not going out for lunchtime solo dog walks has had a big impact on my podcast-listening…

I now need to schedule some time for catching up on The Archers and the rest of my podcasts!

Remote Work Survival Kit

In what spare time I’ve had, I’ve also been continuing to edit the Remote Work Survival Kit. It’s become a mammoth task, but there are relatively few updates arriving in the doc now. Some of the team have plans to move things forward, but I have a feeling it’s something that will never be “done”, will always be “good enough” and which I may step away from soon.

Possibly the best action film in the world…

My week finished with a family viewing of the 1988 film, “Die Hard”. I must admit it was “a bit more sweary” than I remembered (although nothing that my teenagers won’t already hear at school) but, whilst researching the film classification, it was interesting to read how it was changed from an 18 to a 15 with the passage of time.

Weeknote 14/2020: Podcasting, furlough and a socially-distanced birthday

We’re living in strange times at the moment, so it seems as good an opportunity as ever to bring back my attempts to blog at least weekly with a brief precis of my week.

In the beginning

The week started as normal. Well, sort of. The new normal. Like everyone else in the UK, I’m living in times of enforced social distancing, with limited reasons to leave the house. Thankfully, I can still exercise once a day – which for me is either a dog walk, a run or a bike ride.

On the work front, I had a couple of conversations around potential client work, but was also grappling with recording Skills Framework for the Information Age (SFIA) skills for my team. Those who’ve known me since my Fujitsu days may know that I’m no fan of SFIA and it was part of the reason I chose to leave that company… but it seems I can’t escape it.

Podcasting

On Monday evening, I stood in for Chris Weston (@ChrisWeston) as a spare “W” on the WB-40 Podcast. Matt Ballantine (@Ballantine70) and I had a chat about the impact of mass remote working, and Matt quizzed me about retro computing. I was terrible in the quiz but I think I managed to sound reasonably coherent in the interview – which was a lot of fun!

Furlough

A few weeks ago, most people in the UK would never have heard of “Furlough Leave”. For many, it’s become common parlance now, as the UK Government’s Job Retention Scheme becomes reality for hundreds of thousands, if not millions of employees. It’s a positive thing – it means that businesses can claim some cash from the Government to keep them afloat whilst staff who are unable to work due to the COVID-19/Coronavirus crisis restrictions are sent home. In theory, with businesses still liquid, we will all have jobs to go back to, once we’re allowed to return to some semblance of normality.

On Tuesday, I was part of a management team drawing up a list of potentially affected staff (including myself), based on strict criteria around individuals’ current workloads. On Wednesday it was confirmed that I would no longer be required to attend work for the next three weeks from that evening. I can’t provide any services for my employer – though I should stay in touch and personal development is encouraged.

Social distancing whilst shopping for immediate and extended family

So, Thursday morning, time to shop for provisions: stock is returning to the supermarket shelves after a relatively small shift in shopping habits completely disrupted the UK’s “just in time” supply chain. It’s hardly surprising as a nation prepared to stay in for a few weeks, with no more eating at school/work, no pubs/cafés/restaurants, and the media fuelling chaos with reports of “panic buying”.

Right now, after our excellent independent traders (like Olney Butchers), the weekly town market is the best place to go with plenty of produce, people keeping their distance, and fresh air. Unfortunately, with a family of four to feed (and elderly relatives to shop for too), it wasn’t enough – which meant trawling through two more supermarkets and a convenience store to find everything – and a whole morning gone. I’m not sure how many people I interacted with but it was probably too many, despite my best efforts.

Learning and development

With some provisions in the house, I spent a chunk of time researching Amazon Web Services certifications, before starting to study for the AWS Cloud Practitioner Essentials Exam. It should be a six-hour course but I can’t speed up or slow down the video, so I keep stopping to take notes (depending on the presenter), which makes it slow going…

I did do some Googling though, and found that a combination of Soundflower and Google Docs could be used to transcribe the audio!

I also dropped into a Microsoft virtual launch event for the latest Microsoft Business Applications (Dynamics 365 and Power Platform) updates. There’s lots of good stuff happening there – hopefully I’ll turn it into a blog post soon…

#NicksPubQuiz

Saturday night was a repeat of the previous week, taking part in “Nick’s Pub Quiz”. For those who haven’t heard of it – Nick Heath (@NickHeathSport) is a sports commentator who, understandably, is a bit light on the work front right now so he’s started running Internet Pub Quizzes, streaming on YouTube, for a suggested £1/person donation. Saturday night was his sixth (and my family’s second) – with over 1500 attendees on the live stream. Just like last week, my friend James and his family also took part (in their house) with us comparing scores on WhatsApp for a bit of competition!

Another year older

Ending the week on a high, Sunday saw my birthday arrive (48). We may not be able to go far, but I did manage a cycle ride with my eldest son, then back home for birthday cake (home-made Battenberg cake), and a family BBQ. And the sun shone. So, all in all, not a bad end to the week.

Preparation notes for Microsoft exam 70-534: Architecting Microsoft Azure Solutions


I’ve been preparing for Microsoft exam 70-534: Architecting Microsoft Azure Solutions. At the time of writing, I haven’t yet sat the exam (so this post doesn’t breach any NDA) but the notes that follow were taken as I studied.

Resources I used included:

  • Microsoft Association of Practicing Architects (MAPA) bootcamp (unfortunately the delivery suffered from issues with the streaming media platform and the practical labs are difficult to follow, partly due to changes in the platform).
  • Hands-on time with Azure – though the exam is still mostly based on the old Classic/Azure Service Manager (ASM) model, so I found myself going back to learn things in ASM that I do differently under Azure Resource Manager (ARM).
  • The Microsoft Press exam preparation book, which contains a lot more detail and is pretty readable (or it would be if I wasn’t trying to read it in PDF form – sometimes paperback books are better for flicking back and forward!).
  • A free Azure subscription (either sign up for a one-off £125 credit for a month, or you can get £20 each month for 12 months through Visual Studio Dev Essentials).

Other potentially-useful references include:

The rest of this post contains my study notes – which may be useful to others but will almost certainly not be enough to pass the exam (i.e. you’ll need to read around the topics too – the Azure documentation is generally very good).

Note that Microsoft Azure is a fast-moving landscape – these notes are based on studying the exam curriculum and may not be current – refer to the Azure documentation for the latest position.

Azure Networking

  • Virtual networks (VNets) are used to manage networking in Azure. Can only exist in one Azure region.
  • CIDR notation is used to describe networks.
  • Use different subnets to partition network – e.g. Internet-facing web servers from internal traffic; different environments.
  • Subnet has to be part of VNet range with no overlap.
  • All virtual machines (VMs) in a VNet can communicate (by default) but anything outside cannot talk (by default) – so VNet is default network boundary.
  • In ASM, every VM has an associated cloud service (with its own name @cloudapp.net). Without subnets the VMs can only communicate via a public IP. If multiple cloud services are on same VNet then VMs can communicate using private IP.
  • Endpoints are used to manage connections: internal (private) endpoint listening on a given port (e.g. for RDP on 3389); external (public) endpoint on defined port number – therefore go to a particular server, rather than just to the cloud service.
    • Public from anywhere on the Internet; private only within the cloud service/VNet.
  • Dynamic IP (DIP) is the private IP associated with a VM; only resolvable inside the VNet – external access needs a public IP. Can choose an IP address to use – and it will be reserved.
  • Virtual IP (VIP) – assigned to a cloud service – static public IP for as long as at least one VM running inside the cloud service.
  • Instance Level Public IP (ILPIP) – for direct connection to Azure VM from Internet (not via the cloud service); public IP attached to a VM. In this configuration, whatever ports open on the VM are open to the Internet – effectively bypassing the security of the VNet.
  • Use a VNet-to-VNet VPN to create a tunnel between VNets in different regions. This extends VNets to appear as if they were one.
  • Site-to-site VPN to create tunnel between on-premises network and Azure VNet. Uses persistent hardware on-premises.
  • Point-to-site VPN to create tunnel from individual computers to an Azure VNet. Software-based.
  • Multi-site VPN is a combination of the other methods.
  • Azure ExpressRoute avoids routing via an ISP – effectively a dedicated link from the customer datacentre to an Azure region, bypassing the ISP. High throughput, low latency and no effect on the Internet link.
    • ExpressRoute providers offer point-to-point Ethernet or connection via a cloud exchange. BGP sessions with edge routers on the customer site. 200Mbps/500Mbps/1Gbps/10Gbps.
    • Can use for Azure computing (IaaS); Azure public services (web apps, etc. – PaaS) or Office 365 (SaaS).
  • Secure network with Network Access Control Lists (ACLs), attached to a VIP – define what traffic will be allowed/denied to/from the VIP (i.e. the cloud service). Lower number rule has higher priority. First match is executed and rest are ignored.
    • If there is no ACL – all traffic is allowed (whatever endpoints are open will allow access); if there is one or more permit, deny all others; if there is one or more deny, allow all others; combination of permit and deny to define a specific IP range.
    • Network ACLs affect incoming traffic only.
  • Network Security Groups (NSGs) are attached to a VM or a subnet and act on both inbound and outbound traffic.
    • By default, all inbound access is blocked by the default inbound rules (allow inbound within the VNet and from the Azure load balancer; deny all other inbound – rules 65000/65001/65500).
    • Default outbound rules allow outbound within the VNet and to the Internet (0.0.0.0/0), and deny all others – rules 65000/65001/65500. (See the sketch after this list.)
    • Default rules can’t be edited but can be overridden with higher priority rules.
  • Can only use Network ACLs or NSGs – not both together.
  • VMs can have multiple NICs in different subnets – i.e. dual-homed machine.
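
As a concrete illustration, here’s a minimal sketch of creating an NSG with a single inbound rule. I’ve used the later Azure Resource Manager (AzureRM) PowerShell cmdlets – the exam-era ASM cmdlets differ – and the resource group, location and rule names are hypothetical:

    # Allow HTTPS in from the Internet; everything else falls through to the default rules
    $rule = New-AzureRmNetworkSecurityRuleConfig -Name "allow-https" -Description "Allow HTTPS inbound" `
        -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
        -SourceAddressPrefix Internet -SourcePortRange "*" `
        -DestinationAddressPrefix "*" -DestinationPortRange 443

    # The resulting NSG can then be associated with a subnet or a NIC
    New-AzureRmNetworkSecurityGroup -ResourceGroupName "rg-web" -Location "West Europe" `
        -Name "nsg-web" -SecurityRules $rule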

Azure Virtual Machines

  • Azure Hypervisor is similar to Hyper-V (but not the same).
  • Different sizes of VMs are available.
  • VMs are isolated at network and execution level – Azure customers never get access to the hypervisor – only to the VM layer.
  • Use Windows Server 2008 onwards or Linux: OpenSUSE; SUSE Enterprise Linux; CentOS; Ubuntu 12.04; Oracle Enterprise Linux; CoreOS; OpenLogic; RHEL
  • Basic and Standard service tiers – different machine types available:
    • General Purpose: A0-A4 Basic; A5-A7 Standard; A8-A9 Network Optimised (10Gbps networking); A10-A11 Compute Intensive (high end CPUs)
    • D1-D4, D11-D14 with SSD temp storage.
    • DS1-DS4, DS11-14 with premium (SSD) storage.
    • G1-G5 (and GS) with local SSD and lots of RAM.
    • F and N series too.

  • Every Azure VM has temporary storage drive (D:) – lost when VM is moved/restarted.
  • VMs may be attached to data disks that persist across VM restarts/redeployments and are locally replicated in-region (and beyond if specified).
  • Can use gallery images or create custom images (to meet custom requirements, e.g. with certain software pre-installed).
  • OS disk always has caching, default Read/Write (data disk caching is optional, default none) – changes need a reboot.
  • Can create a bootable image from an OS disk (not data disk).
  • Can change caching on data disk without reboot.
  • OS disk max 127GB, data disk max 1TB.
  • Only charged for storage used (regardless of what is provisioned).
  • Can take VHDs from on-premises (Windows Server 2008 R2 SP1 or later): sysprep, then upload with Add-AzureVhd -Destination storageaccount/container/name.vhd -LocalFilePath localfile.vhd; for Linux, install WALinuxAgent (different preparation for different distributions). See the sketch after this list.
  • Tell cloud service to load balance an endpoint to split load between VMs. With ARM there is the option to define a separate Load Balancer.
  • Encryption at rest for data disks requires third party applications (encryption is in preview though…).
  • Availability set: 2 or more VMs distributed across fault domains and upgrade domains for SLA of 99.95% (no SLA for single VMs).
  • Auto-scaling based on thresholds (min/max number of instances, CPU utilisation, queue length – between web and worker roles) or time schedule (also time to wait before adding/removing more instances – AKA the cooldown period). Needs at least 2 VMs in an availability set.
  • Basic VMs have no load balancing or auto-scaling.
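
Expanding on the Add-AzureVhd flow above, a rough sketch with the classic (ASM) cmdlets – the storage account, image, credential and service names are all hypothetical:

    # Upload a sysprepped VHD to blob storage
    Add-AzureVhd -Destination "https://mystorageacct.blob.core.windows.net/vhds/sysprepped.vhd" `
        -LocalFilePath "C:\VHDs\sysprepped.vhd"

    # Register the VHD as a custom image
    Add-AzureVMImage -ImageName "MyBaseImage" `
        -MediaLocation "https://mystorageacct.blob.core.windows.net/vhds/sysprepped.vhd" -OS Windows

    # Create a VM from the custom image (don’t hard-code credentials in real scripts)
    New-AzureQuickVM -Windows -ServiceName "mycloudsvc" -Name "vm01" -ImageName "MyBaseImage" `
        -AdminUsername "azureadmin" -Password "P@ssw0rd!" -Location "West Europe" -InstanceSize "Small"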

Azure Storage Service

  • Blob, table, or queue storage (plus file storage for legacy apps) encapsulated inside a storage account.
    • Two types: Standard/Premium – essentially HDD/SSD.
    • Up to 500TB per storage account – can create multiple accounts.
  • Data stored in multiple locations (minimum 3 copies).
    • LRS (Locally Redundant Storage) synchronously replicates 3 copies of data in separate fault and update domains. Use for: low cost; high throughput (less replication); data sovereignty concerns re: transfer out of region. If the region goes down, so do all copies.
    • ZRS (Zone Redundant Storage) also 3 copies but in at least 2 facilities (1 or 2 regions). Data durable in case of facility failure.
    • GRS (Globally Redundant Storage) – 6 copies (3 copies in primary region asynchronously replicated to 3 more copies in a secondary region). Data still safe in a secondary region but cannot be read (unless Azure flips primary and secondary in event of catastrophic failure).
    • RA-GRS (Read Access Geo Redundant Storage) – read from the secondary copy, via the storageaccountname-secondary.blob.core.windows.net domain name.
  • More copies and more bandwidth is more cost! Also:
    • GRS ingress max 10 Gbps (20 egress) but does not impact latency of transactions made to the primary location.
    • LRS ingress max 20 Gbps (30 egress).
  • File storage – mounted by servers and accessed via API. Provides shared storage for applications using SMB 2.1. Use cases:
    • On-premises apps that rely on file shares migrated to Azure VMs or cloud services without app re-write.
    • Storing shared application settings (e.g. config files) or diagnostic data like logs, metrics and crash dumps.
    • Tools and utils for developing or administering Azure VMs or cloud services.
    • Create shares inside storage accounts – up to 5TB per share, 1TB per file. Unlimited total number of files and folders.
    • https://storageaccountname.file.core.windows.net/sharename/foldername/foldername/filename
  • Blob storage: Not a file system – an object store.
    • Create containers inside storage accounts with up to 500TB data per container
    • https://storageaccountname.blob.core.windows.net/containername/blobname
    • Block blobs, with block IDs; uploaded and then committed – until committed, a block doesn’t become part of the blob: max 64MB per upload (blocks <=4MB), max 200GB per blob. Can upload in parallel; better for large blobs (generally) and for sequential streaming of data.
    • Page blobs – collection of 512byte pages. Max size set during creation and initialisation (up to 1TB). Write by offset and range – instantly committed. Overwrite single page or up to 4MB at once; Generally used for random read/write operations (e.g. disks in VMs). Page blobs can be created on premium storage for higher IOPs.
    • Access control is via 512-bit keys (secret key – used in API calls to sign requests) – two keys, so you can maintain connectivity whilst regenerating the other (i.e. during key rotation).
    • Can have full public read access for anonymous access to blobs in a container; public read access for blobs only (but not listing the blobs in the container); no public read access (default – only signed requests allowed); or a shared access signature – a signed URL for access including permissions, start time and expiry time (see the sketch at the end of this section).
    • Lease blob for atomic operations – lease for 15-60 seconds (or infinite). Acquire/renew/change/release (immediately)/break (at lease end).
    • Snapshots – used to create a read-only copy of a blob (multiple snapshots possible but cannot outlive the original blob – i.e. deleting blob deletes the snapshots); charges based on difference.
    • Copy blob to any container within the same storage account (e.g. between environments).
  • Table storage:
    • Store data for simple query – NoSQL key-value store – no locks, joins, validation.
    • http://storageaccountname.table.core.windows.net/tablename
    • Generally, use row key to retrieve data.
    • Can partition tables and generate a partition key.
    • Use shared access signatures for querying/adding/updating/deleting/upserting (insert if does not already exist, else update) table entries
  • Queue storage:
    • Store and access messages through HTTP/HTTPS calls.
    • Each queue entry up to 64KB in size.
    • Store messages up to 100TB.
    • Use for an asynchronous list for processing; messaging layer between applications (avoid handshaking – just add to or consume from the queue); or messaging between web and worker roles.
    • http://storageaccountname.queue.core.windows.net/queuename
    • Operations to put (add), get (which makes message invisible), peek (get first entry without making invisible), delete, clear (all), update (visibility timeout or contents) for messages.
  • Pricing based on storage (per GB/month); replication type (LRS/ZRS/GRS/RA-GRS); bandwidth (ingress is free; egress charged per GB); requests/transactions.
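
As mentioned in the access control notes above, here’s a minimal sketch of uploading a blob and generating a time-limited shared access signature with the classic storage cmdlets – the account name, key variable and paths are hypothetical:

    # Build a storage context from the account name and key
    $ctx = New-AzureStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey $key

    # Upload a file as a block blob
    Set-AzureStorageBlobContent -File "C:\data\report.pdf" -Container "docs" -Blob "report.pdf" -Context $ctx

    # Generate a read-only SAS URL that expires in an hour
    New-AzureStorageBlobSASToken -Container "docs" -Blob "report.pdf" `
        -Permission r -ExpiryTime (Get-Date).AddHours(1) -FullUri -Context $ctx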

Web Apps

  • Web Apps are available in 5 tiers: free/shared/basic/standard/premium.
  • These tiers affect:
    • Maximum number of web/mobile/API apps (10/100/unlimited/unlimited/unlimited).
    • Logic apps (10/10/10/20 per core/20 per core).
    • Integration options (dev/test up to Basic; Standard connectors for Standard; Premium connectors and BizTalk Services for Premium).
    • Disk space (1GB/1GB/10GB/50GB/500GB).
    • Maximum instances (-/-/3/10/50).
    • App Service Environments (Premium only).
    • SLA (Free/Shared none; Basic 99.9%; Standard and Premium 99.95%).
  • Resource Group and Web Hosting Plan are used to group websites and other resources in a single view; can also add databases and other resources; deleting a resource group will delete all of the resources in it.
  • Instance types:
    • Free F1.
    • Shared D1.
    • Basic B1-B3: 1 core, 1.75GB RAM, 10GB storage, then doubling cores and RAM with each step (2 cores/3.5GB; 4 cores/7GB) – VMs running web apps.
    • Standard S1-S3 same cores and RAM but more storage (50GB).
    • Premium P1-P4 same again but 500GB storage (P4 is 8 cores, 14GB RAM).
  • Other things to configure:
    • .NET Framework version.
    • PHP version (or off).
    • Java version (or off) – use web container version to choose between Tomcat and Jetty; enabling Java disables .NET, PHP and Python.
    • Python version (or off).
  • Scale web apps by moving up plans: Free-Shared-Basic-Standard – changes apply in seconds and affect all websites in web hosting plan. No real scaling for Free or Shared plans. Basic can change instance size and count. Standard can autoscale based on schedule or CPU – min/max instances (checked every 5 mins).
  • Scale database separately.
  • Deployment pipeline can be automated and can flip environments when moving from staging to production (flips the virtual IP). Can flip back if there are issues (see the sketch after this list).
  • SSL certificates – can add own custom certs (2 options – server name indication with multiple SSL certs on a single VM; or IP SSL for older browsers but only one SSL cert for IP address).
  • Site extensions – no RDP access to the VM, so tools for website: Visual Studio Online for viewing code or phpMyAdmin.
  • Webjobs allow running programs or scripts on website (like cron in Linux or scheduled task in Windows) – one time, schedules or recurring.
  • Can use .cmd, .bat or .exe; .ps1, .sh, .php, .py, .js.
  • Monitoring web app via metrics in the portal.
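
For the staging/production flip noted above, something like this should do the job with the classic cmdlets (the site name is hypothetical and the parameter names are from memory, so check before relying on them):

    # Swap the staging and production slots (flips the virtual IP; swap again to roll back)
    Switch-AzureWebsiteSlot -Name "mysite" -Slot1 "staging" -Slot2 "production"

    # Open the site in a browser to check the result
    Show-AzureWebsite -Name "mysite"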

Cloud Services

  • For more complex, multi-tier apps.
    • Web role with IIS
    • Worker role for back-end (synchronous, perpetual tasks – independent of user interaction; uses polling, listening or third party process patterns).
  • Upload code and Azure manages infrastructure (provisioning, load balancing, availability, monitoring, patch management, updates, hardware failures…)
  • 99.95% SLA (min 2 role machines)
  • Auto-scale based on CPU or queue.
  • Communicate via internal endpoints, Azure storage queues, Azure Service Bus (pub/sub model – service bus creates a topic, published by web role and worker role subscriber is notified).
  • Availability: fault domain (physical – power, network, etc.) – cannot control but can programmatically query to find out which domain a service is running in. In ASM, normally 0 or 1. ASM automatically distributes VMs across fault domains.
  • Upgrade domain (logical – services stopped one domain at a time) – default is 5, can be changed.
  • If you have web and worker roles, they are automatically placed in an availability set.
  • Azure Service Definition Schema (.csdef file) has definitions for the cloud service (number of web/worker roles, communications, etc.), service endpoints and config for the service – changes require a restart of services.
  • Azure Service Configuration Schema (.cscfg file) runtime components, number of VMs per web/worker role and size etc. – changes do not require service restart.
  • Deployment pipeline as for Web Apps.
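
A sketch of that pipeline with the classic cmdlets – deploy the package (.cspkg) and configuration (.cscfg) to staging, then VIP-swap into production; the service name and paths are hypothetical:

    # Deploy the packaged roles and configuration to the staging slot
    New-AzureDeployment -ServiceName "mycloudsvc" -Slot "Staging" -Label "v1.2" `
        -Package "C:\deploy\app.cspkg" -Configuration "C:\deploy\ServiceConfiguration.Cloud.cscfg"

    # Once tested, swap staging and production (flips the VIP; can be swapped back if there are issues)
    Move-AzureDeployment -ServiceName "mycloudsvc"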

Azure Active Directory

  • Identity and Access Management in the cloud – provided as a service.
  • Optionally integrate with on-premises AD.
  • Integrate with SaaS (e.g. Office 365).
  • Use cases: system to take care of authentication for application in the cloud; “same sign-on” for applications on-premises and cloud; federation to avoid concerns re: syncing passwords and avoid multiple logins to different apps (even with same sign-on) – provide single sign-on; SSO for 1000s of third-party applications. Effectively, if sync password then same sign-on, if no password sync then single sign-on.
  • Can also enable Multi-Factor Authentication (MFA) for Azure AD and therefore add MFA to third party apps.
  • Directory integration with Azure Active Directory Synchronization Tool (DirSync) or Azure AD Sync. Use Azure Active Directory Connect instead.
  • Can also use Forefront Identity Manager 2010 R2 (or Microsoft Identity Manager?) – originally was needed if sync multiple ADs.
  • Each directory gets a DNS name at .onmicrosoft.com. Also possible to use custom domains (verify domains in DNS).
  • Supports WS-Federation (SAML token format); OAuth 2.0; OpenID Connect; SAML 2.0.

Role-based access control

  • Role = collection of actions that can be performed on Azure resources.
  • Users for RBAC are from the associated Azure AD.
  • Roles can be assigned to external account users by invite.
  • Roles can be assigned to Azure AD security groups (recommended practice, rather than direct role assignment – see the sketch after this list).
  • Roles can also be assigned for Resource Groups (resources inherit access from subscription-Resource Group-Resource).
  • Built-in roles: Owner (create and manage all types of resource); Reader (read all types of resource); Contributor (manage everything except access). Lots of other roles built on this construct – e.g. Virtual Network Contributor.
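
Here’s a minimal sketch of the group-based assignment recommended above, using the AzureRM cmdlets (the group and resource group names are hypothetical):

    # Find the Azure AD security group and grant it a built-in role at resource group scope
    $group = Get-AzureRmADGroup -SearchString "Network Admins"
    New-AzureRmRoleAssignment -ObjectId $group.Id -RoleDefinitionName "Virtual Network Contributor" `
        -ResourceGroupName "rg-network"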

Azure SQL Database

  • Relational database as a service (PaaS) – up to 500 GB per database.
  • Easy provisioning, automatic HA, load balancing, built-in management portal, scalability, use existing skills to deploy database, patching, etc. taken care of so less time to manage, easy sync with offline data.
  • It is not same as SQL Server on a VM though!
    • Unsupported features may have corresponding features in Azure; some are just not available.
  • Performance model with different tiers: Basic, then Standard S0-S3, Premium P1-P2, P4, P6 (formerly P3).
    • Measured in Database Throughput Units (DTUs) – standardised model to help sizing (relative model [like ACU for VMs]).
    • Only committing to transactions per hour in Basic, per minute in Standard, per second in Premium.
  • Scaling Azure SQL: Federation is deprecated; Custom Sharding (create multiple databases and use application logic to separate, e.g. based on customer ID); Elastic Scale (application doesn’t need to be so smart – the endpoint is the same but data is spread across multiple databases).
  • Backups:
    • SQL database creates automatic backup for active database; at least 3 replicas at any one time – one primary replica and two or more secondaries (more if using GRS).
    • Can restore to point-in-time (self-service capability to restore from the automated system – creates a new database on the same server – zero-cost/zero-admin – the number of days depends on service tier: 7, 14 or 35 days for Basic/Standard/Premium) or geo-restore (restore from geo-redundant backup to any server in any region – automatically enabled for all tiers at no extra cost – helps when there is a region outage – estimated recovery time <12h, RPO <1h).
  • Also standard geo-replication (protect app from regional outage – one secondary database in Microsoft-defined paired region; secondary is visible but can’t connect to it until failover occurs – discount for secondary DB as offline until failover – standard/premium only with ERT <30s RPO <5s) and active geo-replication (database redundancy within different regions – up to 4 readable secondary servers – asynchronous replication of committed transactions from one DB to another; for write-intensive applications – e.g. load balancing for read-only workloads – premium only with ERT <30s RPO <5s).
    • Regional disaster – Geo Restore, Standard or Active Geo-Replication.
    • Online application upgrade – Active Geo replication.
    • Online application relocation – Active Geo replication.
    • Read load balancing – Active Geo replication.
  • Security: only available via TCP 1433 – blocked by default – define firewall rules at server and database level to open up (i.e. to your own IP address). Can define firewall rules programmatically with T-SQL, the REST API and Azure PowerShell (see the sketch after this list).
  • Data encrypted on wire – SSL required all the time
  • Data encrypted at rest – encryption with transparent data encryption – real-time I/O encryption/decryption for data and log files.
  • Only supports SQL Server authentication or Azure AD authentication – i.e. no Windows authentication.
  • First user created (master database principal) cannot be altered or dropped; can configure user-level permissions by logging on to the database and issuing SQL commands.
  • Pricing: DB size plus outbound data transfers (per database, per month) – per hour pricing, so drop DTUs at quiet time.
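
As referenced in the security notes above, a sketch of opening the server-level firewall to a single IP address with the classic cmdlet (server name, rule name and address are hypothetical):

    # Allow one public IP address through the server-level firewall (connections are on TCP 1433)
    New-AzureSqlDatabaseServerFirewallRule -ServerName "myserver" -RuleName "office" `
        -StartIpAddress "203.0.113.10" -EndIpAddress "203.0.113.10"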

Azure Mobile Service

  • Cross-platform app development service (PaaS).
  • Mobile apps need to be cross-platform, with cloud storage, ID management, database integration and push notifications.
  • Azure Mobile Services provides mobile back-end as a service (MBaaS).
  • Easily connect to SaaS APIs – e.g. Facebook, Salesforce, etc.
  • Auto-scaling based on incoming customer load.
  • User authentication taken care of by the service.
  • Push notifications to millions in seconds.
  • Offline-ready apps with sync capability.

Azure Content Delivery Network (CDN)

  • Caching public objects from a storage account at point of presence (POP) for faster access close to users (and to scale when a lot of traffic hits).
  • Content served from local edge location. If content not there (first serve), it fetches information from the origin and caches locally.
  • Drastic reduction in traffic on original content (so faster access and more scalable!)
  • Use a CDN for lower latency, higher throughput and improved performance!
  • POP locations separate to Azure regions – not full-fledged DCs.
  • CDN origin can be Azure Storage, Apps, Cloud Services or Media Services (including live streaming) – or a custom origin on any web server.
  • CDN Edge is a cache – not a permanent store.
  • Anycast protocol is used to route user to closest endpoint.
  • Create a CDN endpoint: http://cdnname.azureedge.net/
  • Change website code to point to the CDN. Route dynamic content to origin, static to CDN.
  • Can set a custom domain too (e.g. cdn.domain.com) – avoid browser warnings about content from other domains.
  • Can also enable HTTPS – need to upload the SSL certificate.
  • Default cache is 72 hours – the cache control header can be used to change this (any value >300s). Use it to ensure you’re not serving stale content (see the sketch after this list).
  • Use CDN to cache images, scripts, CSS from Azure Cloud Service but have to provide using HTTP on port 80.
  • Pricing based on bandwidth (between edge and origin) and requests.
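
Following on from the cache control point above, a sketch of setting the Cache-Control header when uploading a blob to a storage account used as a CDN origin (the context variable is built as in the storage example earlier; the one-hour value is arbitrary but above the 300-second minimum):

    # Set Cache-Control so the CDN edge refreshes after an hour rather than the 72-hour default
    Set-AzureStorageBlobContent -File "C:\site\logo.png" -Container "assets" -Blob "logo.png" `
        -Properties @{ CacheControl = "public, max-age=3600" } -Context $ctx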

Azure Traffic Manager

  • DNS-based routing for infrastructure. Route to different regions, monitoring health of endpoints (HTTP checks) to assist with DR. Many routing policies.
  • Create a Traffic Manager endpoint and route traffic to it via DNS (see the sketch after this list).
  • Options include failover load balancing (re-route based on availability, with priority list – 100% of traffic to one endpoint – used for DR/BC rather than scaling); round robin load balancing (shared across various endpoints in rotation – but only to healthy endpoints cf. DNS RR); Weighted round robin load balancing (use weight to distribute traffic between endpoints); performance load balancing (based on latency times).
  • Different to traditional load balancer in that it is DNS-based – user request is direct to endpoint, not through load balancer. Also, note that traffic is direct to web servers – not to Edge locations as in CDN.
  • Pay per DNS request resolved (TTL will keep this down) and per health-check configured.
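
As promised above, a sketch of a failover profile with the classic cmdlets (the profile, domain names and health-check path are hypothetical):

    # Create a failover profile with an HTTP health check
    $tmProfile = New-AzureTrafficManagerProfile -Name "myapp-tm" -DomainName "myapp-tm.trafficmanager.net" `
        -LoadBalancingMethod "Failover" -Ttl 300 `
        -MonitorProtocol "Http" -MonitorPort 80 -MonitorRelativePath "/healthcheck.aspx"

    # Add an endpoint and commit the change
    Add-AzureTrafficManagerEndpoint -TrafficManagerProfile $tmProfile -DomainName "myapp-eu.cloudapp.net" `
        -Type "CloudService" -Status "Enabled" | Set-AzureTrafficManagerProfile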

Azure Monitoring

  • Diagnostic tasks may include performance measurement, troubleshooting and debugging, capacity planning, traffic analysis, billing and auditing.
  • Monitor via portal; Visual Studio (plugins to parse logs, etc.) or third party tools.
  • Azure management services to manage alerts or view operational logs. Create alerts based on metrics and thresholds (and average to smooth out spikes) and send email to service admins and co-admins or to a specific address.
  • Operational logs are service requests – operation, timestamped, by whom.
  • Visual Studio 2013 has Azure SDK for managing Azure services. Some limitations: with remote debugging cannot have more than 25 role instances in a cloud service.
  • Azure Redis cache monitoring allows diagnostic data stored in storage account – enable desired chart from Redis cache blade to display the metric blade for that chart.
  • System Center 2012 R2 can also monitor, provision, configure, automate, protect and self-service Azure and on-premises.
  • Third party tools like New Relic and AppDynamics.
  • For websites there are application diagnostic logs and site diagnostic logs (3 types: web server logging; detailed error messages; failed request tracing) – access via Visual Studio, PowerShell or the portal (see the sketch after this list). Kudu dashboard at https://sitename.scm.azurewebsites.net.
  • View streaming log files (i.e. just see the end): Get-AzureWebsiteLog -Name "sitename" -Tail -Path http
  • View only the error logs: Get-AzureWebsiteLog -Name "sitename" -Tail -Message Error
  • Options include -ListPath (to list log paths) -Message <string> -Name <string> -Path (defaults to root) -Slot <string> -Tail (to stream instead of downloading entire log)
  • Can also turn on diagnostics on storage accounts.
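
Pulling the website logging options above together, a sketch of enabling the three site diagnostic log types and then tailing the errors (the site name is hypothetical; parameter names from memory):

    # Turn on web server logging, detailed error messages and failed request tracing
    Set-AzureWebsite -Name "sitename" -HttpLoggingEnabled $true `
        -DetailedErrorLoggingEnabled $true -RequestTracingEnabled $true

    # Stream just the errors (as above)
    Get-AzureWebsiteLog -Name "sitename" -Tail -Message Error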

Azure HDInsight

  • Microsoft’s implementation of Hadoop – create clusters in minutes (Windows or Linux); pay per use (no need to leave it running); use blob storage as the storage layer and Excel to visualise the data (see the sketch after this list).
  • Hadoop uses divide and conquer approach to solving big data problems (chunking): processes the data, then combines it again – using HDFS and MapReduce components.
  • Provision cluster, take large data set (e.g. search engine queries) on master node, distributed to processing nodes (Map). Reduce collects results and collates.
  • Hybrid Hadoop – e.g. for organisations that offer analytics services – burst to cloud…
  • Either site-to-site VPN on-premises to Azure, or ExpressRoute.
  • Supports Storm and HBase clusters natively – can install other software via custom script.
  • Connectors in WebApp (Standard and Premium) – connect to other services (e.g. Azure HDInsight).
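
Provisioning from PowerShell looked something like this in the classic model (the cluster and storage account names, key variable and node count are hypothetical; parameter names from memory):

    # Create a 4-node cluster using blob storage as the storage layer
    New-AzureHDInsightCluster -Name "mycluster" -Location "West Europe" `
        -DefaultStorageAccountName "mystorageacct.blob.core.windows.net" `
        -DefaultStorageAccountKey $storageKey `
        -DefaultStorageContainerName "mycluster" `
        -ClusterSizeInNodes 4 -Credential (Get-Credential)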

High Performance Computing (HPC)

  • HPC not the same as big data:
    • Big data analytics is usually bounded by data volumes and so network IO.
    • HPC usually CPU-bounded.
  • HPC is good for financial modelling, media encoding, video and image rendering, smaller computer-aided engineering models, etc.
  • HPC instances are A8/9 (network optimised – high-bandwidth RDMA network 32Gbps within cloud service as well as 10Gbps Ethernet to other services) and A10/11 (compute intensive).
  • Both 8/16 cores, 56/112GB RAM, 382GiB disk.
  • Microsoft HPC Pack 2012 R2 SP1 on Windows Server (on-premises, in Azure or hybrid) – Message Passing Interface (MPI) used (over RDMA network).

Azure Machine Learning

  • Predictive analysis in cloud – as a service, no VMs etc. to manage.
  • Take existing data, analyse by running predictive models and predict future outcomes/trends.
  • Deploy in minutes; drag and drop machine learning algorithms (built-in); use data in Azure; add custom scripts; Marketplace of vendors providing custom solutions.
  • Terminology:
    • Classification (group data).
    • Regression (predict a value).
    • Ranking (order items by criteria).
    • Clustering (take a set of data, e.g. by date range).
  • Get raw data (unstructured or loosely structured) -> data cleaning -> build machine learning model -> predict results.

Azure Automation

  • Script and automate the application lifecycle; simplify cloud management; automate manual, long-running and frequently-repeated tasks (save time and increase reliability).
  • Works with Web Apps, Virtual Machines, Storage, SQL Server and other Azure services.
  • Automation account is a container for Azure Automation resources.
  • Create runbooks – sets of tasks that perform an automated process – as PowerShell workflows (see the sketch at the end of this section).
  • Scheduler to start runbooks daily/hourly/at a defined point in time.
  • Pricing based on minutes/triggers:
    • Free = 500 minutes
    • Basic tier
    • Standard tier
  • Automation is an enabler for DevOps:
    • Dev team loves changes.
    • Ops Team loves stability.
    • Agile used for development between business-dev.
    • DevOps fills gap between dev and ops.
    • Infrastructure as code; configuration automation; automation testing.
  • Continuous integration – pipeline to delivery and deployment – cycle of integrating solution with various phases:
    • Delivery team check-in to Version Control triggers Build and Unit Tests (with feedback). When Build and Unit Tests are clean, that triggers Automated Acceptance Tests (with feedback). When approval is gained, move to User Acceptance Tests and then, on final approval, move to release.
  • Continuous Delivery – push-button deployment of any version of software to any environment, on demand – similar to CI but can feed business logic tests.
    • Need automated testing to achieve CD.
  • Continuous Deployment – natural extension to CD; every check-in ends up in a production release.
  • Chef for Configuration Automation: Configuration Management between environments: Build, Test, Release, Deploy (and automate CI/CD). Manage Windows and Linux VMs, integration via Azure Portal. Chef and DSC can be used together to manage infrastructure.
  • Puppet – integrated with Azure and VS 2013 for easy deployment of infrastructure across physical and virtual machines. Can deploy pre-configured Puppet image to create a VM.
  • Deploy Custom Script with VM configuration – run when VM is launched (one of the available config extensions).
  • VM agent is used to install and manage extensions that help interact with the VM (Chef, Puppet, Custom Script).
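
To make the runbook concept above concrete, a sketch of a PowerShell workflow that stops classic VMs matching a naming convention – the credential asset, subscription name and naming convention are all hypothetical:

    workflow Stop-DevVms
    {
        # Authenticate using a credential asset stored in the Automation account
        $cred = Get-AutomationPSCredential -Name "AzureCredential"
        Add-AzureAccount -Credential $cred
        Select-AzureSubscription -SubscriptionName "Dev-Test"

        # Stop (and deallocate) every classic VM that follows the dev- naming convention
        Get-AzureVM | Where-Object { $_.Name -like "dev-*" } | ForEach-Object {
            Stop-AzureVM -ServiceName $_.ServiceName -Name $_.Name -Force
        }
    }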

Azure Media Services

  • Developing video on demand is challenging: cost/managing content/encoding/distribution across multiple devices/streaming experience/DRM content protection/providing high quality video for any device any time anywhere.
  • Ingest data, encode, format conversion, content protection (DRM policies), on-demand streaming, live streaming, analytics, advertising.
  • Need media service account and associated storage account.
  • Media Player is a web video player service backed by Azure Media Services: one player for all popular devices – no need to develop a device-specific player; plays the appropriate format for each device; easy integration with web and apps; standard player controls.
  • Data caching via Azure CDN.
  • Steps:
    • In management portal, create new Media Service with name, storage account and region.
    • Start the Media Service.
    • Scale up streaming units (1 unit=200Mbps).
    • Upload a video file (from local or from Azure storage) – will be stored in storage account without encryption.
    • Publish the file.
    • Configure the encoding options, then video is uploaded into portal (can encode multiple times for different formats with different names).
    • View the media content (copy link into browser).

Azure Resource Manager

  • With ASM even a VM has a cloud service.
  • ARM is pure IaaS, not necessarily cloud service.
  • Deploy, manage and monitor services as a group; deploy repeatedly throughout the application life cycle; use declarative templates to define deployment; can have dependencies between resources; apply RBAC; organise logically by tagging.
  • ASM tightly couples to cloud service – VM in subnet, in VNet, in cloud service, in region, with VIP for DNS and public IP.
  • ARM is more loosely coupled – can have multiple VIPs, NICs, etc. All in a RG (which can span regions). Attached via reference.
  • ASM vs. ARM comparison:
    • Format: ASM uses XML; ARM uses JSON.
    • VM deployment: ASM requires a cloud service as a container; ARM does not require a cloud service.
    • Availability set: in ASM, VMs are defined under the same availability set; in ARM, the availability set is a resource exposed by the Microsoft.Compute provider – VMs that need HA must be included in an availability set.
    • Fault domains: maximum 2 in ASM; maximum 3 in ARM.
    • Load balancing: in ASM, the cloud service provides an implicit load balancer for the VMs; in ARM, the load balancer is a resource exposed by the Microsoft.Network provider.
    • Virtual IP address: in ASM, a default static VIP is provided as long as one VM is running in the cloud service; in ARM, a public IP is a resource exposed by Microsoft.Network – can be static (reserved) or dynamic.
    • Reserved IP address: in ASM, reserve an IP address in Azure and associate it with a cloud service; in ARM, a public IP can be created as static and assigned to a load balancer.
  • Choose deployment mode when provisioning resources. Limited inter-operability so choose the right model.
  • Deploy using
    • Portal
    • PowerShell: Switch-AzureMode -Name AzureResourceManager
    • ARM REST API
    • Azure CLI: azure config mode arm
  • Resource Manager template – JSON document – deploys and provisions all of the related resources in a single, co-ordinated operation (see the sketch after this list).
  • Tags are key-value pairs of metadata: applied to individual ARM resources or ARM RGs – up to 15 tags per resource or RG.
  • RBAC – Owner, Reader or Contributor.
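
As referenced above, a sketch of a template deployment in AzureResourceManager mode (the resource group name, tag and template path are hypothetical; later Azure PowerShell releases dropped Switch-AzureMode in favour of separate AzureRM cmdlets):

    # Switch the module into ARM mode (as noted in the deployment options above)
    Switch-AzureMode -Name AzureResourceManager

    # Create a tagged resource group and deploy a JSON template into it
    New-AzureResourceGroup -Name "rg-demo" -Location "West Europe" -Tag @{ Name = "environment"; Value = "test" }
    New-AzureResourceGroupDeployment -ResourceGroupName "rg-demo" -TemplateFile "C:\templates\azuredeploy.json"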

Azure Messaging Solutions

  • Service Bus: multi-tenant cloud service – each user creates a namespace to work within.
    • Queues – one-way communication, asynchronous queuing with guarantee of message delivery order (worker has to keep polling).
    • Topics – let each receiving application create a subscription by defining a filter (avoid polling – get a notification instead) – pub-sub model. Read with ReceiveAndDelete or PeekLock; can have multiple subscribers.
    • Relays – synchronous 2 way communications between applications – won’t help with buffering.
  • Event hubs – highly scalable ingestion system that can process millions of events per second (e.g. for IoT).
  • Can also queue via storage – more options with service bus but more scalable with storage.

Azure Backup

  • Backup service targeted at replacing tape backup.
  • Can work with on-premises workloads or Azure workloads.
  • On-premises backup – pick a region and create a vault; download vault credential files; download and install the Azure Backup agent; can seed through the Azure Import/Export Service; select a backup policy (start time of backup, retention policies (weekly/monthly/yearly)) – backups are incremental.
  • Azure VM Backup – install agent if not already installed, register VMs with Azure Backup Service (installs backup agent in extensions); select backup policy.
  • Azure Backup is to back up data on a VM. Priced per protected instance and storage consumed (the price per protected instance goes up at 50GB, then 500GB, then each additional 500GB).

Azure Site Recovery

  • Orchestrates failover and recovery of a VM.
  • On-premises machine replicated to vault in Azure, or to another datacentre – not Azure to Azure.
  • Protect AD and DNS, SQL Server, SharePoint, Dynamics AX, RDS, Exchange, SAP.
  • Can also perform a test failover, starting resources in Azure but not routing the traffic.
  • Use to protect VMware ESX or Hyper-V VMs or physical servers – and can be used to migrate to Azure.

Business continuity (BC) and disaster recovery (DR)

  • Scenarios: recover from local failures; loss of a region; on-premises to Azure
  • For Azure failures:
    • HA in PaaS (per region): just make sure web and worker roles have 2 or more instances each – then they will automatically be spread across fault domains.
    • For region failure need to plan across regions – more elaborate (make sure code and config is available in a second region).
  • HA in IaaS needs management of VMs in availability sets (which need to be defined manually).
  • At region level, also think about load balancing (VIP), storage (LRS, ZRS, GRS or RA-GRS) and Azure SQL replication.
  • Recover from loss of region:
    • Redeploy on disaster (cold DR) – replicate data so it’s ready to run, but deploy only when needed (not for demanding RTO/RPO targets).
    • Warm spare (active/passive) – infrastructure in DR region but not fully available (e.g. SQL replication with secondary copy not accessed, not routing traffic to passive).
    • Hot spare (active/active) – two regions at the same time (e.g. SQL on IaaS and replicating itself).
  • Cross regional strategies for DR:
    • VNet – export settings, import in secondary region.
    • Cloud Services – create a separate cloud service in the target region; publish to the secondary region if the primary fails; use Traffic Manager to route traffic.
    • VM – use blob copy API to duplicate VM disks; geo-replicated VM images.
    • Storage – use GRS or RA-GRS (replicated in minutes, so tight RPOs cannot rely on this – need to write own algorithm).
    • Azure SQL:
      • Geo-restore (1 hour RPO/<12 hours RTO).
      • Standard geo-replication (5 secs RPO/30 mins RTO) – no access to secondary.
      • Active geo-replication (5 secs RPO/30 mins RTO) – read access to secondary.
      • Manually export to Azure Storage (blob) with Azure SQL database import/export service.

Securing Azure Resources

  • Cloud security model is shared security model:
    • Users are responsible for securing applications.
    • Cloud Service Provider (CSP) is responsible for providing controls; users for using them!
    • CSP is responsible for infrastructure security.
  • VNet/VM security: use endpoints (ACL for endpoints, NSGs at VM or VNet level).
  • Storage: use shared access signatures.
  • Role-based access control.
  • Encryption.

Preparation notes for Microsoft exam 70-341: Core Solutions of Microsoft Exchange Server 2013


I’ve got an exam tomorrow. I figured that, as I’m employed as a manager and not an administrator, my best chance at passing the Core Solutions of Microsoft Exchange Server 2013 (70-341) exam was to take the test soon after attending the training course. So, tomorrow morning at 10am, I’ll be in a Prometric test centre whilst the sun is shining outside…

I learn by writing, so what follows are my notes – a brain dump if you like but based purely on study – I’ve not seen the exam (or any practice tests) yet…

Fingers crossed, I’ll be fine.  If I do fail, it will be the first time I’ve had to re-take a Microsoft exam – here goes with the brain dump.

[Updated 2013/11/22 – after failing 70-341 twice and 70-342 once – so far – I’ve removed the content of this post as it’s clearly no help to anyone!]

A few TOGAF 9 post-exam notes…


Earlier today, I blogged about my preparation for the TOGAF 9 combined part 1 and part 2 exam, which should lead to me becoming TOGAF certified.

Now that I’ve taken the test, I just wanted to share some more experiences that might help people looking to do the same. I won’t say anything about the test content as there are strict disclaimers about that sort of thing – my post earlier today outlined my study/revision approach though (and it obviously worked as I passed the test) but here are a few extra pointers that might be useful:

  • The 4-hour time slot includes registration, pre-/post-exam questionnaires, etc. (on this occasion, those weren’t offered to me) and the actual exam is 150 minutes long (as the courseware tells us – I believe there are slightly longer sessions for those who don’t speak English natively).
  • I found that I completed part 1 (40 questions, with a required pass mark of 55%) in about half the 60 minutes that are allocated but that didn’t give me extra time to use for part 2 (it’s still 90 minutes).
  • All of the responses are multiple choice, and you can mark questions to go back and review them at the end, before moving on.
  • Confusingly, at the end of part 1, the only option is to “end” the exam – don’t worry, it does continue to part 2, even though it’s not clear that it will do so.
  • Part 2 is only 8 questions (for which the required pass mark is 60%), of the scenario-type with graded scoring (5 points for best answer, 3 for next, 1 for the least-best answer, and 0 for the distractor). I needed all of that time with some questions requiring reference to the TOGAF manual (provided electronically, more on that in a moment). If you allow 5 minutes per question to fully read the scenario and understand what is being asked of you, that doesn’t leave a lot of time to search the TOGAF reference, so it’s better not to rely on it too much and to save that for when you really need it!
  • I didn’t expect to get a score for part 2 immediately (at least not based on the advice from The Open Group) so wasn’t sure if I would get my part 1 score today either. Needless to say I was pleasantly surprised to find that scores were given for both parts 1 and 2 at the end of the test (a combined score on screen, and individual part 1 and part 2 scores on the test result certificate).

Prometric test centres have been dire since I first started taking Microsoft exams in the late 1990s (later I took some VMware ones too) but it seems nothing has changed. The test booking site feels like it was specified by the same user experience designers as the London Olympics ticketing site, with no ability to search for centres based on post code (I had to scroll through 5 pages of test centres, looking at each one to see if it was near me and had availability to book a test on the date I required). The centre I visited today had newer PCs than I’ve experienced in the past – even a widescreen monitor – but the software still looks like something from Windows 3.1 and the resolution was still 1024×768 (stretched, and spilling over the edge of the visible display!). That caused some challenges with the scenario-based questions (scenario on the left, answers on the right) – thankfully the keyboard allowed me to scroll as the on-screen controls were not visible…

Add to that the fact that I couldn’t even take a bottle of water in with me (some earplugs would have been nice too), and that the reference lookup of the TOGAF manual in the open book part of the test ran in an awful PDF browser with terrible search facilities (and which crashed on me, requiring the test centre to restart the PC running my exam – thankfully back to the same state it was in before the crash!) – all in all, it’s not a very good user experience.

Hopefully all of this helps those who are less familiar with Prometric tests to prepare for their exam.  Good luck!

[Update: I just found some advice for those who are less successful – according to The Open Group, if you fail and you attended an accredited training provider, then you should contact the training provider for a retake voucher.]

Getting my head around Enterprise Architecture (specifically TOGAF)


After many years designing and implementing technology infrastructure, I’ve been trying to move “up the stack” out of the (multiple) domain architect space, towards solutions architecture and onwards to develop as an enterprise architect. That involves a mindset change to progress from the role of a designer to that of an architect but I’m on my way… and I currently manage roadmaps, portfolios (standards) and reference architectures (amongst other duties), so it might be useful to know a bit about Enterprise Architecture…

I thought it might help to get certified in The Open Group Architecture Framework (TOGAF), so I spent a week on a TOGAF 9 training course last year, following which I received a voucher to sit the combined part 1 and 2 exam. At the time of writing I don’t know how successful I’ve been – in fact, this post is timed to go live at the moment when I’ll be sitting at a Prometric testing station, no doubt getting frustrated with a single monitor and limited screen resolution as I try to search a PDF of the TOGAF manual at the same time as answering questions… but, even so, I thought I’d share my revision experience for the benefit of others.

For reasons that I won’t go into here, there was a gap between my course and my exam voucher being released, so I wasn’t able to take the test whilst the content was still fresh in my mind. Several months later, I set aside a week: four days revising the content and reading around the topic, before taking the exam at the end of the week. I found it hard to revise – my main strategy was going over the course content again, along with a variety of other resources – all of which were highly textual (even the diagrams are unattractive) and, above all, excruciatingly dull.

I decided I needed some visual content – not just diagrams but some animated content describing key TOGAF concepts would have been fantastic. I didn’t find anything like that, but I did find a series of videos recorded by Craig Martin, from Knotion Consulting in South Africa (thanks to Sunil Babu for his blog post that provided the tip).

The first and last two minutes are, understandably, an advert for the training that Knotion provides, but then Craig gets into a really easy-to-understand overview of TOGAF and broader enterprise architecture concepts, even diving into service-oriented architecture (SOA) at one point.  These videos are freely available on YouTube but, based on watching them, I would suggest that Craig could package up some training content for remote delivery and it would be a worthwhile investment for people in the same situation as me. In fairness, I did start to get lost towards the end, and the overview doesn’t seem to strictly follow the TOGAF materials (that may be seen as a good thing!) but the first hour was really useful – there is definitely a market for high-quality subscription-based training in this space. Remote delivery ought to drive down the costs and it would certainly be better than the Architecting The Enterprise course that I attended (of course, that’s a personal view and your mileage may vary – I’m sure many people enjoy hours and hours of very dull PowerPoint content mixed with some group exercises and squeezed into 4 days when 5 would be more appropriate…).

Of course, Craig’s 90-minute introduction isn’t everything I need to pass the exam, but it has helped to cement a lot of concepts in my mind. After watching the videos, I stopped working through the course materials in detail and concentrated on a more general understanding of the Architecture Development Method (ADM) and the related TOGAF concepts. The TOGAF Version 9 Pocket Guide (which was provided on my training course) helped here, as did the Practice Test Papers (also from the course but available online for a fee).  Other potentially useful resources include:

I’m still not sure I have enough knowledge to pass the exam (we’ll see – my scores in the practice tests were OK but not outstanding) but I do feel better prepared and, if anyone finds some useful, modern, engaging aids to learning about enterprise architecture in general and TOGAF specifically, then please do leave a comment!

Microsoft E-Learning courses: the good, the bad and the ugly


The couple of weeks leading up to Christmas involved a lot of intense revision for me, as I prepared for the Microsoft exams to finish updating my MCSE on Windows Server 2003 to MCITP Enterprise Administrator.

When I set out to do this, I had originally intended to combine the tasks of reviewing John Savill’s Complete Guide to Windows Server 2008 with getting ready for my exams but it soon became apparent that I simply didn’t have enough time to work my way through the entire volume (excellent though it is!). Instead, I used the Microsoft-published exam preparation guides to identify the recommended Microsoft E-Learning courses.

If I’d written a review of the courses after the first couple of days it would have been a glowing recommendation – and in some respects perhaps I should be holding off on this review as I am somewhat battle-weary; however, having just taken two certification exams based on this study method it seems as good a time as any to assess the suitability of these courses.

The good

Starting out with the selection, there is a huge catalogue of courses available which mirror the Microsoft Official Curriculum instructor-led courses. The prices are not bad either, when compared with classroom training; however, in many ways, I prefer the interaction that a classroom environment provides.

The format of the courses is good – built up as a number of virtual classroom modules, with a mixture of demonstrations and animations (with transcripts), textual content, and puzzles/tests in each lesson. Each lesson ends with a self-test and there is a summary and a glossary at the end of each module. There’s also a full-text search capability.

It’s possible to synchronise the content with a local cache for offline viewing. Indeed, I only used the courses online for one day, when I was in the office and the proxy server wouldn’t let me download some new courses for offline working (the offline player includes the ability to edit proxy settings in its options, but that isn’t exposed in Windows until after successfully downloading and launching a course – and online viewing required me to add microsoftelearning.com to Internet Explorer’s trusted sites list). It’s important to note, though, that the virtual labs must always be completed online – this functionality is not available in the offline viewer.

The bad

Somewhat annoyingly, the course overview (which is the same for each course) and the glossary are included in the progress count, so after completing all of the available lesson content, most of the courses I attended were marked as only partially completed (it is possible to mark a course as complete in the My Learning section of the Microsoft Learning website but this will not complete the course in your transcript).

I could almost forgive elements like this, but the next annoyance really affected my ability to learn. You see, I’m English, and I will admit that sometimes I find it difficult to listen to an American accent for a long period of time (that’s nothing personal – I’m sure the same happens in reverse). But the demonstrations and animations in these courses are recorded in an American monotone – and it doesn’t even seem to be human. After listening to a few of these, with misplaced paragraph breaks and identical pronunciation for recurring words regardless of sentence structure and intonation (or lack thereof), they become very difficult to concentrate on. Towards the end of my revision I stopped working through entire courses and instead concentrated on the introductions, the summaries, and making sure I could complete the puzzles and self-tests at the end of each lesson – avoiding the computer-generated monotone entirely. Simply recording all of the demonstrations with a human voice (as most of the module introductions are) would be a vast improvement.

The ugly?

Then there are the animations – which are at best ugly and at worst confusing. Watching icons appear and disappear in a manner which at times appeared to be random whilst the computer was talking to me did not help at all. In the end, I nearly always resorted to reading the transcript.

Whilst the animations may be a design crime (as are many of the diagrams in Microsoft Official Curriculum courseware), even worse was the inaccuracy of some of the information presented – which shows that it was produced by an outside agency (Element K) and sometimes suffers from a lack of technical quality assurance.

Let me give some examples:

  • Course 6519: one of the self-tests at the end of a lesson claims that NT 4.0 supports Kerberos (for that I would need Windows 2000 or later); and in the context of Active Directory database mounting, the module claims that one should “use a line printer daemon (LPD) utility, such as Active Directory Users and Computers, to view the data” (clearly LPD should have been LDAP…).
  • Course 6521: one of the reviews claims that only Active Directory Lightweight Directory Services uses an Extensible Storage Engine (ESE) for its database store – contradicting the text elsewhere in the module (as well as being incorrect); and a self-test asked me to “identify the feature that AD LDS supports but AD LDS does not support” (!).
  • Course 6524: .PIX files referred to in the text, whilst the demonstrations clearly showed that the extension is .PFX.
  • Course 6536: claims that “Hyper-V is supported only by the Windows Server 2008 Standard 64-bit edition” (64-bit yes, standard edition only – certainly not).
  • Course 6169: claims that “The various wireless networking standards are 802.11, 802.11b, 802.11a, 802.11g, 802.1X, and 802.11n” (802.1X is used for implementing network security but is not specifically a wireless networking standard).

There are typos too (sever instead of server, yes and no the wrong way around in test answers, etc.) as well as references to product names that have not existed since beta versions of Windows Server 2008 (e.g. Windows Server Virtualization). Other beta information has not been refreshed either – course 6529 refers to a 30-day grace period before Windows enters reduced functionality mode when it is actually 60 days (and RFM is much less brutal today than it was in the original versions of Windows Vista and early Windows Server 2008 betas). In another place, virtual machine (VM) snapshots are mixed up with Volume Shadow Copy Service (VSS) snapshots, as the course suggests that VM snapshots are a backup and recovery solution (they most certainly are not!).

I could go on, but you get the message – almost every module has at least one glaring error. Mistakes like this mean that I cannot be 100% certain that what I have learned is correct – for that matter, how do I know that the Microsoft examinations themselves are not similarly flawed?

Summary

In the end, I don’t think it was just these courses that helped me pass the exams. Boot camps (and that’s what intense online training is the equivalent of) are all very well for cramming information, but they are no substitute for knowledge and experience. The outcome of running through these courses was a combination of:

  • Refreshing long-forgotten skills and knowledge on some of the lesser-used functionality in Windows Server 2008.
  • Updating skills for new features and functionality in Windows Server 2008.

Without several years of experience using the products, I doubt that I would have known all of the answers to the exam questions – indeed, I didn’t know them all (but the knowledge gained from the online training helped me to evaluate and assess the most likely of the presented options).

So, is this training worth it? Probably! Is it a complete answer to exam study and preparation? Possibly – but not through cramming 100 hours of training into a couple of weeks and expecting to retain all the knowledge. What these Microsoft E-Learning courses represent is a low-cost substitute for formal, instructor-led classes. There are some downsides (for example, the lack of interaction and the poor quality control – instructor-led courses benefit from the feedback that instructors provide to allow improvements at each revision) but they are also self-paced, and the ability to go at my own speed means that, given sufficient time, I could work through a few of these each week and allow time for the knowledge to settle, backed up with some real-world experience. On that basis, they’re certainly worthy of a look, but don’t expect them to provide all of the answers.

If you want to try one of the Microsoft E-Learning courses there are plenty available discounted (or even free). Afterwards, I’d be interested to hear what you think.

Passed Microsoft Certified IT Professional exam 70-647


That’s it. Done it! I’ve just passed the last exam I needed to take (70-647) in order to update my MCSE on Windows Server 2003 to MCITP: Enterprise Administrator for Windows Server 2008, before the vouchers I had for free exams expired and just in time for Christmas!

For anyone else thinking of upgrading their Microsoft certifications for Windows Server 2008, check out the post I wrote last year on Microsoft Learning and plans for Windows Server 2008 certification.

There’s also a PDF available which shows the various transition paths from earlier certifications.