Microsoft Ignite | The Tour: London Recap


One of the most valuable personal development activities in my early career was a trip to the Microsoft TechEd conference in Amsterdam. I learned a lot – not just technically but about making the most of events to gather information, make new industry contacts, and generally top up my knowledge. Indeed, even as a relatively junior consultant, I found that dipping into multiple topics for an hour or so gave me a really good grounding to discover more (or just enough to know something about the topic) – far more so than an instructor-led training course.

Over the years, I attended further “TechEd”s in Amsterdam, Barcelona and Berlin. I fought off the “oh Mark’s on another jolly” comments by sharing information – incidentally, conference attendance is no “jolly” – there may be drinks and even parties but those are after long days of serious mental cramming, often on top of broken sleep in a cheap hotel miles from the conference centre.

Microsoft TechEd is no more. Over the years, as the budgets were cut, the standard of the conference dropped and in the UK we had a local event called Future Decoded. I attended several of these – and it was at Future Decoded that I discovered risual – where I’ve been working for almost four years now.

Now, Future Decoded has also fallen by the wayside and Microsoft has focused on taking its principal technical conference – Microsoft Ignite – on tour, delivering global content locally.

So, a few weeks ago, I found myself at the ExCeL conference centre in London’s Docklands, looking forward to a couple of days at “Microsoft Ignite | The Tour: London”.

Conference format

Just as at TechEd and Future Decoded (in the days before I had to use my time between keynotes on stand duty!), the event was broken up into tracks with sessions lasting around an hour. Because that was a full hour of content (Microsoft event talks are often scheduled as an hour plus 15 minutes of Q&A), it was pretty intense, and opportunities to ask questions were generally limited to trying to grab the speaker after their talk, or to the “Ask the Experts” stands in the main hall.

One difference from Microsoft conferences I’ve previously attended was the lack of “level 400” sessions: every session I saw was level 100-300 (mostly 200/300). That’s fine – that’s the level of content I would expect – but there may be some who are looking for more detail. If it’s detail you’re after, then Ignite doesn’t seem to be the place.

Also, I noticed that Day 2 had fewer delegates and lacked some of the “hype” from Day 1: whereas the Day 1 welcome talk was over-subscribed, the Day 2 equivalent was almost empty and light on content (not even giving airtime to the conference sponsors). Nevertheless, it was easy to get around the venue (apart from a couple of pinch points).

Personal highlights

I managed to cover 11 topics over two days (plus a fair amount of networking). The track format of the event was intended to let a delegate follow a complete learning path but, as someone who’s a generalist (that’s what Architects have to be), I spread myself around to cover:

  • Dealing with a massive onset of data ingestion (Jeramiah Dooley/@jdooley_clt).
  • Enterprise network connectivity in a cloud-first world (Paul Collinge/@pcollingemsft).
  • Building a world without passwords.
  • Discovering Azure Tooling and Utilities (Simona Cotin/@simona_cotin).
  • Selecting the right data storage strategy for your cloud application (Jeramiah Dooley/@jdooley_clt).
  • Governance in Azure (Sam Cogan/@samcogan).
  • Planning and implementing hybrid network connectivity (Thomas Maurer/@ThomasMaurer).
  • Transform device management with Windows Autopilot, Intune and OneDrive (Michael Niehaus/@mniehaus and Mizanur Rahman).
  • Maintaining your hybrid environment (Niel Peterson/@nepeters).
  • Windows Server 2019 Deep Dive (Jeff Woolsey/@wsv_guy).
  • Consolidating infrastructure with the Azure Kubernetes Service (Erik St Martin/@erikstmartin).

In the past, I’d have written a blog post for each topic. I was going to say that I simply don’t have the time to do that these days but by the time I’d finished writing this post, I thought maybe I could have split it up a bit more! Regardless, here are some snippets of information from my time at Microsoft Ignite | The Tour: London. There’s more information in the slide decks – which are available for download, along with the content for the many sessions I didn’t attend.

Data ingestion

Ingesting data can be broken into:

  • Real-time ingestion.
  • Real-time analysis (see trends as they happen – and make changes to create a competitive differentiator).
  • Producing actions as patterns emerge.
  • Automating reactions in external services.
  • Making data consumable (in whatever form people need to use it).

Azure has many services to assist with this – take a look at IoT Hub, Azure Event Hubs, Azure Databricks and more.
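
As an illustration of the ingestion step, here’s a minimal Azure PowerShell sketch that stands up an Event Hubs namespace and an event hub ready to receive a telemetry stream. The resource names, location and partition count are hypothetical, and exact parameters can vary between Az.EventHub module versions:

  # Assumes the Az modules are installed and Connect-AzAccount has already been run
  New-AzResourceGroup -Name 'rg-ingest' -Location 'uksouth'

  # Event Hubs namespace plus a hub with four partitions (hypothetical names)
  New-AzEventHubNamespace -ResourceGroupName 'rg-ingest' -Name 'ehns-ingest-demo' -Location 'uksouth' -SkuName 'Standard'
  New-AzEventHub -ResourceGroupName 'rg-ingest' -NamespaceName 'ehns-ingest-demo' -Name 'telemetry' -PartitionCount 4

Producers then send events to the hub, while services such as Azure Stream Analytics or Azure Databricks read from it for the real-time analysis and pattern-detection steps above.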

Enterprise network connectivity for the cloud

Cloud traffic is increasing whilst traffic that remains internal to the corporate network is in decline. Traditional management approaches are no longer fit for purpose.

Office applications use multiple persistent connections – this causes challenges for proxy servers which generally degrade the Office 365 user experience. Remediation is possible, with:

  • Differentiated traffic – follow Microsoft advice to manage known endpoints, including the Office 365 IP address and URL web service (see the sketch after this list).
  • Let Microsoft route traffic (data is in a region, not a place). Use DNS resolution to egress connections close to the user (a list of all Microsoft peering locations is available). Optimise the route length and avoid hairpins.
  • Assess network security using application-level security, reducing IP ranges and ports and evaluating the service to see if some activities can be performed in Office 365, rather than at the network edge (e.g. DLP, AV scanning).
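
On the differentiated traffic point, the Office 365 IP address and URL web service can be queried directly over REST. Here’s a minimal PowerShell sketch – the clientrequestid is just a random GUID required by the service, and the ‘Optimize’ filter reflects Microsoft’s published endpoint categories:

  # Query the Office 365 IP address and URL web service
  $clientRequestId = (New-Guid).Guid
  $endpoints = Invoke-RestMethod -Uri "https://endpoints.office.com/endpoints/worldwide?clientrequestid=$clientRequestId"

  # The 'Optimize' category endpoints are the ones to route directly, bypassing proxies
  $endpoints | Where-Object { $_.category -eq 'Optimize' } | Select-Object serviceArea, urls, ips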

For Azure:

  • Azure ExpressRoute is a connection to the edge of the Microsoft global backbone (not to a datacentre). It offers 2 lines for resilience and two peering types at the gateway – private and public (Microsoft) peering.
  • Azure Virtual WAN can be used to build a hub for a region and to connect sites.
  • Replace branch office routers with software-defined (SDWAN) devices and break out where appropriate.
[Image: Microsoft global network]

Passwordless authentication

Basically, there are three options:

  • Windows Hello.
  • Microsoft Authenticator.
  • FIDO2 Keys.

Azure tooling and utilities

Useful resources include:

Selecting data storage for a cloud application

What to use? It depends! Classify data by:

  • Type of data:
    • Structured (fits into a table)
    • Semi-structured (may fit in a table but may also use outside metadata, external tables, etc.)
    • Unstructured (documents, images, videos, etc.)
  • Properties of the data:
    • Volume (how much)
    • Velocity (change rate)
    • Variety (sources, types, etc.)
Item               Type             Volume  Velocity  Variety
Product catalogue  Semi-structured  High    Low       Low
Product photos     Unstructured     High    Low       Low
Sales data         Semi-structured  Medium  High      High

How to match data to storage:

  • Storage-driven: build apps on what you have.
  • Cloud-driven: deploy to the storage that makes sense.
  • Function-driven: build what you need; storage comes with it.

Governance in Azure

It’s important to understand what’s running in an Azure subscription – consider cost, security and compliance (a minimal Azure Policy sketch follows the list below):

  • Review (and set a baseline):
    • Tools include: Resource Graph; Cost Management; Security Center; Secure Score.
  • Organise (housekeeping to create a subscription hierarchy, classify subscriptions and resources, and apply access rights consistently):
    • Tools include: Management Groups; Tags; RBAC.
  • Audit:
    • Make changes to implement governance without impacting people/work. Develop policies, apply budgets and audit the impact of the policies.
    • Tools include: Cost Management; Azure Policy.
  • Enforce:
    • Change policies to enforcement, add resolution actions and enforce budgets.
    • Consider what will happen in the case of non-compliance.
    • Tools include: Azure Policy; Cost Management; Azure Blueprints.
  • (Loop back to review)
    • Have we achieved what we wanted to?
    • Understand what is being spent and why.
    • Know that only approved resources are deployed.
    • Be sure of adhering to security practices.
    • Opportunities for further improvement.
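
Azure Policy appears at several of the stages above. As a hedged illustration of the audit-to-enforce journey, this minimal Az PowerShell sketch assigns a built-in tagging policy at subscription scope – the policy display name, tag name and assignment name are assumptions for illustration, and property paths can differ between Az.Resources versions:

  # Find a built-in policy definition by display name (assumed to exist in this form)
  $definition = Get-AzPolicyDefinition | Where-Object { $_.Properties.DisplayName -eq 'Require a tag on resources' }

  # Assign it at subscription scope, passing the tag name as a parameter
  $scope = "/subscriptions/$((Get-AzContext).Subscription.Id)"
  New-AzPolicyAssignment -Name 'require-costcentre-tag' -Scope $scope -PolicyDefinition $definition -PolicyParameterObject @{ tagName = 'CostCentre' }

Compliance results then surface in Azure Policy and feed the “loop back to review” stage.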

Planning and implementing hybrid network connectivity

Moving to the cloud allows for fast deployment but planning is just as important as it ever was. Meanwhile, startups can be cloud-only but most established organisations have some legacy and need to keep some workloads on-premises, with secure and reliable hybrid communication.

Considerations include:

  • Extension of the internal protected network:
    • Should workloads in Azure only be accessible from the Internal network?
    • Are Azure-hosted workloads restricted from accessing the Internet?
    • Should Azure have a single entry and egress point?
    • Can the connection traverse the public Internet (compliance/regulation)?
  • IP addressing:
    • Existing addresses on-premises; public IP addresses.
    • Namespaces and name resolution.
  • Multiple regions:
    • Where are the users (multiple on-premises sites); where are the workloads (multiple Azure regions); how will connectivity work (should each site have its own connectivity)?
  • Azure virtual networks:
    • Form an isolated boundary with secure communications.
    • Azure-assigned IP addresses (no need for a DHCP server).
    • Segmented with subnets.
    • Network Security Groups (NSGs) create boundaries around subnets.
  • Connectivity:
    • Site to site (S2S) VPNs at up to 1Gbps (see the sketch after this list)
      • Encrypted traffic over the public Internet to the GatewaySubnet in Azure, which hosts VPN Gateway VMs.
      • 99.9% SLA on the Gateway in Azure (not the connection).
      • Don’t deploy production workloads on the GatewaySubnet; /26, /27 or /28 subnets recommended; don’t apply NSGs to the GatewaySubnet – i.e. let Azure manage it.
    • Dedicated connections (Azure ExpressRoute): private connection at up to 10Gbps to Azure with:
      • Private peering (to access Azure).
      • Microsoft peering (for Office 365, Dynamics 365 and Azure public IPs).
      • 99.9% SLA on the entire connection.
    • Other connectivity services:
      • Azure ExpressRoute Direct: a 100Gbps direct connection to Azure.
      • Azure ExpressRoute Global Reach: using the Microsoft network to connect multiple local on-premises locations.
      • Azure Virtual WAN: branch to branch and branch to Azure connectivity with software-defined networks.
  • Hybrid networking technologies:
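
To make the S2S VPN bullet more concrete, here’s a minimal Az PowerShell sketch that adds a GatewaySubnet to an existing hub virtual network and deploys a route-based VPN gateway. All resource names, the address prefix and the SKU are hypothetical; a real design would follow the sizing and SLA guidance above:

  # Add the GatewaySubnet to an existing virtual network (hypothetical names)
  $vnet = Get-AzVirtualNetwork -ResourceGroupName 'rg-hybrid' -Name 'vnet-hub'
  Add-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix '10.0.255.0/27' -VirtualNetwork $vnet
  $vnet | Set-AzVirtualNetwork

  # Public IP and the gateway itself (gateway deployment typically takes 30+ minutes)
  $pip = New-AzPublicIpAddress -ResourceGroupName 'rg-hybrid' -Name 'pip-vpngw' -Location 'uksouth' -AllocationMethod Dynamic
  $vnet = Get-AzVirtualNetwork -ResourceGroupName 'rg-hybrid' -Name 'vnet-hub'
  $subnet = Get-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -VirtualNetwork $vnet
  $ipConfig = New-AzVirtualNetworkGatewayIpConfig -Name 'gwipconfig' -SubnetId $subnet.Id -PublicIpAddressId $pip.Id
  New-AzVirtualNetworkGateway -ResourceGroupName 'rg-hybrid' -Name 'vpngw-hub' -Location 'uksouth' -IpConfigurations $ipConfig -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1

A local network gateway and a connection object would then be created to pair the gateway with the on-premises VPN device.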

Modern Device Management (Autopilot, Intune and OneDrive)

The old way of managing PC builds:

  1. Build an image with customisations and drivers
  2. Deploy to a new computer, overwriting what was on it
  3. Expensive and time-consuming – and the device already has a perfectly good OS

Instead, how about:

  1. Unbox PC
  2. Transform with minimal user interaction
  3. Device is ready for productive use

The transformation is:

  • Take OEM-optimised Windows 10:
    • Windows 10 Pro and drivers.
    • Clean OS.
  • Plus software, settings, updates, features, user data (with OneDrive for Business).
  • Ready for productive use.

The goal is to reduce the overall cost of deploying devices. Ship to a user with half a page of instructions…

Windows Autopilot overview

Autopilot deployment is cloud driven and will eventually be centralised through Intune:

  1. Register device:
    • From OEM or Channel (manufacturer, model and serial number).
    • Automatically (existing Intune-managed devices).
    • Manually using a PowerShell script to generate a CSV file with serial number and hardware hash, which is then uploaded to the Intune portal (see the sketch after this list).
  2. Assign Autopilot profile:
    • Use Azure AD Groups to assign/target.
    • The profile includes settings such as deployment mode, BitLocker encryption, device naming, out of box experience (OOBE).
    • An Azure AD device object is created for each imported Autopilot device.
  3. Deploy:
    • Needs Azure AD Premium P1/P2
    • Scenarios include:
      • User-driven with Azure AD:
        • Boot to OOBE, choose language, locale, keyboard and provide credentials.
        • The device is joined to Azure AD, enrolled to Intune and policies are applied.
        • User signs on and user-assigned items from Intune policy are applied.
        • Once the desktop loads, everything is present (including file links in OneDrive) – the time taken depends on the software being pushed.
      • Self-deploying (e.g. kiosk, digital signage):
        • No credentials required; device authenticates with Azure AD using TPM 2.0.
      • User-driven with hybrid Azure AD join:
        • Requires Offline Domain Join Connector to create AD DS computer account.
        • Device connected to the corporate network (in order to access AD DS), registered with Autopilot, then as before.
        • Sign on to Azure AD and then to AD DS during deployment. If both use the same UPN, it keeps things simple for users!
      • Autopilot for existing devices (Windows 7 to 10 upgrades):
        • Backup data in advance (e.g. with OneDrive)
        • Deploy generic Windows 10.
        • Run Autopilot in user-driven mode (hardware hashes can’t be harvested from Windows 7, so a JSON configuration file is placed in the image – the offline equivalent of a profile; Intune will ignore the unknown device and Autopilot will use the file instead. After Windows 10 is deployed, Intune will notice a PC in the group and apply the profile, so it will work if the PC is reset in future).
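
For the manual registration route mentioned in step 1, the hardware hash is typically gathered with the Get-WindowsAutoPilotInfo script from the PowerShell Gallery. A minimal sketch, run in an elevated PowerShell session on the device (the output path is hypothetical):

  # Install the community script from the PowerShell Gallery and harvest the hardware hash
  Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
  Install-Script -Name Get-WindowsAutoPilotInfo -Force
  Get-WindowsAutoPilotInfo.ps1 -OutputFile C:\Temp\AutopilotHWID.csv

The resulting CSV (serial number plus hardware hash) is then uploaded to the Intune portal as described above.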

Autopilot roadmap (1903) includes:

  • “White glove” pre-provisioning for end users: QR code to track, print welcome letter and shipping label!
  • Enrolment status page (ESP) improvements.
  • Cortana voiceover disabled on OOBE.
  • Self-updating Autopilot (update Autopilot without waiting to update Windows).

Maintaining your hybrid environment

Common requirements in an IaaS environment include wanting to use a policy-based configuration with a single management and monitoring solution and auto-remediation.

Azure Automation allows configuration and inventory; monitoring and insights; and response and automation. The Azure Portal provides a single pane of glass for hybrid management (Windows or Linux; any cloud or on-premises).

For configuration and state management, use Azure Automation State Configuration (built on PowerShell Desired State Configuration).
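
As a flavour of what Azure Automation State Configuration consumes, here’s a minimal PowerShell DSC configuration – a sketch only, with a hypothetical configuration name – that could be imported, compiled and assigned to registered nodes to keep a Windows feature present:

  configuration WebServerBaseline {
      Import-DscResource -ModuleName 'PSDesiredStateConfiguration'

      Node 'localhost' {
          # Ensure IIS is installed; Azure Automation reports (and can correct) any drift
          WindowsFeature IIS {
              Name   = 'Web-Server'
              Ensure = 'Present'
          }
      }
  }

Onboarded nodes then report their compliance with the assigned configuration back to the Automation account.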

Inventory can be managed with Log Analytics extensions for Windows or Linux. An Azure Monitoring Agent is available for on-premises or other clouds. Inventory is not instant though – it can take 3-10 minutes for Log Analytics to ingest the data. Changes can be visualised (for state tracking purposes) in the Azure Portal.

Azure Monitor and Log Analytics can be used for data-driven insights, unified monitoring and workflow integration.

Responding to alerts can be achieved with Azure Automation Runbooks, which store scripts in Azure and run them in Azure. Scripts can use PowerShell or Python, so both Windows and Linux are supported. A webhook can be triggered with an HTTP POST request. A hybrid runbook worker can be used to run on-premises or in another cloud.

It’s possible to use the Azure VM agent to run a command on a VM from the Azure portal, without logging in to the VM!
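
A hedged example of that last point, using the Az PowerShell module rather than the portal (resource names and the script path are hypothetical):

  # Run a script inside the guest via the VM agent – no RDP or SSH session required
  Invoke-AzVMRunCommand -ResourceGroupName 'rg-demo' -VMName 'vm01' -CommandId 'RunPowerShellScript' -ScriptPath '.\Get-ServiceStatus.ps1'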

Windows Server 2019

Windows Server strategy starts with Azure. Windows Server 2019 is focused on:

  • Hybrid:
    • Backup/connect/replicate VMs.
    • Storage Migration Service to migrate unstructured data into Azure IaaS or another on-premises location (from 2003+ to 2016/19).
      1. Inventory (interrogate storage, network security, SMB shares and data).
      2. Transfer (pairings of source and destination), including ACLs, users and groups. Details are logged in a CSV file.
      3. Cutover (make the new server look like the old one – same name and IP address). Validate before cutover – ensure everything will be OK. Read-only process (except change of name and IP at the end for the old server).
    • Azure File Sync: centralise file storage in Azure and transform existing file servers into hot caches of data.
    • Azure Network Adapter to connect servers directly to Azure networks (see above).
  • Hyper-converged infrastructure (HCI):
    • The server market is still growing and is increasingly SSD-based.
    • Traditional rack looked like SAN, storage fabric, hypervisors, appliances (e.g. load balancer) and top of rack Ethernet switches.
    • Now we use standard x86 servers with local drives and software-defined everything. Manage with Admin Center in Windows Server (see below).
    • Windows Server now has support for persistent memory: DIMM-based; still there after a power-cycle.
    • The Windows Server Software Defined (WSSD) programme is the Microsoft approach to software-defined infrastructure.
  • Security: shielded VMs for Linux (VM as a black box, even for an administrator); integrated Windows Defender ATP; Exploit Guard; System Guard Runtime.
  • Application innovation: semi-annual updates are designed for containers. Windows Server 2019 is the latest LTSC release, so it includes the 1709/1803 additions:
    • Enable developers and IT Pros to create cloud-native apps and modernise traditional apps using containers and microservices.
    • Linux containers on Windows host.
    • Service Fabric and Kubernetes for container orchestration.
    • Windows subsystem for Linux.
    • Optimised images for server core and nano server.

Windows Admin Center is core to the future of Windows Server management and, because it’s based on remote management, servers can be core or full installations – even containers (logs and console). Download from http://aka.ms/WACDownload

  • 50MB download, no need for a server. Runs in a browser and is included in Windows/Windows Server licence
  • Runs on a layer of PowerShell. Use the >_ icon to see the raw PowerShell used by Admin Center (copy and paste to use elsewhere).
  • Extensible platform.

What’s next?

  • More cloud integration
  • Update cadence is:
    • Insider builds every 2 weeks.
    • Semi-annual channel every 6 months (specifically for containers):
      • 1709/1803/1809/19xx.
    • Long-term servicing channel
      • Every 2-3 years.
      • 2016, 2019 (in September 2018), etc.

Windows Server 2008 and 2008 R2 reach the end of support in January 2020 but customers can move Windows Server 2008/2008 R2 servers to Azure and get 3 years of security updates for free (on-premises support is chargeable).

Further reading: What’s New in Windows Server 2019.

Containers/Azure Kubernetes Service

Containers:

  • Are fully-packaged applications that use a standard image format for better resource isolation and utilisation.
  • Are ready to deploy via an API call.
  • Are not Virtual machines (for Linux).
  • Do not use hardware virtualisation.
  • Offer no hard security boundary (for Linux).
  • Can be more cost effective/reliable.
  • Have no GUI.

Kubernetes is:

  • An open source system for auto-deployment, scaling and management of containerized apps.
  • Container Orchestrator to manage scheduling; affinity/anti-affinity; health monitoring; failover; scaling; networking; service discovery.
  • Modular and pluggable.
  • Self-healing.
  • Designed by Google based on a system they use to run billions of containers per week.
  • Described in “Phippy goes to the zoo”.

Azure container offers include:

  • Azure Container Instances (ACI): containers on demand (Linux or Windows) with no need to provision VMs or clusters; per-second billing; integration with other Azure services; a public IP; persistent storage.
  • Azure App Service for Linux: a fully-managed PaaS for containers including workflows and advanced features for web applications.
  • Azure Kubernetes Service (AKS): a managed Kubernetes offering.
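
As a quick illustration of how little infrastructure there is to manage with AKS, here’s a minimal Az PowerShell sketch that creates a small cluster and merges its credentials into the local kubeconfig – the names and node count are hypothetical and parameters vary a little between Az.Aks module versions:

  # Create a resource group and a two-node AKS cluster (hypothetical names)
  New-AzResourceGroup -Name 'rg-aks' -Location 'uksouth'
  New-AzAksCluster -ResourceGroupName 'rg-aks' -Name 'aks-demo' -NodeCount 2

  # Pull the cluster credentials for kubectl, then work with the cluster as normal
  Import-AzAksCredential -ResourceGroupName 'rg-aks' -Name 'aks-demo'
  kubectl get nodes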

Wrap-up

So, there you have it. An extremely long blog post with some highlights from my attendance at Microsoft Ignite | The Tour: London. It’s taken a while to write up so I hope the notes are useful to someone else!

The Windows Network Connection Status Icon (NCSI)


Last night, whilst working in the Premier Inn close to the office, I noticed the browser going to an interesting URI after I connected to the hotel Wi-Fi.  That URI was http://www.msftconnecttest.com/redirect and a little more research tells me it’s used by Windows 10 to detect whether the PC has an Internet connection or not.

The feature is actually the Network Connection Status Icon (NCSI) and, more accurately, the URIs used are:

The URI I saw actually redirects to MSN whereas the ones above return static text to indicate a successful connection.
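
The list of probe URIs didn’t survive in this post, so treat the specifics below as my understanding rather than gospel: the Windows 10 active probe fetches connecttest.txt from msftconnecttest.com and expects the static text “Microsoft Connect Test”. A small PowerShell sketch to check the settings on your own machine and exercise the probe by hand:

  # NCSI active probe settings (value names can vary between Windows versions)
  Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\NlaSvc\Parameters\Internet' |
      Select-Object ActiveWebProbeHost, ActiveWebProbePath, ActiveWebProbeContent

  # Exercise the probe manually
  (Invoke-WebRequest -UseBasicParsing 'http://www.msftconnecttest.com/connecttest.txt').Content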

For those who want to know more, there’s a detailed technical reference on TechNet, which dates back to Windows Vista and an extensive blog post on the Network Connection Status Icon.

Short takes: deleting bit.ly Bitlinks; backing up and restoring Sticky Notes; accessing cmdlets after installing Azure PowerShell


Another collection of short notes to add to my digital memory…

Deleting bit.ly links

Every now and again, I spot some spam links in my Twitter feed – usually prefixed [delicious]. That suggests to me that there is an issue in Delicious or in Twitterfeed (the increasingly unreliable service I use to read certain RSS feeds and tweet on my behalf) and, despite password resets (passwords are so insecure) it still happens.

A few days ago I spotted some of these spam links still in my bit.ly links (the link shortener behind my mwil.it links, who also own Twitterfeed) and I wanted to permanently remove them.

Unfortunately, according to the “how do I delete a Bitlink” bit.ly knowledge base article – you can’t.

Where does Windows store Sticky Notes?

Last Friday (the 13th) I wrote about saving my work before my PC was rebuilt.

One thing I forgot about was the plethora of Sticky Notes on my desktop so, today, I was searching for advice on where to find them (in my backup) so that I could restore them.

It turns out that Sticky Notes are stored in user profiles, under %appdata%\Microsoft\Sticky Notes, in a file called StickyNotes.snt. Be aware though, that the folder is not created until the Sticky Notes application has been run at least once. Restoring my old notes was as easy as:

  1. Run the Sticky Notes desktop application in Windows.
  2. Close Sticky Notes.
  3. Overwrite the StickyNotes.snt file with a previous copy.
  4. Re-open Sticky Notes.
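
The same backup and restore can be scripted. A minimal sketch, assuming a hypothetical backup location and that Sticky Notes is closed:

  $snt = "$env:APPDATA\Microsoft\Sticky Notes\StickyNotes.snt"

  # Back up the current notes (D:\Backup is a hypothetical destination)
  Copy-Item -Path $snt -Destination 'D:\Backup\StickyNotes.snt'

  # Restore a previous copy over the top (run Sticky Notes once first so the folder exists)
  Copy-Item -Path 'D:\Backup\StickyNotes.snt' -Destination $snt -Force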

Azure PowerShell installation requires a restart (or explicit loading of modules)

This week has involved a fair amount of restoring tools/settings to a rebuilt PC (did I mention that mine died in a heap last Friday? If only the hardware and software were supplied by the same vendor – oh they are!). After installing the Azure PowerShell package from the SCCM Software Center, I found that cmdlets returned errors like:

Get-AzureRmResource : The term ‘Get-AzureRmResource’ is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.

After some RTFMing, I found this:

This can be corrected by restarting the machine or importing the cmdlets from C:\Program Files\WindowsPowerShell\Modules\Azure\XXXX\ as following (where XXXX is the version of PowerShell installed):

  import-module "C:\Program Files\WindowsPowerShell\Modules\Azure\XXXX\azure.psd1"
  import-module "C:\Program Files\WindowsPowerShell\Modules\Azure\XXXX\expressroute\expressroute.psd1"
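
Rather than hard-coding the version number, a small sketch like this finds the newest installed version and imports the module – a workaround only, since a restart achieves the same thing:

  # Find the most recent Azure PowerShell version folder and import the module from it
  $moduleRoot = 'C:\Program Files\WindowsPowerShell\Modules\Azure'
  $latest = Get-ChildItem -Path $moduleRoot -Directory | Sort-Object Name -Descending | Select-Object -First 1
  Import-Module (Join-Path $latest.FullName 'azure.psd1')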

Short takes: Unicode characters in Windows; OS X Remote Disc goes AWOL


More micro-posts from the collection of open tabs in my browser…

Unicode characters in Windows

Sometimes, when tweeting, it’s useful to be able to type the Unicode horizontal ellipsis (…) rather than three full stops (...). It might look similar, but that’s two fewer characters out of 140. I remember that, back in the early days of Windows, I could enter special characters using the numeric keypad – and it seems that still works (sort of): FireFormat.Info has some useful information on entering Unicode characters in Microsoft Windows.
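
If you’d rather generate the character than remember a key sequence, PowerShell can emit it from its code point (U+2026) and drop it on the clipboard:

  # Horizontal ellipsis from its Unicode code point, piped to the Windows clipboard
  $ellipsis = [char]0x2026
  $ellipsis | clip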

Mac OS X Remote Disc goes AWOL whilst installing Adobe Lightroom

My new Mac Mini doesn’t have an optical drive. That’s not generally a problem except I needed to install Lightroom on it, so I used OS X’s Remote Disc technology to share the DVD drive from my old MacBook across the network.  The software installation was progressing nicely until, right at the end, the Adobe installer wanted me to insert the disc! As I was already connected to a logical disc, I had no way forward but to abandon the installation, connect a USB DVD drive and try again.  Seems it’s not the universal solution to accessing optical media that I had hoped…

To add insult to injury, I then found (thanks to the Lightroom Queen) that the Lightroom downloads on the Adobe website are the full programme, so I could have downloaded the software and installed it locally – all I really needed was my license key!

Accessing my iCloud photostream from a Windows PC


I use a lot of Apple products and, not surprisingly, when iOS 5 was released, I upgraded my iPhone and my iPad. One of the big advancements with iOS 5 is the integration with iCloud, Apple’s cloud service for synchronising data between devices so, when I took a look a few days later, I was a bit confused. From a Windows PC I logged in and saw links for Mail, Contacts, Calendar, Find My iPhone and iWork – all with familiar icons – but what I couldn’t fathom was where my photostream was. Certainly not visible in iCloud…

It turns out that there is a separate application needed to sync an iCloud photostream with a Windows PC. I installed it, it crashed (something to do with being behind our proxy servers at work, I think) but, after a PC reboot and connection to my home network, photos from my iOS devices started showing up in the %userprofile%\My Pictures\Photo Stream\My Photo Stream folder. The iCloud Control Panel for Windows also integrates with Safari 5.1.1 or Internet Explorer 8 bookmarks, and with Outlook 2007 Contacts and Calendars.

All I need now is the ability to sync ActiveSync contacts from my iPad (the ones I have in Office 365)… I wish.

An alternative enterprise desktop


Earlier this week, I participated in an online event that looked at the use of Linux (specifically Ubuntu) as an alternative to Microsoft Windows on the enterprise desktop.

It seems that every year is touted as the year of Linux on the desktop – so why hasn’t it happened yet? Or maybe 2011 really is the year of Linux on the desktop and we’ll all be using Google Chrome OS soon. Somehow I don’t think so.

You see, the trouble with any of the “operating system wars” arguments is that they miss the point entirely. There is a trilogy of people, process and technology at stake – and the operating system is just one small part of one of those elements. It’s the same when people start to compare desktop delivery methods – thick, thin, virtualised, whatever – it’s how you manage the desktop that counts.

From an end user perspective, many users don’t really care whether their PC runs Windows, Linux, or whatever-the-next-great-thing-is. What they require (and what the business requires – because salaries are expensive) is a system that is “usable”. Usability is in itself a subjective term, but that generally includes a large degree of familiarity – familiarity with the systems that they use outside work. Just look at the resistance to major user interface changes like the Microsoft Office “ribbon” – now think what happens when you change everything that users know about using a PC. End users also want something that works with everything else they use (i.e. an integrated experience, rather than jumping between disparate systems). And, for those who are motivated by the technology, they don’t want to feel that there is a two tier system whereby some people get a fully-featured desktop experience and others get an old, cascaded PC, with a “light” operating system on it.

From an IT management standpoint, we want to reduce costs. Not just hardware and software costs but the costs of support (people, process and technology). A “free” desktop operating system is just a very small part of the mix; supporting old hardware gets expensive; and the people costs associated with major infrastructure deployments (whether that’s a virtual desktop or a change of operating system) can be huge. Then there’s application compatibility – probably the most significant headache in any transformation. Yes, there is room for a solution that is “fit for purpose” and that may not be the same solution for everyone – but it does still need to be manageable – and it needs to meet all of the organisation’s requirements from a governance, risk and compliance perspective.

Even so, the days of allocating a Windows PC to everyone in an effort to standardise every single desktop device are starting to draw to a close. IT consumerisation is bringing new pressures to the enterprise – not just new device classes but also a proliferation of operating system environments. Cloud services (for example, consuming software as a service) are a potential enabler – helping to get over the hurdles of application compatibility by boiling everything down to the lowest common denominator (a browser). The cloud is undoubtedly here to stay and will certainly evolve but even SaaS is not as simple as it sounds, with multiple browser choices, extensions, plug-ins, etc. It seems that, time and time again, it’s the same old legacy applications (generally specified by business IT functions, not corporate IT) that make life difficult and prevent the CIO from achieving the utopia that they seek.

2011 won’t be the year of Linux on the desktop – but it might just be the year when we stopped worrying about standardisation so much; the year when we accepted that one size might not fit all; and the year when we finally started to think about applications and data, rather than devices and operating systems.

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog.]

Hardware specific application targeting with MDT 2010


Guest Post
[Editor’s note: this post was originally published on Garry Martin’s blog on 28 October 2009. As Garry’s closing down his own site but the content is still valid, he asked me if I’d like to post it here and I gratefully accepted!]

I’m running a Proof of Concept (PoC) at work at the moment which is making use of Microsoft Deployment Toolkit (MDT) 2010. Whilst most of the drivers we need can be managed by using the Out-of-Box Drivers Import function, some are delivered by the OEM as .EXE or .MSI packages. Whilst we could use multiple Task Sequences to manage these, or even select the applications individually at build time, our preference was to use some sort of hardware specific targeting.

Process

First of all, we needed to uniquely identify the hardware, and for this purpose we used the Plug and Play (PnP) Device ID, or hardware ID as it is sometimes called.

To determine the hardware IDs for a device by using Device Manager:

  1. Install the device on a test computer
  2. Open Device Manager
  3. Find your device in the list
  4. Right-click the entry for your device, and then click Properties
  5. In the Device Properties dialog box, click the Details tab
  6. In the Property list, click Hardware Ids
  7. Under Value, make a note of the characters displayed. They are arranged with the most specific at the top to the most general at the bottom. You can select one or more items in the list, and then press CTRL+C to copy them to the clipboard.

In our case, the Sierra Wireless MC8755 Device gave us USB\VID_1199&PID_6802&REV_0001 as the most specific value and USB\VID_1199&PID_6802 as the least specific, so we made a note of these before continuing.
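
As an alternative to clicking through Device Manager, the same hardware IDs can be pulled from WMI – a sketch using the cmdlets of the period, with the Sierra Wireless vendor ID from above as the filter:

  # List PnP devices whose device ID matches the vendor ID we care about
  Get-WmiObject -Class Win32_PnPEntity |
      Where-Object { $_.DeviceID -like 'USB\VID_1199*' } |
      Select-Object Name, DeviceID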

Next, we downloaded the Sierra Wireless MC87xx 3G Watcher .MSI package from our notebook OEM support site. Sierra Wireless have instructions for performing a silent install of the 3G Watcher package, so we used those to understand the installation command we would need to use.

So, we had a unique ID for targeting, the installation package, and the installation command line we would need to use. Now we needed to configure MDT to deploy it. First, we create a New Application.

  1. In the MDT 2010 Deployment Workbench console tree, right-click Applications, and click New Application
  2. On the Application Type page, click Next to install an application and copy its source files to the deployment share
  3. On the Details page, type the application’s name in the Application Name box, and click Next
  4. On the Source page, type the path or browse to the folder containing the application’s source files, and click Next
  5. On the Destination page, click Next to use the default name for the application in the deployment share
  6. On the Command Details page, type the command you want to use to install the application, and click Next. We used the following:
    msiexec.exe /i 3GWatcher.msi /qn
  7. On the Summary page, review the application’s details, and click Next
  8. On the Confirmation page, click Finish to close the New Application Wizard.

Next we modify the Task Sequence and create our query.

  1. In the MDT 2010 Deployment Workbench console tree, click Task Sequences
  2. In the details pane, right-click the name of the Task Sequence you want to add the Application to, and then click Properties
  3. In the Task Sequence Properties dialog box, click the Task Sequence tab
  4. Expand State Restore and click on Install Applications
  5. Click the Add button, and select General, then Install Application
  6. On the Properties tab for Install Application, type the application’s name in the Name box, and click the Options tab
  7. On the Options tab, click the Add button and select If statement
  8. In the If Statement Properties dialog box, ensure All Conditions is selected and click OK
  9. On the Options tab, click the Add button and select Query WMI

This is where we’ll now use a WMI query that will provide our Hardware Specific Application Targeting. You’ll need to modify this for your particular hardware, but we previously discovered that our least specific Device ID value was USB\VID_1199&PID_6802 so we will use this to help form our query.

  1. In the Task Sequence WMI Condition dialog box, ensure the WMI namespace is root\cimv2 and type the following in the WQL Query text box, clicking OK when finished:
    SELECT * FROM Win32_PNPEntity WHERE DeviceID LIKE '%VID_1199&PID_6802%'
  2. Click OK to exit the Task Sequences dialog box

And that’s it. When you deploy a computer using the modified Task Sequence, the WMI query will run and, if matched, install the application. If a match can’t be found, the application won’t be installed. Hardware Specific Application Targeting in a nutshell.

Keeping Windows alive with curated computing


Like it or loathe it, there’s no denying that the walled garden approach Apple has adopted for application development on iOS (the operating system used for the iPhone, iPad and now new iPods) has been successful. Forrester Research talk about this approach using the term “Curated Computing” – a general term for an environment where there is a gatekeeper controlling the availability of applications for a given platform. So, does this reflect a fundamental shift in the way that we buy applications? I believe it does.

Whilst iOS, Android (Google’s competing mobile operating system) and Windows Phone 7 (the new arrival from Microsoft) have adopted the curated computing approach (albeit with tighter controls over entry to Apple’s AppStore) the majority of the world’s computers are slightly less mobile. And they run Windows. Unfortunately, Windows’ biggest strength (its massive ecosystem of compatible hardware and software) is also its nemesis – a whole load of the applications that run on Windows are, to put it bluntly, a bit crap!

This is a problem for Microsoft. On the one hand, it gives their operating system a bad name (somewhat unfairly, in my opinion – Windows is associated with its infamous “Blue Screen of Death”, yet we rarely hear about Linux/Mac OS X kernel panics or iOS lockups); but, on the other hand, it’s the same broad device and application support that has made Windows such a success over the last 20 years.

What we’re starting to see is a shift in the way that people approach personal computing. Over the next few years there will be an explosion in the number of mobile devices (smart phones and tablets) used to access corporate infrastructure, along with a general acceptance of bring your own computer (BYOC) schemes – maybe not for all organisations but for a significant number. And that shift gives us the opportunity to tidy things up a bit.

[Image: Remove the apps at the left side of the diagram and only the good ones will be left...]

A few weeks ago, Jon Honeyball was explaining a concept to me and, like many of the concepts that Jon puts forward, it makes perfect sense (and infuriates me that I’d never looked at things this way before). If we think of the quality of software applications, we can consider that, statistically, they follow a normal distribution. That is to say that the applications on the left of the curve tend towards the software that we don’t want on our systems – from malware through to poorly-coded applications. Meanwhile, on the right of the curve are the better applications, right through to the Microsoft and Adobe applications that are in broad use and generally set a high standard in terms of quality. The peak on the curve represents the point with the most apps – basically, most applications can be described as “okay”. What Microsoft has to do is lose the leftmost 50% of applications from this curve, instantly raising the quality bar for Windows applications. One way to do this is curated computing.

Whilst Apple have been criticised for the lack of transparency in their application approval process (and there are some bad applications available for iOS too), this is basically what they have managed to achieve through their AppStore.

If Microsoft can do the same with Windows Phone 7, and then take that operating system and apply it to other device types (say, a tablet – or even the next version of their PC client operating system) they might well manage to save their share of the personal computing marketplace as we enter the brave new world of user-specific, rather than device-specific computing.

At the moment, the corporate line is that Windows 7 is Microsoft’s client operating system but, even though some Windows 7 tablets can be expected, they miss the mark by some way.

Time after time, we’ve seen Microsoft stick to their message (i.e. that their way is the best and that everyone else is wrong), right up to the point when they announce a new product or feature that seems like a complete U-turn.  That’s why I wouldn’t be too surprised to see them come up with a new approach to tablets in the medium term… one that uses an application store model and a new user interface. One can only live in hope.

Installing Windows from a network server without Windows Deployment Services


I’d like to start this post with a statement:

Windows Deployment Services (WDS) is a useful role in Windows Server 2008 R2.  It’s free (to licensed Windows users), supports multitasking, and is a perfectly good method of pushing Windows images to clients…

Unfortunately that statement has a caveat:

… but it needs to be installed on an Active Directory-member computer.

For some, that’s a non-starter.  And sometimes, you just want a quick and dirty solution.

I have a small dedicated server at home to run Active Directory along with basic network services (DNS, DHCP, etc.) for my home IT.  I also run Philippe Jounin’s excellent TFTP Daemon (service edition) on it in order to support image loads on my Cisco 7940 IP Phone.

In order to rebuild the Hyper-V server that I use for infrastructure test and development, I wanted to boot across the network and install Windows Server 2008 R2 – and a few days ago I found Mark Kubacki’s post about TFTPd32 and DHCP Server – Windows Deployment Services without WDS. Perfect!  No need to install another role on my little Atom-powered server – particularly as, once this server is built, I’ll probably install WDS on it  to deploy images to my various test virtual machines!

So, this is the process – with thanks to Mark Kubacki, and to Ryan T Adams (who wrote about installing Vista without a CD Drive using TFTP – for instance, installing Windows on a netbook) who were gracious enough to blog about their experiences and give me something to build upon:

  1. Download tftpboot.exe from Ryan T Adams’ site and run it to extract the contents to a suitable hard drive location (i.e. the TFTP root folder).  Unfortunately, you probably won’t need most of this 154MB download (more on that in a moment) but it will get you started.
  2. Start tftpd32.exe (or copy the files to your TFTP root, if you are already running a TFTP service, as I was) and add tftpd32.exe (or tftpd32_svc.exe) as a Windows Firewall exception (you could just disable the firewall but I don’t recommend that approach).
  3. Either set TFTPD32 to act as a DHCP server and specify the boot file options (as Ryan describes), or configure DHCP options 066 and 067 (boot server host name and boot file name) on another DHCP server (Mark shows how to do this for the Windows DHCP Server role) using the IP address of the TFTP server and the boot file name of boot\pxeboot.com (see the sketch after this list).
  4. Make sure that the TFTP Server is set to include PXE capability in the advanced TFTP options and that its DHCP Server capability is turned off if you are using another DHCP server.
  5. Restart the TFTP Server (or service) to pick up the configuration changes.
  6. Boot a computer (or virtual machine) from its network card, press F12 when prompted and wait for Windows PE to load, then map a drive to another machine on the network which is sharing the Windows media (I use Slysoft Virtual Clone Drive to mount an operating system ISO file and I’ve shared the virtual drive).
  7. Switch to the newly mapped drive and type setup.exe to run Windows Setup.
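
For step 3, if the existing DHCP server is a later version of Windows Server with the DhcpServer PowerShell module available (the 2008 R2-era alternatives are the DHCP console or netsh, as Mark’s post shows), options 066 and 067 can be set with a couple of cmdlets – the scope and IP address values here are hypothetical:

  # Point PXE clients at the TFTP server (option 66) and the boot file (option 67)
  Set-DhcpServerv4OptionValue -ScopeId 192.168.1.0 -OptionId 66 -Value '192.168.1.10'
  Set-DhcpServerv4OptionValue -ScopeId 192.168.1.0 -OptionId 67 -Value 'boot\pxeboot.com'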

Unfortunately, the version of the Windows Preinstallation Environment (Windows PE) that Ryan has supplied in tftpboot.exe is a 32-bit version (of Windows PE 2.0, I think).  When I tried to use this to install Windows Server 2008 R2 (which is 64-bit only), I was greeted with the following message:

This version of Z:\setup.exe is not compatible with the version of Windows you’re running.  Check your computer’s system information to see whether you need a x86 (32-bit) or x64 (64-bit) version of the program, and then contact the software publisher.

I needed a 64-bit version of Windows PE.  No problem.  That’s included in the Windows Automated Installation Kit (WAIK), so I overwrote Ryan’s winpe.wim with the one from %programfiles%\Windows AIK\Tools\PETools\amd64, and restarted the computer I wanted to build.  This time Windows Setup ran with no issues and Windows Server was installed successfully.

Even though I used TFTPD32, this method could be used to install Windows from just about any TFTP server (it could even be running on a totally different operating system, I guess), or even to load another WIM file (i.e. not Windows PE) from a network boot. I’m sure if I had more time I could come up with all sorts of scenarios (boot Windows directly from the network?) but, for now, I’ll stick to using this method as a WDS replacement.

Thick, thin, virtualised, whatever: it’s how you manage the desktop that counts


In the second of my post-TechEd blog posts, I’ll take a look at one of the sessions I attended where Microsoft’s Eduardo Kassner spoke about various architectures for desktop delivery in relation to Microsoft’s vision for the Windows optimised desktop (CLI305). Again, I’ll stick with highlights in note form as, if I write up the session in full, it won’t be much fun to read!

  • Kassner started out by looking at who defines the desktop environment, graphing desktop performance against configuration control:
    • At the outset, the IT department (or the end user) installs approved applications and both configuration and performance are optimal.
    • Then the user installs some “cool shareware”, perhaps some other approved applications or personal software (e.g. iTunes) and it feels like performance has bogged down a little.
    • As time goes on, the PC may suffer from a virus attack, and the organisation needs an inventory of the installed applications, and the configuration is generally unknown. Performance suffers as a result of the unmanaged change.
    • Eventually, without control, update or maintenance, the PC becomes “sluggish”.
  • Complaints about desktop environments typically come down to: slow environment; application failures; complicated management; complicated maintenance; difficulty in updating builds, etc.
  • Looking at how well we manage systems: image management; patch management; hardware/software inventory; roles/profiles/personas; operating system or application deployment; and application lifecycle are all about desktop configuration. And the related processes are equally applicable to a “rich client”, “terminal client” or a “virtual client”.
  • Whatever the architecture, the list of required capabilities is the same: audit; compliance; configuration management; inventory management; application lifecycle; role based security and configuration; quality of service.
  • Something else to consider is that hardware and software combinations grow over time: new generations of hardware are launched (each with new management capabilities) and new operating system releases support alternative means of increasing performance, managing updates and configuration – in 2008, Gartner wrote:

    “Extending a notebook PC life cycle beyond three years can result in a 14% TCO increase”

    [source: Gartner, Age Matters When Considering PC TCO]

    and a few months earlier, they wrote that:

    “Optimum PC replacement decisions are based on the operating system (OS) and on functional compatibility, usually four years”

    [source: Gartner, Operational Considerations in Determining PC Replacement Life Cycle]

    Although when looking across a variety of analyst reports, three years seems to be the optimal point (there are some variations depending on the considerations made, but the general window is 2-5 years).

  • Regardless of the PC replacement cycle, the market is looking at two ways to “solve” the problem of running multiple operating system versions on multiple generations of hardware: “thin client” and “VDI” (also known as hosted virtual desktops) – but Kassner does not agree that these technologies alone can resolve the issues:
    • In 1999, thin client shipments were 700,000 against a market size of 133m PCs [source: IDC 1999 Enterprise Thin Client Year in Review] – that’s around 0.6% of the worldwide desktop market.
    • In 2008, thin clients accounted for 3m units out of an overall market of 248m units [source: Gartner, 2008 PC Market Size Worldwide] – that’s up to 1.2% of the market, but still a very tiny proportion.
    • So what about the other 98.8% of the market? Kassner used 8 years’ worth of analyst reports to demonstrate that the TCO between a well-managed traditional desktop client and a Windows-based terminal was almost identical – although considerably lower than an unmanaged desktop. The interesting point was that in recent years the analysts stopped referring to the different architectures and just compared degrees of management! Then he compared VDI scenarios, showing that there was a 10% variance in TCO between a VDI desktop and a wide-open “regular desktop” but, when that desktop was locked down and well-managed, the delta was only 2%. That 2% saving is not enough to cover the setup cost of a VDI infrastructure! Kassner did stress that he wasn’t saying VDI was no good at all – just that it was not for all and that a similar benefit can be achieved from simply virtualising the applications:
    • “Virtualized applications can reduce the cost of testing, packaging and supporting an application by 60%, and they reduced overall TCO by 5% to 7% in our model.”

      [source: Gartner, TCO of Traditional Software Distribution vs. Application Virtualization]

  • Having argued that thick vs. thin vs. VDI makes very little difference to desktop TCO, Kassner continued by commenting that the software plus services platform provides more options than ever, with access to applications from traditional PC, smartphone and web interfaces and a mixture of corporately owned and non-corporate assets (e.g. employees’ home PCs, or offshore contractor PCs). Indeed, application compatibility drives client device options and this depends upon the supported development stack and presentation capabilities of the device – a smartphone (perhaps the first example of IT consumerisation – and also a “thin client” device in its own right) is an example of a device that provides just a subset of the overall feature set and so is not as “forgiving” as a PC – one size does not fit all!
  • Kassner then went on to discuss opportunities for saving money with rich clients; but his summary was that it’s still a configuration management discussion:
    • Using a combination of group policy, a corporate base image, data synchronisation and well-defined security policies, we can create a well-managed desktop.
    • For this well-managed desktop, whether it is running on a rich client, a remote desktop client, with virtualised applications, using VDI or as a blade PC, we still need the same processes for image management, patch management, hardware/software inventory, operating system or application deployment, and application lifecycle management.
    • Once we can apply the well-managed desktop to various user roles (e.g. mobile, office, or task-based workers) on corporate or non-corporate assets, we can say that we have an optimised desktop.
  • Analysts indicate that “The PC of 2012 Will Morph Into the Composite Work Space” [source: Gartner], combining client hypervisors, application virtualisation, persistent personalisation and policy controls: effectively separating the various components for hardware, operating system and applications.  Looking at Microsoft’s view on this (after all, this was a Microsoft presentation!), there are two products to look at – both of which are Software Assurance benefits from the Microsoft Desktop Optimization Pack (MDOP) (although competitive products are available):
    • Application virtualisation (Microsoft App-V or similar) creates a package of an application and streams it to the desktop, eliminating the software installation process and isolating each application. This technology can be used to resolve conflicts between applications as well as to simplify application delivery and testing.
    • Desktop virtualisation (MED-V with Virtual PC or similar) creates a container with a full operating system environment to resolve incompatibility between applications and an alternative operating system, running two environments on the same PC [and, although Eduardo Kassner did not mention this in his presentation, it’s this management of multiple environments that provides a management headache without suitable management toolsets – which is why I do not recommend Windows 7 XP Mode for the enterprise].
  • Having looked at the various architectures and their (lack of) effect on TCO, Kassner moved on to discuss Microsoft’s strategy.
    • In short, dependencies create complexity, so by breaking apart the hardware, operating system, applications and user data/settings the resulting separation creates flexibility.
    • Using familiar technologies: we can manage the user data and settings with folder redirection, roaming profiles and group policy; we can separate applications using App-V, RemoteApps or MED-V, and we can run multiple operating systems (although Microsoft has yet to introduce a client-side hypervisor, or a solution capable of 64-bit guest support) on a variety of hardware platforms (thin, thick, or mobile) – creating what Microsoft refers to as the Windows Optimized Desktop.
    • Microsoft’s guidance is to take the processes that produce a well-managed client to build a sustainable desktop strategy, then to define a number of roles (real roles – not departments, or jobs – e.g. mobile, office, anywhere, task, contract/offshore) and select the appropriate distribution strategy (or strategies). To help with this, there is a Windows Optimized Desktop solution accelerator (soon to become the Windows Optimized Desktop Toolkit).

There’s quite a bit more detail in the slides but these notes cover the main points. However you look at it, the architecture for desktop delivery is not that relevant – it’s how it’s managed that counts.