Useful Links: November 2009

This content is 15 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A list of items I’ve come across recently that I found potentially useful, interesting, or just plain funny:

Building a low-power server for 24×7 infrastructure at home: Part 2 (assembly and initial configuration)

Yesterday I wrote about how I’d been looking to create a server that didn’t consume too much power to run my home infrastructure and finally settled on a mini-ITX solution.  This post continues the theme, looking at the assembly of the unit, installation of Windows Server, and finally, whether I achieved my goal of building a low-power server.

As I commented previously, it’s been at least 10 years since I built a PC from scratch and it’s still a minefield of connectors and components.  I took the Travla C158 case and Intel D945GCLF2 board that I had purchased and added 512MB of DDR2 RAM and a 250GB Seagate Barracuda (ST3250620NS) that were not being used in any of my other machines.  I didn’t fit an optical drive, electing to use a USB-attached one for setup (more on that in a moment), and the case also has a slot for a card reader, which I really should consider filling (or blanking off).

With all the components ready, this is the process I followed:

  1. Open the top cover of the case.
  2. Remove the media drive and hard drive holders.
  3. Fix the hard disk to its holder and refit.
  4. Fit the gasket that surrounds the various ports (supplied with the motherboard) to the case.
  5. Fit the motherboard and PCI riser.
  6. Fit a blanking plate for the (unused) PCI card slot.
  7. Install some DDR2 memory in the motherboard’s single memory slot.  Unfortunately the module that I used does not have a low-enough profile to allow the media drive holder to be refitted, so I’ll be looking for some more (512MB isn’t much for a modern operating system anyway).
  8. Connect the case fan to the fan header on the motherboard.
  9. Connect the side panel audio ports to the motherboard (the labelling on the connectors did not match Intel’s instructions for the motherboard, but I followed Gabriel Torres’ Hardware Secrets article on installing frontal audio plugs – sound is not really a concern for me on a server).
  10. Connect the front panel connectors to the motherboard, using the pattern shown in the instructions (noting that the case I selected doesn’t have a reset button, so pins 5 and 7 are not connected).
  11. Connect the side panel USB ports to the motherboard (single header).
  12. Connect both power connectors (2×2 and 2×10) to the motherboard.
  13. Connect the SATA hard drive power and data cables (the data cable was supplied with the motherboard, along with an IDE cable that I did not use).
  14. Install the mounting kit, ready to fix the PC to the wall of my office (I also considered placing it in the void between the downstairs ceiling and upstairs floor… but decided it wasn’t really necessary to bury the machine inside the fabric of the house!).
  15. Check that the BIOS configuration jumper block is set to pins 1 and 2 (normal) and refit the top of the case, then boot the PC.
  16. Press F2 to enter the BIOS configuration utility and change the following values:
    • Set the date and time (on the Main screen).
    • Under Boot Configuration on the Advanced screen, enable the System Fan Control.
    • On the Power screen, set the action After Power Failure to Power On.
    • On the Boot screen, ensure that Boot USB Devices First is enabled.
  17. Connect a DVD drive and boot from a Windows setup DVD.

I did manage to boot from my DVD drive once; however I had left the wrong DVD in the drive and so I rebooted.  After rebooting I was unable to get the PC to boot from the external DVD drive (a Philips SPD3900T).  I tried different USB ports, I changed BIOS options, I even reset the BIOS jumper to pins 2 and 3 (which provides access to some extra settings in the BIOS) but nothing worked, so I configured a USB thumb drive to install Windows Server 2008 R2 and that booted flawlessly.  I later found that Windows didn’t recognise the DVD drive until I had reset its power (which may also have resolved my issues in a pre-boot environment); however it’s all a bit odd (I hadn’t previously experienced any issues with this external DVD drive), and I do wonder if my motherboard has a problem booting from USB-attached optical media.
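
For anyone wanting to do the same, preparing a bootable thumb drive for the Windows Server 2008 R2 installer boils down to something like this, run from an elevated command prompt on another Windows 7 machine (the disk number and drive letters are just examples – check list disk carefully to identify the USB stick; here F: is the DVD drive and E: is the letter assigned to the stick):

    diskpart
    list disk
    select disk 1
    clean
    create partition primary
    select partition 1
    active
    format fs=ntfs quick
    assign letter=e
    exit
    f:\boot\bootsect /nt60 e:
    xcopy f:\*.* e:\ /s /e /f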

The Windows Server setup process was smooth, and all of my devices were recognised (although I did need to set the screen resolution to something sensible), leaving just the configuration of the operating system and services (adding roles, etc.).

With Windows Server 2008 R2 running, I decided to take a look at the power usage on the server and it seems to tick over at around 35W.  That’s not as low as I would like (thanks to the Intel 945GC chipset – the CPU itself only needs about 8W) but it’s a lot better than running my Dell PowerEdge 840 all day.  There are some other steps I can take too – I could potentially reduce hard disk power consumption by replacing my traditional hard drive with an SSD, as the Barracuda pulls about 9W idle and 12W when seeking (thanks to Aaron Parker for that suggestion).  It may also be that I can do some work with Windows Server to reduce its power usage – although putting a server to sleep is probably not too clever!  A brief look at the energy report from powercfg.exe -energy indicates that the USB mass storage device may be preventing processor power management from taking place – and sleep is disabled because I’m using a standard VGA driver (vgapnp.sys).  Microsoft has written a white paper on improving energy efficiency and managing power consumption in Windows Server 2008 R2 and this blog post from the Windows Server Performance team looks at adjusting processor P-states.  It may be some time before I reach my nirvana of a truly low-power infrastructure server, but I’ll write about it if and when I do – and 35W is still a lot better than 100W.
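
For reference, the relevant commands look something like this (the output path and duration are just examples – and scheme_max is the alias for the built-in Power saver plan, should I decide to switch plans):

    powercfg.exe -energy -output c:\temp\energy-report.html -duration 120
    powercfg.exe -list
    powercfg.exe -setactive scheme_max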

Building a low-power server for 24×7 infrastructure at home: Part 1 (hardware selection)

A couple of years back, I bought myself a small server with the intention that it would replace many of the various PCs I was using at home.  As a result, I did decommission some older hardware and it allowed me to consolidate the infrastructure I use for my home office network, testing and development onto one Hyper-V host, supplemented by an assortment of client devices (my netbook, my kids’ netbook, my wife’s laptop, the living room Mac, my MacBook, my work notebook) and a couple of Netgear ReadyNAS Duos (one as always-on storage, the other as an iSCSI target).  The trouble is that, even though the clients may be shut down or in a low-power state when not in use, the server runs 24×7 consuming something in the region of 100W of power – generally just so that one virtual machine is available with my DHCP, DNS, TFTP (for IP phone configuration) and basic web services (e.g. phone directory).

I decided to build a low-cost, low-power, PC to run as a small server and provide these services, along with Active Directory.  After reading Jeff Atwood’s post on ultra-low-power PCs, I was inspired; however I wasn’t sure that I’d get Windows Server running on a non-Intel chip (it might be possible, but I wasn’t going to risk it) and even though Linux could potentially offer the rest of the services I needed, a lot of my work uses Microsoft software, and I wanted to run Active Directory.

I tweeted to see if anyone had any advice for me – and there were some good suggestions – including Garry Martin’s identification of an Acer Aspire Revo nettop for around £144 at eBuyer.  That got me thinking: I wasn’t sure about the Revo’s form factor – but if I could get myself a Mini-ITX wall-mountable case and suitable motherboard, then use a spare disk and memory from my “box of PC bits”, I could probably build something for a similar price.

I should explain that I haven’t “built” a PC for about 10 years (maybe more) because there are so many decent OEM deals around, but I started to look at what was available at mini-itx.com (as highlighted in Jeff’s post).  I did also consider a Fit PC 2 (low power and Intel-based) but it uses the Z-series Atom processors, which provide Intel VT (great for Windows Virtual PC) but are not 64-bit, so are not suitable for Windows Server 2008 R2 (or for Hyper-V – although virtualisation is not something I’m looking to do with this server).  Eventually, I settled on a Travla C158 case coupled with an Intel D945GCLF2 board, which has an Intel Atom 330, giving me a dual-core x64 CPU (effectively two Atom cores in a single package) with hardware DEP (but not VT).  There were other options with lower power consumption, but they would have involved using non-Intel CPUs (and I was unable to confirm that Windows Server will run on a Via Eden, for example); however this Atom 330/Intel D945GCLF2 combination still provides plenty of computing power for my purposes (William Henning’s review on neoseeker is worth a read).

I ordered the equipment yesterday evening at 19:00 (the mini-itx.com site says “place UK orders before 7.30PM GMT for same-day despatch”) and a courier delivered it to me before 11 today.  I did end up buying some power connectors because it wasn’t clear whether they would be required or not (I left a comment in the order notes to remove them if not required, but that was ignored), and there was some confusion because I received order and stock confirmations last night but the dispatch notice wasn’t sent until today (making me think I’d missed the next-day slot).  Even with the communications issues, that sort of turnaround is fantastic – many online stores need orders by mid-afternoon for same-day dispatch.

With the hardware in place, the next step was assembly – I’ll save that for a future blog post (along with some real values for power consumption) whilst I turn my attention to configuring the software…

Thick, thin, virtualised, whatever: it’s how you manage the desktop that counts

In the second of my post-TechEd blog posts, I’ll take a look at one of the sessions I attended where Microsoft’s Eduardo Kassner spoke about various architectures for desktop delivery in relation to Microsoft’s vision for the Windows optimised desktop (CLI305). Again, I’ll stick with highlights in note form as, if I write up the session in full, it won’t be much fun to read!

  • Kassner started out by looking at who defines the desktop environment, graphing desktop performance against configuration control:
    • At the outset, the IT department (or the end user) installs approved applications and both configuration and performance are optimal.
    • Then the user installs some “cool shareware”, perhaps some other approved applications or personal software (e.g. iTunes) and it feels like performance has bogged down a little.
    • As time goes on, the PC may suffer from a virus attack, and the organisation needs an inventory of the installed applications, and the configuration is generally unknown. Performance suffers as a result of the unmanaged change.
    • Eventually, without control, update or maintenance, the PC becomes “sluggish”.
  • Complaints about desktop environments typically come down to: slow environment; application failures; complicated management; complicated maintenance; difficulty in updating builds, etc.
  • Looking at how well we manage systems: image management; patch management; hardware/software inventory; roles/profiles/personas; operating system or application deployment; and application lifecycle are all about desktop configuration. And the related processes are equally applicable to a “rich client”, “terminal client” or a “virtual client”.
  • Whatever the architecture, the list of required capabilities is the same: audit; compliance; configuration management; inventory management; application lifecycle; role based security and configuration; quality of service.
  • Something else to consider is that hardware and software combinations grow over time: new generations of hardware are launched (each with new management capabilities) and new operating system releases support alternative means of increasing performance, managing updates and configuration – in 2008, Gartner wrote:

    “Extending a notebook PC life cycle beyond three years can result in a 14% TCO increase”

    [source: Gartner, Age Matters When Considering PC TCO]

    and a few months earlier, they wrote that:

    “Optimum PC replacement decisions are based on the operating system (OS) and on functional compatibility, usually four years”

    [source: Gartner, Operational Considerations in Determining PC Replacement Life Cycle]

    Looking across a variety of analyst reports, though, three years seems to be the optimal point (there are some variations depending on the considerations made, but the general window is 2-5 years).

  • Regardless of the PC replacement cycle, the market is looking at two ways to “solve” the problem of running multiple operating system versions on multiple generations of hardware: “thin client” and “VDI” (also known as hosted virtual desktops), but Kassner does not agree that these technologies alone can resolve the issues:
    • In 1999, thin client shipments were 700,000 against a market size of 133m PCs [source: IDC 1999 Enterprise Thin Client Year in Review] – that’s around 0.5% of the worldwide desktop market.
    • In 2008, thin clients accounted for 3m units out of an overall market of 248m units [source: Gartner, 2008 PC Market Size Worldwide] – that’s up to 1.2% of the market, but still a very tiny proportion.
    • So what about the other 98.8% of the market? Kassner used 8 years’ worth of analyst reports to demonstrate that the TCO for a well-managed traditional desktop client and a Windows-based terminal was almost identical – although considerably lower than for an unmanaged desktop. The interesting point was that in recent years the analysts stopped referring to the different architectures and just compared degrees of management! Then he compared VDI scenarios: showing that there was a 10% variance in TCO between a VDI desktop and a wide-open “regular desktop” but when that desktop was locked down and well-managed the delta was only 2%. That 2% saving is not enough to cover the setup cost of a VDI infrastructure! Kassner did stress that he wasn’t saying VDI was no good at all – just that it is not for everyone and that a similar benefit can be achieved from simply virtualising the applications:
    • “Virtualized applications can reduce the cost of testing, packaging and supporting an application by 60%, and they reduced overall TCO by 5% to 7% in our model.”

      [source: Gartner, TCO of Traditional Software Distribution vs. Application Virtualization]

  • Having argued that thick vs. thin vs. VDI makes very little difference to desktop TCO, Kassner continued by commenting that the software plus services platform provides more options than ever, with access to applications from traditional PC, smartphone and web interfaces and a mixture of corporately owned and non-corporate assets (e.g. employees’ home PCs, or offshore contractor PCs). Indeed, application compatibility drives client device options and this depends upon the supported development stack and presentation capabilities of the device – a smartphone (perhaps the first example of IT consumerisation – and also a “thin client” device in its own right) is an example of a device that provides just a subset of the overall feature set and so is not as “forgiving” as a PC – one size does not fit all!
  • Kassner then went on to discuss opportunities for saving money with rich clients; but his summary was that it’s still a configuration management discussion:
    • Using a combination of group policy, a corporate base image, data synchronisation and well-defined security policies, we can create a well-managed desktop.
    • For this well-managed desktop, whether it is running on a rich client, a remote desktop client, with virtualised applications, using VDI or as a blade PC, we still need the same processes for image management, patch management, hardware/software inventory, operating system or application deployment, and application lifecycle management.
    • Once we can apply the well-managed desktop to various user roles (e.g. mobile, office, or task-based workers) on corporate or non-corporate assets, we can say that we have an optimised desktop.
  • Analysts indicate that “The PC of 2012 Will Morph Into the Composite Work Space” [source: Gartner], combining client hypervisors, application virtualisation, persistent personalisation and policy controls: effectively separating the various components for hardware, operating system and applications.  Looking at Microsoft’s view on this (after all, this was a Microsoft presentation!), there are two products to look at – both of which are Software Assurance benefits from the Microsoft Desktop Optimization Pack (MDOP) (although competitive products are available):
    • Application virtualisation (Microsoft App-V or similar) creates a package of an application and streams it to the desktop, eliminating the software installation process and isolating each application. This technology can be used to resolve conflicts between applications as well as to simplify application delivery and testing.
    • Desktop virtualisation (MED-V with Virtual PC or similar) creates a container with a full operating system environment to resolve incompatibility between applications and an alternative operating system, running two environments on the same PC (and, although Eduardo Kassner did not mention this in his presentation, it’s this management of multiple environments that creates a management headache without suitable management toolsets – which is why I do not recommend Windows 7 XP Mode for the enterprise).
  • Having looked at the various architectures and their (lack of) effect on TCO, Kassner moved on to discuss Microsoft’s strategy.
    • In short, dependencies create complexity, so by breaking apart the hardware, operating system, applications and user data/settings the resulting separation creates flexibility.
    • Using familiar technologies: we can manage the user data and settings with folder redirection, roaming profiles and group policy; we can separate applications using App-V, RemoteApps or MED-V, and we can run multiple operating systems (although Microsoft has yet to introduce a client-side hypervisor, or a solution capable of 64-bit guest support) on a variety of hardware platforms (thin, thick, or mobile) – creating what Microsoft refers to as the Windows Optimized Desktop.
    • Microsoft’s guidance is to take the processes that produce a well-managed client to build a sustainable desktop strategy, then to define a number of roles (real roles – not departments, or jobs – e.g. mobile, office, anywhere, task, contract/offshore) and select the appropriate distribution strategy (or strategies). To help with this, there is a Windows Optimized Desktop solution accelerator (soon to become the Windows Optimized Desktop Toolkit).

There’s quite a bit more detail in the slides but these notes cover the main points. However you look at it, the architecture for desktop delivery is not that relevant – it’s how it’s managed that counts.

Maintaining a common user profile across different Windows versions

I wish I could take the credit for this, but I can’t: last week one of my colleagues (Brad Mallard) showed me a trick he has for creating a single user profile for multiple Microsoft operating systems. Michael Pietroforte wrote about the different user profile formats for Windows XP and Vista back in 2007 but Brad’s tip takes this a step further…

Using Group Policy Preferences, Brad suggests creating a system variable to record the operating system version for a given client computer (e.g. %osversion%) and assigning it to the computer account. Then in Active Directory Users and Computers (ADUC/dsa.msc), set the user’s profile path to \\servername\sharename\%username%.%osversion%. ADUC will resolve the %username% portion but not the %osversion% part, so what remains will be something like \\bigfileserver\userprofiles\mark.wilson.%osversion%.
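
If you want to prove the concept before setting up the Group Policy Preference, the variable can be set by hand on a couple of test machines (the values here are just examples – any consistent naming convention will do; on Windows XP, setx comes from the Support Tools):

    rem On a Windows 7 client (elevated command prompt)
    setx osversion Win7 /m

    rem On a Windows XP client
    setx osversion WinXP /m

    rem Profile path to enter in ADUC (the %osversion% part is left for the client to resolve):
    rem \\bigfileserver\userprofiles\%username%.%osversion%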

Using this method, one user can hotdesk between several locations with different desktop operating systems (e.g. Windows XP and Windows 7). Each time they log on to a machine with a different operating system, a new profile will be created in a subfolder of their user name. Technically, that’s two profiles – but at least they are in one location for management purposes. Combine this with folder redirection for documents, IE favorites, etc. and it should be possible to present a consistent view between two operating system releases.

Mark Russinovich explains “the machine SID duplication myth”

One of my colleagues just flagged a blog post from Microsoft (ex-SysInternals) Technical Fellow Mark Russinovich that I’d been meaning to read when I had a little more time, in which he discusses “the machine SID duplication myth”. It seems that all of the effort we put into de-duplicating SIDs on Windows NT-based systems (NT, 2000, XP, 2003, Vista, 2008, 7 and 2008 R2) over the years was not really required…

To be honest, I don’t think anyone ever said it was required – just that having multiple machines with the same security identifier sounded like a problem waiting to happen and that generating unique SIDs was best practice.

The full post is worth a read but, in summary, the new best practice is:

“Microsoft’s official policy on SID duplication will also now change and look for Sysprep to be updated in the future to skip SID generation as an option. Note that Sysprep resets other machine-specific state that, if duplicated, can cause problems for certain applications like Windows Server Update Services (WSUS), so Microsoft’s support policy will still require cloned systems to be made unique with Sysprep.”
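
In practice, that means cloned images should still be generalised before deployment – something along these lines on the reference machine (using the default path for Windows 7/Server 2008 R2):

    c:\windows\system32\sysprep\sysprep.exe /generalize /oobe /shutdown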

As you were then…

Windows native boot from VHD roundup

This is the first of several planned posts based on knowledge gained at Tech·Ed last week – but this one is necessarily brief. Mark Minasi, who presented the session that this content is based on, owns the copyright on the materials he presented (although Microsoft still distributed them to delegates). Consequently, I can’t write his session up as fully as I would like; however this post captures some of the key points (along with some narrative of my own) as I see nothing that’s not already in the public domain (and some of which has already been written about on this blog). The value in Mark’s presentation was that it pulled together various items of information into one place and explained it in a way that was simple to follow – consequently I’m not repeating the full details, just the high level overview, with some extra links where I feel they add value (Mark seems like a decent fellow – he’s only trying to protect his income and I suspect the real problem would be if I presented his materials as my own – I’m sure he would understand the fine line I’m attempting to walk here):

  • The session was titled “How Windows Storage is Changing: Everything is going VHD (CLI302)” and that’s pretty spot on – the virtual hard disk (.VHD) file format allows an entire disk partition (but not a whole drive with multiple partitions) to be packaged in a single file complete with folder structure and NTFS permissions: Microsoft’s Storage Server uses .VHD files for iSCSI targets; Windows Backup has been able to perform complete PC backups to .VHD files since Vista; and with Windows 7 we have the ability to natively boot Windows from a VHD file. Just to be clear – this is not client/server virtualisation (as in with a hypervisor) – this is storage virtualisation (presenting the VHD container as a logical volume, stored on a physical disk).
  • To understand native .VHD booting, it’s useful to understand recent changes in the boot process: boot.ini is no more – instead we have a Boot Configuration Data (BCD) store and a system reserved partition (incidentally, that’s the same one that is used for BitLocker, and is automatically created in Windows 7, with no drive letter assigned).
  • Running Windows Backup from the command line with wbadmin.exe requires the use of the -allcritical switch to ensure that the System Reserved partition is backed up.
  • As Mike Kolitz described back in May, access to .VHD file contents from Windows 7 and Server 2008 R2 is provided by a completely new mini-port driver in the storage stack for VHD files. This VHD driver enables requests to files in the VHD to be sent to the host NTFS file system on the physical partition where the VHD file is located. VHD operations can also be performed on a remote share.
  • The steps for creating a .VHD file, attaching (mounting) it, assigning a drive letter and formatting the volume can be found in my previous post on running Windows from a USB flash drive (as well as elsewhere on the ‘net) – and they are sketched out, along with the boot configuration commands, after this list.
  • The diskpart.exe command can be used to view the details of the VHD once mounted (detail disk) and it will be identified as a Msft Virtual Disk SCSI Disk Device.
  • The System Reserved Boot Partition may be populated using the bcdboot.exe command. After this partition has been created, the remainder can be partitioned and formatted, then a pre-configured .VHD can be copied to the second (non-system) partition. After editing the BCD and rebooting, the physical drive will be something like D: or E: (depending on the presence of optical drives) and the .VHD will be C:.
  • There are various methods for creating a pre-configured .VHD, including booting a reference PC from Windows PE and using imagex.exe (from the Windows Automated Installation Kit) to capture the disk contents to a .WIM file, then mounting the target .VHD and deploying the .WIM image to it. Alternatively, there is a SysInternals tool called Disk2vhd.
  • The changes to the BCD are documented in a previous post on this site, but Mark also highlighted the existence of the [locate] parameter instead of specifying a drive manually (James O’Neill uses it in his post on booting from VHD and the joy of BCDEdit).
  • There are GUI tools for editing the BCD, but bcdedit.exe is worth getting to know:

    “The GUI BCDEdit commands are rather like having a 3 metre ladder for a 5 metre wall” … “Step into the light, come to the command line, in the command line there is freedom my friends.”

    [Mark Minasi at TechEd Europe 2009]

  • Booting from VHD is a great feature but it does have its limitations: for instance I can’t use it on my everyday notebook PC because the current release doesn’t support hibernation or BitLocker.
  • To finish up his presentation, Mark demonstrated an unsupported method for installing Windows directly to .VHD: run Windows setup and press shift and F10 to break out into a command prompt; wipe and partition the hard drive, creating and attaching a new .VHD; ignore Windows setup’s protests that it can’t be installed to the .VHD – click the Next button anyway and it should work (although it may be patched in a future release).
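
To pull the diskpart, imagex, bcdboot and bcdedit steps above together, the whole native-boot process looks something like this (the paths, sizes, drive letters and the {guid} placeholder are all examples – bcdedit /copy returns the real GUID to substitute):

    rem Create and attach a new expandable VHD, then partition and format it (inside diskpart)
    diskpart
    create vdisk file=d:\vhd\win7.vhd maximum=30720 type=expandable
    select vdisk file=d:\vhd\win7.vhd
    attach vdisk
    create partition primary
    assign letter=v
    format fs=ntfs quick label=Win7VHD
    exit

    rem Apply a previously captured image (WAIK imagex) to the mounted VHD
    imagex /apply d:\images\win7.wim 1 v:\

    rem Populate the system reserved partition (S: here) with the boot files
    bcdboot v:\windows /s s:

    rem Add a boot entry that points at the VHD, letting Windows locate the file
    bcdedit /copy {current} /d "Windows 7 (VHD boot)"
    bcdedit /set {guid} device vhd=[locate]\vhd\win7.vhd
    bcdedit /set {guid} osdevice vhd=[locate]\vhd\win7.vhd
    bcdedit /set {guid} detecthal on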

Finally, if the contents of this post are interesting, this blog recently featured two guest posts from my friend and colleague Garry Martin that build on the concepts described above: in the first post, Garry described the process for booting Windows 7 from VHD on a Windows XP system; the second went deep into an unsupported, but nevertheless useful, method for booting Windows 7 or Server 2008 R2 from a VHD on removable media… perhaps a USB flash drive? There are also some useful links in Mike Ormond’s post on native VHD booting and Jon Galloway has a whole bunch of tips, even if he is still searching for his virtual machine nirvana.

Tech·Ed Europe 2009

Those who follow me on Twitter may have noticed that I’ve spent the last week at Microsoft’s European technical education conference – Tech·Ed Europe – which was held in Berlin this year.

It was a great week to be in Berlin as it coincided with Germany’s celebrations for the 20th anniversary of the fall of the Berlin Wall (der Mauerfall) and I was lucky enough to be at the Brandenburg Gate, standing in the rain, a couple of hundred metres away from political heavyweights past and present, watching a line of 1000 “dominoes” tumbling to signify the fall of the wall. I don’t want to give the impression that Tech·Ed is just a jolly though – actually it’s far from it and I spent half my weekend travelling to get there, before attending sessions from 9am to around 7pm most days and then networking in the evenings. This was my first Tech·Ed since 2001, for various family and business reasons, and it was both tremendously rewarding and very hard work.

Firstly, I should try and give some indication of the size of the event: more than 7200 people spread over several halls in a convention centre; more than 110 partners in the exhibition hall; hundreds of Microsoft staff and volunteers in the Technical Learning Center; around 600 sessions in something like 20 session rooms – only 21 of which will fit into any one delegate’s personal agenda; a keynote with seating for all 7200 people; catering for everyone too (including the 460 staff); and a lot of walking to/from sessions and around the centre.

So, what sort of content is covered in the sessions? This year Tech·Ed had a mixture of IT Pro and Developer content but over the years it’s been held as separate developer and IT Pro events on consecutive weeks – and, if I go back far enough, there used to be a separate IT Pro conference (the Microsoft Exchange Conference, later renamed IT Forum). This year there didn’t seem to be as much for coders at Tech·Ed, but they have a Professional Developers Conference (PDC) in Los Angeles next week; web developers have their own conference too (MIX); and, if IT management is more your forte, the Microsoft Management Summit (MMS) is intended for you. Microsoft’s description of Tech·Ed is as follows:

Tech·Ed Europe
“Provides developers and IT professionals the most comprehensive technical education across Microsoft’s current and soon-to-release suite of products, solutions and services. This event provides hands-on learning, deep product exploration and opportunities to connect with industry and Microsoft experts one-to-one. If you are developing, deploying, managing, securing and mobilising Microsoft solutions, Tech·Ed Europe is the conference that will help you solve today’s real-world IT challenges and prepare for tomorrow’s innovations.”

This week I attended a wide variety of sessions covering topics ranging from using hacker techniques to aid in IT administration to troubleshooting Windows using SysInternals tools, and from managing and monitoring UNIX and Linux systems using System Center Operations Manager to looking at why the various architectures for desktop delivery don’t matter so much as the way in which they are managed. Meanwhile, colleagues focused on a variety of messaging and collaboration topics, or on directory services. I’m pleased to say that I learned a lot this week. So much indeed that, by Friday lunchtime, I was struggling to take any more in – thankfully one of the benefits of attending the event is a year’s subscription to TechNet Online, giving me access to recorded versions of the sessions.

When I first attended Tech·Ed, back in 1998, my career was only just getting going. These days, I have 15 years industry experience and I now know many of the event organisers, Microsoft staff, and speakers – and one of the reasons is the tremendous networking opportunity that events like this afford. I didn’t spend much time around the trade stands but I did make sure I introduced myself to key speakers whose subject material crosses my own expertise. I also met up with a whole load of people from the community and was able to associate many faces with names – people like Sander Berkouwer and Tamás Lepenye (who I knew from our online interactions but had not previously had the chance to meet in person) as well as Steven Bink (who I first met a couple of years ago, but it’s always good to see him around). But, by far the most fortuitous interaction for me was meeting Microsoft Technical Fellow Mark Russinovich on Friday morning. I was walking into the conference centre and realised that Active Directory expert John Craddock (whom I had shared a taxi with on the way from the airport earlier in the week) was next to me – and then I saw he was with Mark, who is probably the best known Windows operating system internals expert (with the possible exception of Dave Cutler) and I took the opportunity to introduce myself. Mark won’t have a clue who I am (apart from the hopeless groupie who asked him to pose for a picture later in the day) but, nevertheless, I was able to introduce myself. Then, there was the Springboard Community Partei – a real opportunity to meet with international speakers and authors like Mark Minasi, as well as key Microsoft staff like Stephen Rose (Microsoft Springboard), Ward Ralston (Windows Server 2008 R2 Group Product Manager) and Mark Russinovich (although I didn’t actually see him at the party, this video shows he was there) – as well as MVPs like Sander Berkouwer, Aidan Finn and Thomas Lee. These are the events that lead to lasting relationships – and those relationships provide real value in the IT world. Name dropping in a blog post is one thing – but the IT world in which we live is a small place – Aidan is writing a book with Mark Minasi and you never know what opportunities may arise in future.

So, back to the point – Tech·Ed is one of my favourite IT events and I would love to attend it more frequently. At the stage my career has reached I no longer need week-long training courses on technical products, but 75 minute sessions to give an overview of a specific topic area are perfect – and, at around £2000 for a week of technical education and networking opportunity, Tech·Ed is something I’d like to persuade my employer to invest in more frequently…

…I’ll have to wait and see on that, but Tech·Ed 2010 will be held in Berlin again next November – fingers crossed I’ll be one of the attendees.

A quick look at Microsoft Surface

A couple of weeks back I managed to get a close look at a Microsoft Surface table. Although Surface has been around for a while now, it was the first time I’d been “hands on” with one and, considering it’s really a bunch of cameras, and a PC running Windows Vista in a cabinet a bit like a 1980s Space Invaders game, it was actually pretty cool.

One thing I hadn’t appreciated previously is that Surface uses a totally different technology to a multitouch monitor: rather than relying on capacitance, the Surface table is sensitive to anything that reflects or absorbs infrared light. It uses an infrared emitter and a series of cameras to detect light reflected by something on the surface, then processes the image and detects shapes. There’s also an API so that software can decide what to do with the resulting image and a DLP projector to project the user interface on the glass (with an infrared filter so as not to confuse the input system). At the moment, the Surface display is only 1024×768 pixels but that didn’t seem to be restrictive in any way – even with such a physically large display.

In some ways Surface behaves like a touch device and, because it has multiple cameras, it can perform stereoscopic three-dimensional gestures; however, because it lacks direct touch capabilities, there is no concept of a hover/mouse-over. Indeed, the Surface team’s API was taken and extended in the Microsoft .NET Framework version 4 to work with Windows Touch and, at some point in the future, the Surface and Windows Touch APIs will converge.

The Surface technology is unable to accommodate pressure sensitivity directly but the underlying unit is just a PC with USB ports, so peripherals could be used to extend the available applications (e.g. a fingerprint reader, card reader, etc.).

Surface can also recognise the type of object on the glass (e.g. finger, blob, byte tag) and it returns an identifier along with X and Y co-ordinates and orientation. When I placed my hand on the device, it was recognised as five fingers and a blob. Similarly, objects can be given a tag (with a value), allowing for object interaction with the table. Surface is also Bluetooth and Wi-Fi enabled so it’s possible to place a device on the surface and communicate with it, for example copying photos from the surface to a phone, or exchanging assets between two phones via the software running on the table. Finally, because Surface understands the concepts of flick and inertia, it’s possible to write applications that make use of this (such as the demonstration application that allows a globe to be spun on the Surface display, creating a rippled water effect that feels as though you are interacting with the water, simulating gravity, adding sprung connections between items on the display, or making them appear to be magnetic).

One technology that takes this interaction even further (sometimes mistakenly referred to as Surface v2) is Microsoft’s SecondLight, which uses another set of technologies to differentiate between the polarisation properties of light so images may be layered in three dimensions. That has the potential to extend the possibilities of a Surface-like device even further and offer very rich interaction between devices on the Surface.

At present, Surface is only available for commercial use, with a development SKU offering a 5-seat licence for the SDK and the commercial unit priced at £8,500. I’m told that, if a developer can write Windows Presentation Foundation (WPF) code, they can write Surface applications and, because Surface runs WPF or XNA, just as an Xbox or a PC does, it does have the potential for games development.

With touch now a part of the operating system in Windows 7, we should begin to see increasing use of touch technologies, although there is a key difference between Surface and Windows Touch as the vertically mounted or table form factor affects the user interface and device interaction – for example, Surface also detects the direction from which it is being touched and shows the user interface in the correct orientation. In addition, Surface needs to be able to cope with interaction from multiple users with multiple focus points (imagine having multiple mice on a traditional PC!).

My hour with Surface was inspiring. The key takeaways were that this is a multi-touch, multi-user, multi-directional device with advanced object interaction capabilities. Where it has been used in a commercial context (e.g. AT&T stores) it has mostly been a novelty; however there can be business benefits too. In short, before deploying Surface, it’s important to look further than just the hardware costs and the software development costs, considering broader benefits such as brand awareness, increased footfall, etc. Furthermore, because Surface runs Windows, some of the existing assets from another application (e.g. a kiosk) should be fairly simple to port to a new user interface.

I get the feeling that touch is really starting to go somewhere and is about to break out of its niche, finding mainstream computing uses and opening up new possibilities for device interaction. Surface was a research project that caught Bill Gates’ attention; however there are other touch technologies that will build on this and take it forward. With Windows Touch built into the operating system and exciting new developments such as SecondLight, this could be an interesting space to watch over the next couple of years.

Using a Windows System Image backup to transfer a configuration between computers

One of my colleagues left our organisation a couple of weeks ago and his notebook PC was up for grabs (kind of like vultures looking for prey, my manager and I were trying to grab the best bits of his relinquished IT assets…). To be honest, the PC is only marginally better than the one I had already but it did have a slightly faster processor (Intel Core 2 Duo Mobile P8400 vs. T7500), a larger hard disk, and was in better physical condition (I’ll try not to drop this one!). I did need to transfer my configuration to the “new” machine quickly though (i.e. between the start and the end of our team meeting today!) so that my “old” machine could be reallocated to someone in need of a more modern PC.

I could have messed around with user state migration onto a fresh build; however I’m flying out to TechEd Europe at the weekend and I wanted to be sure that I had all my applications working so I tried a different approach. The two computers are similar, but not identical (both Fujitsu-Siemens Lifebooks – one is an S7210 and the other is an S7220) so I decided to try creating a Windows System Image and restoring it onto a different machine, then letting Plug and Play sort out the hardware. It’s a bit messy (with new network adapters etc.) but the theory was sound.
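
Incidentally, the same system image can be created from an elevated command prompt rather than the Backup and Restore GUI – something along these lines, where E: is just an example backup target such as an external drive:

    wbadmin start backup -backupTarget:e: -include:c: -allCritical -quiet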

Not only was the theory sound, but it worked. After booting the “new” machine from the Windows 7 Repair Disc that I was prompted to create at the end of the backup, I restored my system, complete with all applications and data. Plug and Play did indeed identify all of my hardware, with Microsoft Update able to supply a missing display driver (that would have worked too, if I had been online at the time). Windows even managed to reactivate itself as the product key was still valid, so my system is reporting itself as genuine (note that Windows licences remain with individual computers; however in this case both machines were licensed for Windows 7 using a volume licence product key).

It’s important to note that this effectively cloned the machine (yes, I could have used any number of disk imaging products for this, but I was using the out-of-the-box tools) and so I was careful not to have both machines on the network at the same time. Indeed the last step (before passing the “old” machine on to my manager) was to securely erase my data partition, which I did using the cipher command, before booting into the Windows Recovery Environment one more time to run up diskpart and remove all of the disk partitions.
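
For reference, the clean-up boils down to something like this (D: and disk 0 are just examples – it’s worth double-checking with list disk before running anything this destructive):

    rem Wipe the free space on the (already emptied) data partition
    cipher /w:d:\

    rem Then, from the Windows Recovery Environment, remove all of the partitions
    diskpart
    select disk 0
    clean
    exit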

The only remaining hurdle is moving the (so far empty) BitLocker Drive Encryption Partition from its current location in the middle of my hard disk (which was the end of the smaller disk in my old machine) but that should be possible as I haven’t actually encrypted the drive on this PC.

Not bad for a few hours’ work, especially as there was no downtime involved (I was able to use the “old” machine to deliver my presentation whilst the “new” one was being prepared).