Backing up and restoring Adobe Lightroom 2.x on a Mac

Over the last few days, I’ve been rebuilding the MacBook that I use for all my digital photography (a risky thing to do immediately before heading off on a photography workshop) and one of the things I was most concerned about was backing up and restoring my Adobe Lightroom settings, as these are at the heart of my workflow.

I store my images in two places (Lightroom backs them up to one of my Netgear ReadyNAS devices on import) and, on this occasion, I’d also made two extra backups (I really should organise one in the cloud too, although syncing 130GB of images could take some time…).

I also back up the Lightroom catalog each time Lightroom runs (unfortunately the only option is to do this at startup, not shutdown), so that handles all of my keywords, develop settings, etc. What I needed to know was how to back up my preferences and presets – and how to restore everything.

It’s actually quite straightforward – this is how it worked for me – of course, I take no responsibility for anyone else’s backups and, as they say, your mileage may vary. Also, PC users will find the process similar, but the file locations change.
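In essence, the Mac-side process is a copy of two items before the rebuild and a copy back afterwards (with Lightroom closed). This is a minimal sketch, assuming the standard Lightroom 2 locations and a mounted backup volume – substitute your own destination for /Volumes/Backup:

# Back up Lightroom 2 preferences and presets on a Mac
# (paths assume a default Lightroom 2 installation)
BACKUP="/Volumes/Backup/lightroom-settings/$(date +%Y-%m-%d)"
mkdir -p "$BACKUP"

# Preferences (startup options, last-used catalog, etc.)
cp -p ~/Library/Preferences/com.adobe.Lightroom2.plist "$BACKUP/"

# Presets and templates
cp -Rp ~/Library/Application\ Support/Adobe/Lightroom "$BACKUP/"

Restoring is simply the same copy in reverse, onto the rebuilt machine, before Lightroom is launched for the first time.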

I also made sure that the backup and restore were performed at the same release (v2.3); once everything was confirmed working, I updated to the latest version (v2.6).

Checking if a computer supports Intel vPro/Active Management Technology (AMT)

One of my many activities over the last few days has been taking a look at whether my work notebook PC supports the Intel vPro/Active Management Technology (AMT) functionality (it doesn’t seem to).

Intel vPro/AMT adds out-of-band management capabilities to PC hardware, integrated into the CPU, chipset and network card (this animation shows more details) and is also a prerequisite for Citrix XenClient which, at least until Microsoft gets itself in order with a decent client-side virtualisation offering, I was hoping to use for running multiple desktops on a single PC.  Sadly, I don’t seem to have the necessary hardware.

Anyway, thanks to a very useful forum post by Amit Kulkarni, I found that the AMT software development kit (SDK) includes a discovery tool (discovery.exe), which can be used to scan the network for AMT devices.

Unfortunately, vPro/AMT only seems to appear in the high-spec models from most OEMs right now… so, until that changes, I’m stuck with hosted virtualisation solutions.

Removing crapware from my Mac

Over the last couple of days, I’ve been rebuilding my MacBook after an increasing number of “spinning beachballs of death” (the Mac equivalent of a Windows hourglass/doughnut/halo…).  Unfortunately, it’s not just PCs that come supplied with “crapware” – it may only be a couple of items, but my OS X 10.5 installation also includes the Office for Mac 2004 Test Drive and iWork ’08 Trial.  As it happens, I do have a copy of Office for Mac 2008 but I don’t need it on this machine – indeed, the whole reason for wiping clean and starting again was to have a lean, clean system for my photography, with the minimum of unnecessary clutter.

“What’s the problem?”, I hear you say, “isn’t uninstalling an application on a Mac as simple as dragging it to the trash?”  Well, in a word: no. Some apps for OS X are that simple to remove but many leave behind application support and preference files.  Some OS X apps have installers, just as on Windows PCs.

I ran the Remove Office application to remove the Office for Mac Test Drive and, after searching for installed copies of Office, it decided there were none, leaving a Remove Office log.txt file on the desktop with the details of its search:

***************************
Found these items:
OFC2004_TD_FOLDERS: /Applications/Office 2004 for Mac Test Drive

It seems that, if you’ve not attempted to run any of the Test Drive apps (e.g. by opening an Office document), they are not actually installed.  Diane Ross has more details on her blog post on the subject but, basically, it’s safe to drag the Test Drive files and folders to the trash.

With Office for Mac out of the way, I turned my attention to the iWork ’08 Trial.  This does not have an uninstaller – the application files and folders for Keynote, Numbers and Pages can be dragged to the trash – but there is another consideration: there are also some iWork ’08 application support files in /Library/Application Support/ that may be removed (see the sketch below).
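For reference, the whole clean-up amounts to deleting two folders. This is a minimal sketch, assuming the trial’s default locations – check that each path exists on your own system before deleting anything, as rm -rf is unforgiving:

# Remove the iWork '08 trial application folder
sudo rm -rf "/Applications/iWork '08"

# Remove the leftover application support files
sudo rm -rf "/Library/Application Support/iWork '08"

Dragging the same two folders to the trash in Finder achieves exactly the same result.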

These resources might not take up much space on my disk, but I don’t like the idea of remnants of an application hanging around – a clean system is a reliable system.  At least, that’s my experience on Windows and it shouldn’t be any different on a Mac.

Reading EXIF data to find out the number of shutter activations on a Nikon DSLR

A few years ago, I wrote about some digital photography utilities that I use on my Mac.  These days most of my post-processing is handled by Adobe Lightroom (which includes Adobe Camera Raw), with a bit of Photoshop CS4 (using plugins like Noise Ninja) for the high-end stuff but these tools still come in useful from time to time.  Unfortunately, Simple EXIF Viewer doesn’t work with Nikon raw images (.NEF files) and so it’s less useful to me than it once was.

Recently, I bought my wife a DSLR and, as I’m a Nikon user (I have a D700), it made sense that her body should fit my lenses, so I picked up a refurbished Nikon D40 kit from London Camera Exchange.  Whilst the body looked new, I wanted to know how many times the shutter had been activated (DSLR shutter mechanisms have a limited life – about 50,000 actuations for the D40) and the D40’s firmware won’t display this information – although it is captured in the EXIF data for each image.

After some googling, I found a link to Phil Harvey’s ExifTool, a platform-independent library with a command line interface for accessing EXIF data in a variety of image formats. A few seconds later, I had run the exiftool -nikon dsc_0001.nef command (exiftool --? gives help) on a test image and it told me a perfectly respectable shutter count of 67.  For reference, I tried a similar command on some images from my late Father’s Canon EOS 1000D but shutter count was not one of the available metrics – even so, ExifTool provides a wealth of information from a variety of image formats.
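If all you want is the count, rather than the full Nikon maker-note dump, ExifTool can report just that tag (ShutterCount is the tag name ExifTool uses for Nikon bodies that record it):

# Report the shutter count from a single Nikon raw file
exiftool -ShutterCount dsc_0001.nef

# Or check every .NEF file in the current folder at once
exiftool -ShutterCount -ext nef .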

Did you miss TechEd? Here come the UK TechDays

UK TechDays is a week-long series of free events run by Microsoft and technical communities to celebrate and inspire developers, IT professionals and IT managers to get more from Microsoft technology.  Over 5 days (12th to 16th April 2010), Microsoft is running 10 all-day events covering the latest technology releases, with topics including Microsoft Visual Studio 2010, Office 2010, virtualisation, Silverlight, Windows 7 and Windows Server 2008 R2, SQL Server 2008 R2, Windows deployment and an IT Manager day.  In addition to the main events, held in west London cinemas, various user groups will be organising fringe events (Mark Parris is working hard on something for the Windows Server User Group… more details to follow).

Full event details (and registration links) are available on the UK TechDays site but here’s a brief rundown of the main attractions.

Developer Days at Fulham Vue Cinema:

  • Monday, 12 April 2010: Microsoft Visual Studio 2010 launch - a path to big ideas. This launch event is aimed at development managers, heads of development and software architects who want to hear how Visual Studio 2010 can help build better applications whilst taking advantage of great integration with other key technologies.  (Day 2 will cover the technical in-depth sessions aimed at developers.)
  • Tuesday, 13 April 2010: Getting started with Microsoft .NET Framework 4 and Microsoft Visual Studio 2010. Microsoft and industry experts will share their perspectives on the top new and useful features across the core programming languages, framework and tooling – such as ASP.NET MVC, parallel programming and Entity Framework 4 – and the offerings around rich client and web development experiences.
  • Wednesday, 14 April 2010: The essential MIX – exploring the art and science of creating great user experiences. Learn about the next generation ASP.NET and Silverlight platforms.
  • Thursday, 15 April 2010: Best of breed client applications on Microsoft Windows 7. Windows 7 adoption is moving at a startling pace. In this demo-driven day, Microsoft will look at the developer landscape around Windows 7 – the operating system that applications will be running on through the new decade.
  • Friday, 16 April 2010: Windows Phone day. A practical day of detailed Windows Phone 7 Series development sessions covering the new Windows Phone specification, application standards and services.

IT Professional and IT Manager Days at Shepherds Bush Vue Cinema:

  • Monday, 12 April 2010: Virtualisation summit – From the desktop to the datacentre. Designed to provide an understanding of the key products and technologies enabling seamless physical and virtual management, interoperable tools, cost-savings and value.
  • Tuesday, 13 April 2010: Office 2010 – Experience the next wave in business productivity. The event will cover how the improvements to Office, SharePoint, Exchange, Project and Visio will provide a practical platform that will allow IT professionals to not only solve problems and deliver business value, but also demonstrate this value to IT stakeholders.
  • Wednesday, 14 April 2010: Windows 7 and Windows Server 2008 R2 – deployment made easy. This event will provide an understanding of key tools including the new Microsoft Deployment Toolkit 2010, Windows Deployment Services and the Application Compatibility Toolkit, along with considerations for deploying Windows Server 2008 R2 and migrating server roles.
  • Thursday, 15 April 2010: SQL Server 2008 R2 – The information platform. Highlighting the new capabilities of the latest SQL Server release, as well as diving into specific topics such as consolidating SQL Server databases, tips and techniques for performance monitoring and tuning, and a look at the newly released cloud platform (SQL Azure).
  • Friday, 16 April 2010 (IT Managers): Looking ahead, keeping the boss happy and raising the profile of IT.  IT Managers have more and more responsibilities to drive and support the direction of the business. Explore the various trends and technologies that can bring IT to the top table, from score-carding to data governance and cloud computing.

I’ve been waiting for this announcement for a few weeks now, and places are (very) limited so, if these topics are of interest to you, I suggest registering quickly!

A quick introduction to Dell PowerEdge server naming

Last year I wrote a short blog post looking at HP ProLiant servers and how the model line-up looks.  I haven’t looked at IBM System x for a few years but last week I got the chance to sit down and have a look at the current Dell PowerEdge portfolio.

Just as for HP, there is some logic behind Dell’s server names, although the scheme is fairly new and some older servers (e.g. the PowerEdge 2950) do not fit it (a worked example follows the list):

  • The first character is a letter indicating the chassis type: T for tower; R for rack; M for modular (blade).
  • The next digit indicates the market segment for which the server is destined: 1, 2 and 3 are single-socket servers for value, medium and high-end markets respectively; 4 and 5 are two-socket value servers, with 6 for medium and 7 for high-end; 8 indicates a special server (for example, one which can be configured as a 2- or a 4-socket machine); 9 indicates a 4-socket server (Dell does not currently compete in the 8-way marketplace).
  • The next digit indicates the generation number (0 for 10th, 1 for 11th, 2 for 12th generation). With a new generation every couple of years or so, resetting the clock to zero should give Dell around 20 years before they need to revisit this decision!
  • Finally, Intel servers end with 0 whilst AMD servers end with 5.
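To make the scheme concrete, here’s a quick throwaway decoder – a sketch of the rules above in shell script form (decode_poweredge is my own hypothetical helper, not a Dell tool):

# Hypothetical helper to decode a PowerEdge model name (new scheme only)
decode_poweredge() {
  local model=$1
  case ${model:0:1} in
    T) echo "Chassis: tower" ;;
    R) echo "Chassis: rack" ;;
    M) echo "Chassis: modular (blade)" ;;
    *) echo "Older naming scheme (e.g. 2950)"; return 1 ;;
  esac
  case ${model:1:1} in
    [1-3]) echo "Sockets: one (value/medium/high-end = 1/2/3)" ;;
    [4-7]) echo "Sockets: two (value = 4/5, medium = 6, high-end = 7)" ;;
    8)     echo "Sockets: special (2- or 4-socket configurations)" ;;
    9)     echo "Sockets: four" ;;
  esac
  echo "Generation: 1${model:2:1}th"   # 0 = 10th, 1 = 11th, 2 = 12th
  case ${model:3:1} in
    0) echo "CPU vendor: Intel" ;;
    5) echo "CPU vendor: AMD" ;;
  esac
}

decode_poweredge R710   # rack, high-end two-socket, 11th generation, Intel

So, for example, an R905 decodes as a rack-mounted, four-socket, 10th-generation AMD server.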

There is another complication though – those massive cloud datacentres operated by Microsoft, Amazon, et al. use custom servers – and some of them come from Dell.  In that scenario, the custom servers don’t need to be resilient (the cloud provides the resilience) but Dell has now brought similar servers to market for those who want specialist, high-volume servers, albeit with a slightly lower MTBF than standard PowerEdge boxes.  So, for example: the C1100 is a 2-way, 1U server that can take up to 18 DIMMs for memory-intensive applications; the C2100 is a 2-way, 2U server with room for 12 disks (and 18 DIMMs); whilst the C6100 crams four 2-way blades into a 2U enclosure, with room for 12 DIMMs and up to 24 2.5″ disks!

A new e-mail etiquette?

I’ve just got off a late-night train home from London, where I spotted someone’s discarded Evening Standard featuring an interesting article by Philip Delves Broughton, examining how the way in which we deal with e-mail reveals our professional characters. The full article makes for interesting reading but I thought I’d quote from the side-panel on the new e-mail etiquette here:

  • After the initial sales pitch, follow up by e-mail and phone.
  • Beyond that, pestering will make you seem needy.
  • Should you be looking for a job and get no response, reframe the pitch with something that will entice your potential employer – a fact about their competitor, an article of interest. Banging on about yourself is bad form.
  • A voicemail not answered is better than an e-mail ignored.
  • If you are swamped with e-mails, and don’t want to appear rude, consider an e-mail template that says no nicely.
  • However, do not resort to the standard, unhelpful Out of Office reply. It effectively says: “I’m not here, so your problems can go to hell.”

Writing SOLID code

I’m not a coder: the last time I wrote any functioning code was at Uni’ in the early 1990s.  I can adapt other people’s scripts and, given enough time, write my own. I can knock up a bit of XHTML and CSS but writing real applications?  Nope.  Not my game.

Every now and again though, I come up against development topics and I do find them interesting, if a little baffling.  I guess that’s how devs feel when I talk infrastructure.

From 2004 to 2005, I worked for a company called Conchango (now part of EMC Consulting) – I had a great time there, but the company’s focus had shifted from infrastructure to web design and Java/.NET development (which, by the way, they were rather good at – with an impressive client list).  Whilst I was there, it seemed that all I heard about was “agile” or “XP” (extreme programming… nothing to do with Windows XP) – approaches that were taking the programming world by storm at the time.

Then, a few weeks ago, I had a similar experience at an Edge User Group meeting, where Ian Cooper (a C# MVP) was talking about the SOLID principles.  Not being a coder, most of this went right over my head (I’m sure it would make perfect sense to my old mates from Uni’ who did follow a programming route, like Steve Knight), but it was interesting nonetheless – and, in common with some of the stuff I heard about in my days at Conchango, I’m sure the basic principles of what can go wrong with software projects could be applied to my infrastructure projects (with a little tweaking perhaps):

  • Rigidity – difficult to change.
  • Fragility – change one thing, breaks something else.
  • Immobility – e.g. one part of the solution is welded into the application.
  • Viscosity – wading through treacle, maintaining someone else’s software.
  • Needless complexity – why did we do it that way?
  • Needless repetition – Ctrl+C Ctrl+V is not an ideal programming paradigm!
  • Opacity – it made sense to original developer… but not now!

Because of these issues, maintenance quickly becomes an issue in software development and Robert C Martin (@unclebobmartin – who had previously led the group that created Agile software development from Extreme programming techniques) codified the SOLID principles in his Agile Principles, Patterns and Practices in C# book (there is also a Hanselminutes podcast where Scott Hanselman and “Uncle Bob” Martin discuss the SOLID principles and a follow-up where they discuss the relevance of SOLID).  These principles are:

  • Single responsibility principle
  • Open/closed principle
  • Liskov substitution principle
  • Interface segregation principle
  • Dependency inversion principle

This is the point where my brain starts to hurt, but please bear with me as I attempt to explain the rest of the contents of Ian’s presentation (or listen to the Hanselminutes podcast)!

The single responsibility principle

This principle states that a class should have only one reason to change.

Each responsibility is an axis of change and, when the requirements change, that change will be manifested through a change in responsibility among the classes. If a class assumes more than one responsibility, that class will have more than one reason to change, hence single responsibility.

Applying this principle gives a developer a single concept to code for (also known as separation of concerns) so, for example, instead of having a GUI to display a purchase order, this may be separated into GUI, controller, and purchase order: the controller’s function is to get the data from the appropriate place, the GUI is only concerned with displaying that data, and the purchase order is not concerned with how it is displayed.

The open/closed principle

This principle states that software entities (classes, modules, functions, etc.) should be open for extension but closed for modification.

The thinking here is that, when a single change to a program results in a cascade of changes to dependent modules, the design becomes rigid but, if the open/closed principle is applied well, further changes are achieved by adding new code, not by changing old code that already works.

Some may think that it’s impossible to be both open to extension and closed to modification: the key here is abstraction and composition.

For example, a financial model may have different rounding rules for different markets.  This can be implemented with local rounding rules rather than changing the model each time the model is applied to a different market.

The Liskov substitution principle

This principle (attributed to Barbara Liskov) states that subtypes must be substitutable for their base types.  Unfortunately, attempts to fix Liskov substitution problems often result in violations of the open/closed principle but, in essence, the validity of a model can be expressed only in terms of its clients.  For example, if there is a type called Bird (which has wings and can fly), what happens to a penguin or an emu when an attempt is made to implement the fly method? We need to be able to call fly for a penguin and handle it appropriately, so there are effectively two solutions: change the type hierarchy; or refactor the type to express it differently – fly may become move, or we could have a flightless bird type and a running bird type.

The interface segregation principle

The interface segregation principle says that clients should not be forced to depend on methods they do not use.

Effectively, this means that clients should not be affected by changes that don’t concern them (i.e. fat types couple disparate parts of the application).  In essence, each interface should have the smallest set of features that meets client requirements, which may mean creating multiple interfaces within a class.

The dependency inversion principle

The dependency inversion principle states that high-level modules should not depend on low-level modules – both should depend on abstractions. In addition, abstractions should not depend on details; details should depend upon abstractions.  This is sometimes known as the Hollywood principle (“Don’t call us, we’ll call you”).  So, where is the inversion?  If a class structure is considered as a tree with the classes at the leaves and abstractions at the trunk, we depend on the trunk, not the leaves – effectively inverting the tree and grasping it by the roots (inversion of control).

Summing it up

I hope I’ve understood Ian’s presentation enough to do it justice here but to sum it up: the SOLID principles help to match computer science concepts such as cohesion and polymorphism to actual development in practice. Or, for dummies like me, if you write your code according to these principles, it can avoid problems later when the inevitable changes need to be made.

As an infrastructure guy, I don’t fully understand the details, but I can grasp the high level concepts and see that it’s not really so different to my world.  Look at a desktop delivery architecture for a large enterprise:

  • We abstract data from user objects, applications, operating system and hardware (which is possibly virtualised) and this gives us flexibility to extend (or substitute) parts of the infrastructure (well, that’s the idea anyway).
  • We only include those components that we actually want to use (no games, or other consumer functionality). 
  • We construct servers with redundancy then build layers of software to construct service platforms (which is a kind of dependency inversion). 

OK, so the infrastructure links may seem tenuous, but the principle is sound.  Sometimes it’s good for us infrastructure guys to take a look at how developers work…

Safer Internet Day: Educating parents on Internet safety for their children

A few weeks ago, I mentioned that today is European Safer Internet Day and, here in the UK a number of organisations are working with the Child Exploitation and Online Protection centre (CEOP) to educate parents and children in safe use of the Internet.  I don’t work for Microsoft but, as an MVP, I was invited to join in and tonight I’ll be delivering a session to parents at my son’s school, using Microsoft’s presentation deck (although it has to be said that this is not a marketing deck – it’s full of real-world examples and practical advice about protecting children and young people from the specific dangers the Internet can pose, whilst allowing them to make full use of the ‘net’s many benefits: turning it off is not the answer).

The BBC’s Rory Cellan-Jones has reported on some of the activities for Safer Internet Day, although the Open Rights Group’s suggestion that this is all about scoring a publicity hit at little cost is a little cynical – Microsoft has a social responsibility role to play and, by working with CEOP to produce an IE8 browser add-in, the UK subsidiary’s activities are laudable.  If other browser makers want to follow suit, they can also work with CEOP (ditto for the social networking sites that have yet to incorporate the Report Abuse button).  Indeed, quoting from James O’Neill’s post this morning:

“We are part of the UK Council for Child Internet Safety (UKCCIS) and Gordon [Frazer – Microsoft UK MD and VP Microsoft International]’s mail also said ‘This year as part of the ‘Click Clever Click Safe’ campaign UKCCIS will be launching a new digital safety code for children – ‘Zip It, Block It, Flag It’. Over 100 Microsoft volunteers will be out in schools in the UK teaching young people and parents alike about child online safety and helping build public awareness for simple safety tips.

Our volunteering activities today mark our strong commitment to child online safety. Online safety is not only core to our business, as exemplified by particular features in Internet Explorer 8 (IE8) and our work in developing the Microsoft Child Exploitation Tracking System (CETS) which helps law enforcement officials collaborate and share information with other police services to manage child protection cases, but it is also an issue that our employees, many parents themselves, take very seriously. As a company we put a great deal of faith in our technology, however, we are also aware that the tools we provide have to be used responsibly.”

Anyway, I digress – part of the presentation I’ll be giving this evening will include a fact sheet, produced by Microsoft, that I’ll leave with parents and I’d like to repeat some of the advice it contains here (with a few edits of my own…).

Safety Considerations

The Internet is a fantastic resource for young people but we must remember that, just as in the real world, there are potential dangers to consider:

  • Control – Personal information can be easily accessed if it is posted online. Consider what information about your child someone could access online.
  • Contact – Paedophiles use the Internet to meet young people and build up a relationship.  This often starts in a public environment such as a chat room or online game, with trust built up before the contact becomes an online friend for 1-1 conversations.
  • Cyberbullying – Other people may make use of technology to bully a young person 24/7.  By using online technology, a bully can gain an instant and wide audience for their bullying. Cyberbullying can involve threats and intimidation as well as harassment and peer rejection.
  • Content – The Internet can contain inappropriate images of violence and pornography that you might be unhappy for your child to have access to.

Top Tips for Parents

These simple rules can help to keep children safe:

  • Keep your PC in an open space where possible to encourage communication.
  • Discuss the programs your children use.
  • Keep communication open with regards to who they are chatting to online.
  • Discuss their list of contacts and check they know all those they have accepted as friends.
  • Consider using the same technology so you can understand how it works.
  • Talk to your children about keeping their information and photos private using privacy settings on sites such as Bebo and Facebook.
  • Teach your children what personal information is and that they shouldn’t share it online with people they don’t know.
  • Make use of parental controls where available. These can allow you to control the amount of time your children are online, the sites they can access and the people they can talk to. Controls are available for many products including Windows (Vista and 7), Mac OS X, Xbox and Windows Live (Family Safety), and more technical users might consider using an alternative DNS provider such as OpenDNS.

Some useful links include:

How to Get Help

For Young People:

For Adults:

  • Adults can speak to the Samaritans, who provide confidential emotional support for people in emotional distress. If you are worried, feel upset or confused and just want to talk, you can email the Samaritans or phone 08457 90 90 90.

I forgot that presenting at a school with which I have an association means that some of the people in the audience are my friends (blurring my personal/professional boundary…) but hey, there are some important messages at stake here.  If all goes well tonight, I’ll be contacting other schools in the area to do something similar.

[Updated 24 November 2014: CBBC Stay Safe link updated; Metropolitan Police link added]

Installing Windows from a network server without Windows Deployment Services

I’d like to start this post with a statement:

Windows Deployment Services (WDS) is a useful role in Windows Server 2008 R2.  It’s free (to licensed Windows users), supports multicasting, and is a perfectly good method of pushing Windows images to clients…

Unfortunately that statement has a caveat:

… but it needs to be installed on an Active Directory-member computer.

For some, that’s a non-starter.  And sometimes, you just want a quick and dirty solution.

I have a small dedicated server at home to run Active Directory along with basic network services (DNS, DHCP, etc.) for my home IT.  I also run Philippe Jounin’s excellent TFTP Daemon (service edition) on it in order to support image loads on my Cisco 7940 IP Phone.

In order to rebuild the Hyper-V server that I use for infrastructure test and development, I wanted to boot across the network and install Windows Server 2008 R2 – and a few days ago I found Mark Kubacki’s post about TFTPd32 and DHCP Server – Windows Deployment Services without WDS. Perfect!  No need to install another role on my little Atom-powered server – particularly as, once this server is built, I’ll probably install WDS on it to deploy images to my various test virtual machines!

So, this is the process – with thanks to Mark Kubacki, and to Ryan T Adams (who wrote about installing Vista without a CD drive using TFTP – for instance, installing Windows on a netbook) – who were gracious enough to blog about their experiences and give me something to build upon:

  1. Download tftpboot.exe from Ryan T Adams’ site and run it to extract the contents to a suitable hard drive location (i.e. the TFTP root folder).  Unfortunately, you probably won’t need most of this 154MB download (more on that in a moment) but it will get you started.
  2. Start tftpd32.exe (or copy the files to your TFTP root, if you are already running a TFTP service, as I was) and add tftpd32.exe (or tftpd32_svc.exe) as a Windows Firewall exception (you could just disable the firewall but I don’t recommend that approach).
  3. Either set TFTPD32 to act as a DHCP server and specify the boot file options (as Ryan describes), or configure DHCP options 066 and 067 (boot server host name and boot file name) on another DHCP server (Mark shows how to do this for the Windows DHCP Server role – see the example after this list) using the IP address of the TFTP server and the boot file name of boot\pxeboot.com.
  4. Make sure that the TFTP server is set to include PXE capability in the advanced TFTP options and that its DHCP server capability is turned off if you are using another DHCP server.
  5. Restart the TFTP Server (or service) to pick up the configuration changes.
  6. Boot a computer (or virtual machine) from its network card, press F12 when prompted and wait for Windows PE to load, then map a drive to another machine on the network which is sharing the Windows media (I use Slysoft Virtual Clone Drive to mount an operating system ISO file and I’ve shared the virtual drive).
  7. Switch to the newly mapped drive and type setup.exe to run Windows Setup.
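For step 3, if you’re using the Windows DHCP Server role, the two options can also be set from a command prompt. This is a sketch assuming a scope of 192.168.1.0 and a TFTP server at 192.168.1.10 – substitute your own scope and server addresses:

rem Option 066: boot server host name (the TFTP server's IP address)
netsh dhcp server scope 192.168.1.0 set optionvalue 066 STRING 192.168.1.10

rem Option 067: boot file name
netsh dhcp server scope 192.168.1.0 set optionvalue 067 STRING boot\pxeboot.com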

Unfortunately, the version of the Windows Preinstallation Environment (Windows PE) that Ryan has supplied in tftpboot.exe is a 32-bit version (of Windows PE 2.0, I think).  When I tried to use this to install Windows Server 2008 R2 (which is 64-bit only), I was greeted with the following message:

This version of Z:\setup.exe is not compatible with the version of Windows you’re running.  Check your computer’s system information to see whether you need a x86 (32-bit) or x64 (64-bit) version of the program, and then contact the software publisher.

I needed a 64-bit version of Windows PE.  No problem.  That’s included in the Windows Automated Installation Kit (WAIK), so I overwrote Ryan’s winpe.wim with the one from %programfiles%\Windows AIK\Tools\PETools\amd64, and restarted the computer I wanted to build.  This time Windows Setup ran with no issues and Windows Server was installed successfully.
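For anyone following along, the swap is a single file copy – assuming a default WAIK installation and, hypothetically, C:\tftproot\boot as the folder holding the winpe.wim from Ryan’s package (adjust to wherever your TFTP root actually lives):

copy "%programfiles%\Windows AIK\Tools\PETools\amd64\winpe.wim" C:\tftproot\boot\winpe.wim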

Even though I used TFTPD32, this method could be used to install Windows from just about any TFTP server (it could even be running on a totally different operating system, I guess), or even to load another WIM file (i.e. not Windows PE) from a network boot. I’m sure, given more time, I could come up with all sorts of scenarios (boot Windows directly from the network?) but, for now, I’ll stick to using this method as a WDS replacement.