Useful Links February 2010

This content is 15 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A list of items I’ve come across recently that I found potentially useful, interesting, or just plain funny:

Cleaning my DSLR’s sensor… the quick (and inexpensive) way

Right now, I’m attending a photography workshop in North Wales, learning a bit more about digital photographic imaging. It’s been a good experience so far but, yesterday afternoon, I experienced a small disaster: not only dust but a tiny hair had appeared on all of the images I took, indicating that I had some sort of debris on my sensor (actually, it’s on the anti-aliasing filter, not the sensor, but that’s being pedantic…).

Being in the middle of the Snowdonia National Park (albeit in overcast/wet weather) and on a course where I would take a lot of photos, this was not exactly welcome and I feared I’d need a costly professional sensor clean (after a weekend of creating images with hair on them). No-one in the class had any sensor cleaning swabs (not that I’ve ever used them, and I would have been a little nervous too on my still-in-warranty Nikon D700) but, luckily, one of the guys passed me an air blower and said “try this – but make sure you hold the camera body face down as you use it!”.

With the mirror locked up, I puffed some air around inside the body (it’s important not to use compressed air for this) and took a reference image – thankfully the debris was gone (and, because the front of the camera was facing down, it should have fallen out, not gone further back into the camera).

I breathed a big sigh of relief and thanked my fellow classmate. In just over a week it’s the Focus on Imaging exhibition – hopefully I’ll get along to it and one of the items on my shopping list will be a Giottos Rocket Air Blower.

Backing up and restoring Adobe Lightroom 2.x on a Mac

Over the last few days, I’ve been rebuilding the MacBook that I use for all my digital photography (which is a pretty risky thing to do immediately before heading off on a photography workshop) and one of the things I was pretty concerned about was backing up and restoring my Adobe Lightroom settings as these are at the heart of my workflow.

I store my images in two places (Lightroom backs them up to one of my Netgear ReadyNAS devices on import) and, on this occasion, I’d also made two extra backups (I really should organise one in the cloud too, although syncing 130GB of images could take some time…).

I also backup the Lightroom catalog each time Lightroom runs (unfortunately the only option is to do this at startup, not shutdown), so that handles all of my keywords, develop settings, etc. What I needed to know was how to backup my preferences and presets – and how to restore everything.

It’s actually quite straightforward – this is how it worked for me – of course, I take no responsibility for anyone else’s backups and, as they say, your mileage may vary.  Also, PC users will find the process similar, but the file locations change.
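
In outline: the catalog is already handled by Lightroom’s own backup, so what’s left is the preferences file and the presets folder. As a rough illustration (a sketch, not gospel – the paths are the standard Lightroom 2 locations on a Mac as far as I can tell, and the backup destination is arbitrary, so verify everything on your own system first), here’s the backup half in Python:

import os
import shutil

home = os.path.expanduser("~")
backup_dir = os.path.join(home, "Desktop", "lightroom-backup")  # change to suit

# Standard Lightroom 2 locations on a Mac (verify on your own system):
sources = [
    # Preferences
    os.path.join(home, "Library/Preferences/com.adobe.Lightroom2.plist"),
    # Presets and templates
    os.path.join(home, "Library/Application Support/Adobe/Lightroom"),
]

if not os.path.isdir(backup_dir):
    os.makedirs(backup_dir)

for src in sources:
    dest = os.path.join(backup_dir, os.path.basename(src))
    if os.path.isdir(src):
        shutil.copytree(src, dest)  # the presets folder (fails if dest exists)
    elif os.path.isfile(src):
        shutil.copy2(src, dest)     # the preferences .plist
    else:
        print("Not found: %s" % src)

Restoring is simply the reverse: with Lightroom closed, copy the files back to the same locations before launching it for the first time.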

I also made sure that the backups and restores were done at the same release (v2.3) but, once I was sure everything was working, I updated to the latest version (v2.6).

Checking if a computer supports Intel vPro/Active Management Technology (AMT)

One of my many activities over the last few days has been taking a look at whether my work notebook PC supports the Intel vPro/Active Management Technology (AMT) functionality (it doesn’t seem to).

Intel vPro/AMT adds out-of-band management capabilities to PC hardware, integrated into the CPU, chipset and network card (this animation shows more details) and is also a prerequisite for Citrix XenClient which, at least until Microsoft gets itself in order with a decent client-side virtualisation solution, I was hoping to use as a solution for running multiple desktops on a single PC.  Sadly, I don’t seem to have the necessary hardware.

Anyway, thanks to a very useful forum post by Amit Kulkarni, I found that there is a tool to check for the presence of AMT – the AMT software development kit (SDK) includes a discovery tool (discovery.exe), which can be used to scan the network for AMT devices.
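
If you don’t have the SDK to hand, there’s also a crude check that works because AMT exposes its web interface on TCP ports 16992 (HTTP) and 16993 (HTTPS). This little Python sketch (my own, not Intel’s – and a closed port doesn’t prove the hardware is missing, as AMT may simply be disabled or unprovisioned) just tests whether anything is listening:

import socket

AMT_PORTS = (16992, 16993)  # Intel AMT web interface (HTTP/HTTPS)

def check_amt(host, timeout=2.0):
    """Return the list of AMT ports accepting TCP connections on host."""
    open_ports = []
    for port in AMT_PORTS:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((host, port))
            open_ports.append(port)
        except socket.error:
            pass
        finally:
            s.close()
    return open_ports

host = "192.168.0.10"  # placeholder: substitute the address of the PC to test
ports = check_amt(host)
if ports:
    print("%s may be AMT-capable (listening on %s)" % (host, ports))
else:
    print("No response on the AMT ports from %s" % host)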

Unfortunately, vPro/AMT only seems to feature in most OEMs’ high-spec models right now… so, until that changes, I’m stuck with hosted virtualisation solutions.

Removing crapware from my Mac

Over the last couple of days, I’ve been rebuilding my MacBook after an increasing number of “spinning beachballs of death” (the Mac equivalent of a Windows hourglass/doughnut/halo…).  Unfortunately, it’s not just PCs that come supplied with “crapware” – it may only be a couple of items but my OS X 10.5 installation also includes the Office for Mac 2004 Test Drive and iWork ’08 Trial.  As it happens, I do have a copy of Office for Mac 2008 but I don’t need it on this machine – indeed, the whole reason for wiping clean and starting again was to have a lean, clean system for my photography, with the minimum of unnecessary clutter.

“What’s the problem?”, I hear you say, “isn’t uninstalling an application on a Mac as simple as dragging it to the trash?”  Well, in a word: no. Some apps for OS X are that simple to remove but many leave behind application support and preference files.  Some OS X apps have installers, just as on Windows PCs.

I ran the Remove Office application to remove the Office for Mac Test Drive and, after searching for installed copies of Office, it decided there were none, leaving a Remove Office log.txt file on the desktop with the details of its search:

***************************
Found these items:
OFC2004_TD_FOLDERS: /Applications/Office 2004 for Mac Test Drive

It seems that, if you’ve not attempted to run any of the Test Drive apps (e.g. by opening an Office document), they are not actually installed.  Diane Ross has more details on her blog post on the subject but, basically, it’s safe to drag the Test Drive files and folders to the trash.

With Office for Mac out of the way, I turned my attention to the iWork ’08 Trial.  This does not have an uninstaller – the application files and folders for Keynote, Numbers and Pages can be dragged to the trash but there is another consideration – there are some iWork ’08 application support files in /Library/Application Support/ that may be removed too.
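
Rather than hunting around manually, a quick script can list what’s left behind. This is a read-only sketch – the paths are my best guess at where the iWork ’08 trial puts things, so check before deleting anything:

import os

# Likely iWork '08 trial remnants (my best guess -- verify before deleting):
candidates = [
    "/Applications/iWork '08",
    "/Library/Application Support/iWork '08",
]

for path in candidates:
    status = "found:" if os.path.exists(path) else "not present:"
    print("%-12s %s" % (status, path))

Nothing here deletes anything – once I was happy with the list, the trash did the rest.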

These resources might not be taking much space on my disk, but I don’t like the idea of remnants of an application hanging around – a clean system is a reliable system.  At least, that’s my experience on Windows and it shouldn’t be any different on a Mac.

Reading EXIF data to find out the number of shutter activations on a Nikon DSLR

A few years ago, I wrote about some digital photography utilities that I use on my Mac.  These days most of my post-processing is handled by Adobe Lightroom (which includes Adobe Camera Raw), with a bit of Photoshop CS4 (using plugins like Noise Ninja) for the high-end stuff but these tools still come in useful from time to time.  Unfortunately, Simple EXIF Viewer doesn’t work with Nikon raw images (.NEF files) and so it’s less useful to me than it once was.

Recently, I bought my wife a DSLR and, as I’m a Nikon user (I have a D700), it made sense that her body should fit my lenses so I picked up a Nikon refurbished D40 kit from London Camera Exchange.  Whilst the body looked new, I wanted to know how many times the shutter had been activated (DSLR shutter mechanisms have a limited life – about 50,000 for the D40) and the D40’s firmware won’t display this information – although it is captured in the EXIF data for each image.

After some googling, I found a link to Phil Harvey’s ExifTool, a platform-independent library with a command line interface for accessing EXIF data in a variety of image formats. A few seconds later, I had run the exiftool -nikon dsc_0001.nef command (exiftool --? gives help) on a test image and it told me a perfectly respectable shutter count of 67.  For reference, I tried a similar command on some images from my late Father’s Canon EOS 1000D but shutter count was not one of the available metrics – even so, ExifTool provides a wealth of information from a variety of image formats.
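
For anyone who wants to pull the count out programmatically (to track shutter wear over time, say), ExifTool is easy to drive from a script. This sketch shells out to the exiftool command (which needs to be installed and on the path) and assumes the tag is called ShutterCount, as it is on my Nikon files:

import subprocess

def shutter_count(image_path):
    """Return the shutter count recorded in image_path, or None if absent."""
    # -s3 prints the tag value alone, without the tag name
    output = subprocess.check_output(
        ["exiftool", "-ShutterCount", "-s3", image_path]
    ).decode().strip()
    return int(output) if output else None

print(shutter_count("dsc_0001.nef"))  # e.g. 67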

Did you miss TechEd? Here come the UK TechDays

UK Tech Days is a week-long series of free events run by Microsoft and technical communities to celebrate and inspire developers, IT professionals and IT managers to get more from Microsoft technology.  Over 5 days (12th to 16th April 2010), Microsoft is running 10 all-day events covering the latest technology releases, with topics including Microsoft Visual Studio 2010, Office 2010, virtualisation, Silverlight, Windows 7 and Windows Server 2008 R2, SQL Server 2008 R2, Windows deployment and an IT Manager day.  In addition to the main events, held at Vue cinemas in west London, various user groups will be organising fringe events (Mark Parris is working hard on something for the Windows Server User Group… more details to follow).

Full event details (and registration links) are available on the UK TechDays site but here’s a brief rundown of the main attractions.

Developer Days at Fulham Vue Cinema:

  • Monday, 12 April 2010: Microsoft Visual Studio 2010 launch - a path to big ideas. This launch event is aimed at development managers, heads of development and software architects who want to hear how Visual Studio 2010 can help build better applications whilst taking advantage of great integration with other key technologies.  (Day 2 will cover the technical in-depth sessions aimed at developers.)
  • Tuesday, 13 April 2010: Getting started with Microsoft .NET Framework 4 and Microsoft Visual Studio 2010. Microsoft and industry experts will share their perspectives on the top new and useful features in the core programming languages, framework and tooling – such as ASP.NET MVC, parallel programming and Entity Framework 4 – and the offerings around rich client and web development experiences.
  • Wednesday, 14 April 2010: The essential MIX – exploring the art and science of creating great user experiences. Learn about the next generation ASP.NET and Silverlight platforms.
  • Thursday, 15 April 2010: Best of breed client applications on Microsoft Windows 7. Windows 7 adoption is moving at a startling pace. In this demo-driven day, Microsoft will look at the developer landscape around Windows 7 – the operating system that applications will be running on through the new decade.
  • Friday, 16 April 2010: Windows Phone day. A practical day of detailed Windows Phone 7 Series development sessions covering the new Windows Phone specification, application standards and services.

IT Professional and IT Manager Days at Shepherd’s Bush Vue Cinema:

  • Monday, 12 April 2010: Virtualisation summit – From the desktop to the datacentre. Designed to provide an understanding of the key products and technologies enabling seamless physical and virtual management, interoperable tools, cost-savings and value.
  • Tuesday, 13 April 2010: Office 2010 – Experience the next wave in business productivity. The event will cover how the improvements to Office, SharePoint, Exchange, Project and Visio will provide a practical platform that will allow IT professionals to not only solve problems and deliver business value, but also demonstrate this value to IT stakeholders.
  • Wednesday, 14 April 2010: Windows 7 and Windows Server 2008 R2 – deployment made easy. This event will provide an understanding of key tools including the new Microsoft Deployment Toolkit 2010, Windows Deployment Services and the Application Compatibility Toolkit, along with considerations for deploying Windows Server 2008 R2 and migrating server roles.
  • Thursday, 15 April 2010: SQL Server 2008 R2 – The information platform. Highlighting the new capabilities of the latest SQL Server release, as well as diving into specific topics such as consolidating SQL Server databases, tips and techniques for performance monitoring and tuning, and a look at the newly released cloud platform (SQL Azure).
  • Friday, 16 April 2010 (IT Managers): Looking ahead, keeping the boss happy and raising the profile of IT.  IT Managers have more and more responsibilities to drive and support the direction of the business. Explore the various trends and technologies that can bring IT to the top table, from score-carding to data governance and cloud computing.

I’ve been waiting for this announcement for a few weeks now, and places are (very) limited, so,  if these topics are of interest to you, I suggest registering quickly!

A quick introduction to Dell PowerEdge server naming

Last year I wrote a short blog post looking at HP ProLiant servers and how the model line-up looks.  I haven’t looked at IBM System x for a few years but last week I got the chance to sit down and have a look at the current Dell PowerEdge portfolio.

Just as for HP, there is some logic behind Dell’s server names, although the scheme is fairly new and some older servers (e.g. the PowerEdge 2950) do not fit it (there’s a quick decoder sketch after the list):

  • The first character is a letter indicating the chassis type: T for tower; R for rack; M for modular (blade).
  • The next digit indicates the market segment for which the server is destined: 1, 2 and 3 are single-socket servers for value, medium and high-end markets respectively; 4 and 5 are 2-socket value servers, with 6 for medium and 7 for high-end; 8 indicates a special server (for example, one which can be configured as a 2- or a 4-socket machine); 9 indicates a 4-socket server (Dell does not currently compete in the 8-way marketplace).
  • The next digit indicates the generation number (0 for 10th, 1 for 11th, 2 for 12th generation). With a new generation every couple of years or so, resetting the clock to zero should give Dell around 20 years before they need to revisit this decision!
  • Finally, Intel servers end with 0 whilst AMD servers end with 5.
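
Putting that together, a model name can be decoded mechanically. Here’s a quick (and entirely unofficial) Python sketch of my understanding of the scheme – older models like the 2950 won’t fit, of course:

CHASSIS = {"T": "tower", "R": "rack", "M": "modular (blade)"}

SEGMENT = {
    "1": "1 socket, value", "2": "1 socket, medium", "3": "1 socket, high-end",
    "4": "2 sockets, value", "5": "2 sockets, value",
    "6": "2 sockets, medium", "7": "2 sockets, high-end",
    "8": "special (2- or 4-socket configurable)", "9": "4 sockets",
}

def decode(model):
    """Decode a current PowerEdge name such as 'R710' or 'T105'."""
    chassis, segment, generation, cpu = model[:4]
    return {
        "chassis": CHASSIS.get(chassis.upper(), "unknown"),
        "segment": SEGMENT.get(segment, "unknown"),
        "generation": "%sth" % (10 + int(generation)),
        "cpu": {"0": "Intel", "5": "AMD"}.get(cpu, "unknown"),
    }

print(decode("R710"))
# {'chassis': 'rack', 'segment': '2 sockets, high-end',
#  'generation': '11th', 'cpu': 'Intel'}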

There is another complication though – those massive cloud datacentres operated by Microsoft, Amazon, et al. use custom servers – and some of them come from Dell.  In that scenario, the custom servers don’t need to be resilient (the cloud provides the resilience) but Dell has now brought similar servers to market for those who want specialist, high-volume servers, albeit with a slightly lower MTBF than standard PowerEdge boxes.  So, for example: the C1100 is a 2-way, 1U server that can take up to 18 DIMMs for memory-intensive applications; the C2100 is a 2-way, 2U server with room for 12 disks (and 18 DIMMs); whilst the C6100 crams four 2-way blades into a 2U enclosure, with room for 12 DIMMs and up to 24 2.5″ disks!

A new e-mail etiquette?

I’ve just got off a late night train home from London where I spotted someone’s discarded Evening Standard, featuring an interesting article by Philip Delves Broughton, examining how the way in which we deal with e-mail reveals our professional characters. The full article makes for interesting reading but I thought I’d quote from the side-panel on the new e-mail etiquette here:

  • After the initial sales pitch, follow up by e-mail and phone.
  • Beyond that, pestering will make you seem needy.
  • Should you be looking for a job and get no response, reframe the pitch with something that will entice your potential employer – a fact about their competitor, an article of interest. Banging on about yourself is bad form.
  • A voicemail not answered is better than an e-mail ignored.
  • If you are swamped with e-mails, and don’t want to appear rude, consider an e-mail template that says no nicely.
  • However, do not resort to the standard, unhelpful Out of Office reply. It effectively says, “I’m not here, so your problems can go to hell.”

Writing SOLID code

I’m not a coder: the last time I wrote any functioning code was at Uni’ in the early 1990s.  I can adapt other people’s scripts and, given enough time, write my own. I can knock up a bit of XHTML and CSS but writing real applications?  Nope.  Not my game.

Every now and again though, I come up against a development topic and I do find them interesting, if a little baffling.  I guess that’s how devs feel when I talk infrastructure.

From 2004 to 2005, I worked for a company called Conchango (now part of EMC Consulting) – I had a great time there, but the company’s focus had shifted from infrastructure to web design and Java/.NET development (which, by the way, they were rather good at – with an impressive client list).  Whilst I was there, it seemed that all I heard about was “agile” or “XP” (extreme programming… nothing to do with Windows XP) – approaches that were taking the programming world by storm at the time.

Then, a few weeks ago, I had a similar experience at an Edge User Group meeting, where Ian Cooper (a C# MVP) was talking about the SOLID principles.  Not being a coder, I found that most of this went right over my head (I’m sure it would make perfect sense to my old mates from Uni’ who did follow a programming route, like Steve Knight), but it was interesting nonetheless – and, in common with some of the stuff I heard about in my days at Conchango, I’m sure the basic principles of what can go wrong with software projects could be applied to my infrastructure projects (with a little tweaking perhaps):

  • Rigidity – difficult to change.
  • Fragility – change one thing, breaks something else.
  • Immobility – e.g. one part of the solution is welded into the application.
  • Viscosity – wading through treacle, maintaining someone else’s software.
  • Needless complexity – why did we do it that way?
  • Needless repetition – Ctrl+C Ctrl+V is not an ideal programming paradigm!
  • Opacity – it made sense to original developer… but not now!

Because of these issues, maintenance quickly becomes an issue in software development and Robert C Martin (@unclebobmartin – who had previously led the group that created Agile software development from Extreme programming techniques) codified the SOLID principles in his Agile Principles, Patterns and Practices in C# book (there is also a Hanselminutes podcast where Scott Hanselman and “Uncle Bob” Martin discuss the SOLID principles and a follow-up where they discuss the relevance of SOLID).  These principles are:

  • Single responsibility principle
  • Open/closed principle
  • Liskov substitution principle
  • Interface segregation principle
  • Dependency inversion principle

This is the point where my brain starts to hurt, but please bear with me as I attempt to explain the rest of the contents of Ian’s presentation (or listen to the Hanselminutes podcast)!

The single responsibility principle

This principle states that a class should have only one reason to change.

Each responsibility is an axis of change and, when the requirements change, that change will be manifested through a change in responsibility among the classes. If a class assumes more than one responsibility, that class will have more than one reason to change, hence single responsibility.

Applying this principle gives a developer a single concept to code for (also known as separation of concerns) so, for example, instead of having a GUI to display a purchase order, this may be separated into GUI, controller, and purchase order: the controller’s function is to get the data from the appropriate place, the GUI is only concerned with displaying that data, and the purchase order is not concerned with how it is displayed.
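
In (very rough) Python, that separation might look something like this – the class names are purely illustrative, not from Ian’s presentation:

class PurchaseOrder(object):
    """Knows about purchase order data -- nothing about storage or display."""
    def __init__(self, order_id, lines):
        self.order_id = order_id
        self.lines = lines  # list of (description, price) tuples

    def total(self):
        return sum(price for _, price in self.lines)

class PurchaseOrderController(object):
    """Knows only where to get an order from."""
    def __init__(self, repository):
        self.repository = repository

    def fetch(self, order_id):
        return self.repository.load(order_id)

class PurchaseOrderView(object):
    """Knows only how to display an order."""
    def render(self, order):
        print("Order %s, total %.2f" % (order.order_id, order.total()))

Each class now has exactly one reason to change: a new data store only touches the controller, a new layout only touches the view.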

The open/closed principle

This principle states that software entities (classes, modules, functions, etc.) should be open for extension but closed for modification.

The thinking here is that, when a single change to a program results in a cascade of changes to dependent modules, the design becomes rigid but, if the open/closed principle is applied well, further changes are achieved by adding new code, not by changing old code that already works.

Some may think that it’s impossible to be both open to extension and closed to change: the key here is abstraction and composing.

For example, a financial model may have different rounding rules for different markets.  This can be implemented with local rounding rules rather than changing the model each time the model is applied to a different market.
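
Sketching that rounding example (my own illustration, not Ian’s code): the model depends on an abstract rounding rule, so supporting a new market means adding a rule, not editing the model:

from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

class RoundingRule(object):
    """The abstraction the model is closed around."""
    def round(self, value):
        raise NotImplementedError

class HalfUpRounding(RoundingRule):
    def round(self, value):
        return value.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

class BankersRounding(RoundingRule):
    def round(self, value):
        return value.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)

class FinancialModel(object):
    def __init__(self, rounding_rule):
        self.rounding_rule = rounding_rule  # the extension point

    def price(self, raw):
        return self.rounding_rule.round(Decimal(raw))

print(FinancialModel(HalfUpRounding()).price("10.005"))   # 10.01
print(FinancialModel(BankersRounding()).price("10.005"))  # 10.00

The FinancialModel class never changes; new markets just plug in new RoundingRule subclasses.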

The Liskov substitution principle

This principle (attributed to Barbara Liskov) states that subtypes must be substitutable for their base types.  Unfortunately, attempts to fix Liskov substitution problems often result in violations of the open/closed principle but, in essence, the validity of a model can be expressed only in terms of its clients.  For example, if there is a type called Bird (which has wings and can fly), what happens to Penguin and Emu when an attempt is made to implement the fly method? We need to be able to call fly for a penguin and handle it appropriately, so there are effectively two solutions: change the type hierarchy; or refactor the type to express it differently – fly may become move, or we could have a flightless bird type and a running bird type.
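
Sketching the second solution in (again, very rough) Python – refactoring fly into move so that every subtype honours the base type’s contract:

class Bird(object):
    def move(self):
        raise NotImplementedError

class FlyingBird(Bird):
    def move(self):
        return "flies"

class FlightlessBird(Bird):
    def move(self):
        return "runs (or waddles)"

class Sparrow(FlyingBird):
    pass

class Penguin(FlightlessBird):
    pass

# Any client written against Bird now works with every subtype --
# no special-casing for penguins:
for bird in (Sparrow(), Penguin()):
    print(bird.move())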

The interface segregation principle

The interface segregation principle says that clients should not be forced to depend on methods they do not use.

Effectively, this means that clients should not be affected by changes that don’t concern them (i.e. fat types couple disparate parts of the application).  In essence, each interface should have the smallest set of features that meets client requirements, which may mean creating multiple interfaces within a class.
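
In Python terms (duck typing makes this less formal than C# interfaces, but the idea carries – this is just my sketch), that means splitting one fat type into small, client-specific ones:

class Printable(object):
    def print_document(self, doc):
        raise NotImplementedError

class Scannable(object):
    def scan(self):
        raise NotImplementedError

class OfficeCopier(Printable, Scannable):
    """A device that genuinely does both implements both interfaces."""
    def print_document(self, doc):
        print("printing %s" % doc)

    def scan(self):
        return "a scanned page"

class BasicPrinter(Printable):
    """A print-only type is never forced to carry a scan() method."""
    def print_document(self, doc):
        print("printing %s" % doc)

A change to the scanning interface now leaves print-only clients untouched.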

The dependency inversion principle

The dependency inversion principle states that high-level modules should not depend on low-level modules – both should depend on abstractions. In addition, abstractions should not depend on details; details should depend upon abstractions.  This is sometimes known as the Hollywood principle (“Don’t call us, we’ll call you”).  So, where is the inversion?  If a class structure is considered as a tree with the classes at the leaves and abstraction at the trunk, we depend on the trunk, not the leaves – effectively inverting the tree and grasping it by the roots (inversion of control).
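
One last sketch (mine, not Ian’s): a high-level report generator and the low-level data sources both depend on the same abstraction, and the concrete source is injected from outside – “don’t call us, we’ll call you”:

class DataSource(object):
    """The abstraction -- the trunk everything hangs off."""
    def read(self):
        raise NotImplementedError

class SqlDataSource(DataSource):
    def read(self):
        return ["row from the database"]

class CsvDataSource(DataSource):
    def read(self):
        return ["row from a CSV file"]

class ReportGenerator(object):
    """High-level policy: depends on the abstraction, not the details."""
    def __init__(self, source):
        self.source = source  # dependency injected from outside

    def run(self):
        for row in self.source.read():
            print(row)

ReportGenerator(SqlDataSource()).run()
ReportGenerator(CsvDataSource()).run()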

Summing it up

I hope I’ve understood Ian’s presentation well enough to do it justice here but, to sum up: the SOLID principles help to match computer science concepts such as cohesion and polymorphism to development in practice. Or, for dummies like me: if you write your code according to these principles, you can avoid problems later, when the inevitable changes need to be made.

As an infrastructure guy, I don’t fully understand the details, but I can grasp the high level concepts and see that it’s not really so different to my world.  Look at a desktop delivery architecture for a large enterprise:

  • We abstract data from user objects, applications, operating system and hardware (which is possibly virtualised) and this gives us flexibility to extend (or substitute) parts of the infrastructure (well, that’s the idea anyway).
  • We only include those components that we actually want to use (no games, or other consumer functionality). 
  • We construct servers with redundancy then build layers of software to construct service platforms (which is a kind of dependency inversion). 

OK, so the infrastructure links may seem tenuous, but the principle is sound.  Sometimes it’s good for us infrastructure guys to take a look at how developers work…