Incorrect side-by-side configuration caused by missing runtime libraries

Just before the weekend, I was trying to run an application on a 64-bit installation of Windows Server 2008 and was presented with a strange error:

This application has failed to start because its side-by-side configuration is incorrect. Please see the application event log for more details.

I know that side-by-side is something to do with avoiding DLL hell (by not dumping all the DLLs in the same folder, where one application can overwrite another’s libraries) but I didn’t have a clue how to fix it, and the application event log didn’t help much:

Log Name: Application
Source: SideBySide
Date: 15/08/2008 18:00:10
Event ID: 33
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: computername.domainname.tld
Description:
Activation context generation failed for "C:\foldername\applicationname.exe". Dependent Assembly Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.21022.8" could not be found. Please use sxstrace.exe for detailed diagnosis.

Thankfully, Junfeng Zhang wrote a comprehensive blog post about diagnosing side-by-side failures. It’s a bit too developery for me but I did at least manage to follow the instructions and produce myself an sxstrace.
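
The trace is captured with the sxstrace tool – something along these lines should do it, run from an elevated command prompt (the log file names are just examples):

sxstrace Trace -logfile:sxstrace.etl
rem reproduce the application error, then stop the trace when prompted
sxstrace Parse -logfile:sxstrace.etl -outfile:sxstrace.txt

The resulting trace looked like this: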

=================
Begin Activation Context Generation.
Input Parameter:
        Flags = 0
        ProcessorArchitecture = AMD64
        CultureFallBacks = en-US;en
        ManifestPath = C:\foldername\applicationname.exe
        AssemblyDirectory = C:\foldername\
        Application Config File =
-----------------
INFO: Parsing Manifest File C:\foldername\applicationname.exe.
        INFO: Manifest Definition Identity is (null).
        INFO: Reference: Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.21022.8"
INFO: Resolving reference Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.21022.8".
        INFO: Resolving reference for ProcessorArchitecture x86.
                INFO: Resolving reference for culture Neutral.
                        INFO: Applying Binding Policy.
                                INFO: No publisher policy found.
                                INFO: No binding policy redirect found.
                        INFO: Begin assembly probing.
                                INFO: Did not find the assembly in WinSxS.
                                INFO: Attempt to probe manifest at C:\Windows\assembly\GAC_32\Microsoft.VC90.CRT\9.0.21022.8__1fc8b3b9a1e18e3b\Microsoft.VC90.CRT.DLL.
                                INFO: Attempt to probe manifest at C:\foldername\Microsoft.VC90.CRT.DLL.
                                INFO: Attempt to probe manifest at C:\foldername\Microsoft.VC90.CRT.MANIFEST.
                                INFO: Attempt to probe manifest at C:\foldername\Microsoft.VC90.CRT\Microsoft.VC90.CRT.DLL.
                                INFO: Attempt to probe manifest at C:\foldername\Microsoft.VC90.CRT\Microsoft.VC90.CRT.MANIFEST.
                                INFO: Did not find manifest for culture Neutral.
                        INFO: End assembly probing.
        ERROR: Cannot resolve reference Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.21022.8".
ERROR: Activation Context generation failed.
End Activation Context Generation.

=================
Begin Activation Context Generation.
Input Parameter:
        Flags = 0
        ProcessorArchitecture = Wow32
        CultureFallBacks = en-US;en
        ManifestPath = C:\foldername\applicationname.exe
        AssemblyDirectory = C:\foldername\
        Application Config File =
-----------------
INFO: Parsing Manifest File C:\foldername\applicationname.exe.
        INFO: Manifest Definition Identity is (null).
        INFO: Reference: Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.21022.8"
INFO: Resolving reference Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.21022.8".
        INFO: Resolving reference for ProcessorArchitecture WOW64.
                INFO: Resolving reference for culture Neutral.
                        INFO: Applying Binding Policy.
                                INFO: No publisher policy found.
                                INFO: No binding policy redirect found.
                        INFO: Begin assembly probing.
                                INFO: Did not find the assembly in WinSxS.
                                INFO: Attempt to probe manifest at C:\Windows\assembly\GAC_32\Microsoft.VC90.CRT\9.0.21022.8__1fc8b3b9a1e18e3b\Microsoft.VC90.CRT.DLL.
                                INFO: Did not find manifest for culture Neutral.
                        INFO: End assembly probing.
        INFO: Resolving reference for ProcessorArchitecture x86.
                INFO: Resolving reference for culture Neutral.
                        INFO: Applying Binding Policy.
                                INFO: No publisher policy found.
                                INFO: No binding policy redirect found.
                        INFO: Begin assembly probing.
                                INFO: Did not find the assembly in WinSxS.
                                INFO: Attempt to probe manifest at C:\Windows\assembly\GAC_32\Microsoft.VC90.CRT\9.0.21022.8__1fc8b3b9a1e18e3b\Microsoft.VC90.CRT.DLL.
                                INFO: Attempt to probe manifest at C:\foldername\Microsoft.VC90.CRT.DLL.
                                INFO: Attempt to probe manifest at C:\foldername\Microsoft.VC90.CRT.MANIFEST.
                                INFO: Attempt to probe manifest at C:\foldername\Microsoft.VC90.CRT\Microsoft.VC90.CRT.DLL.
                                INFO: Attempt to probe manifest at C:\foldername\Microsoft.VC90.CRT\Microsoft.VC90.CRT.MANIFEST.
                                INFO: Did not find manifest for culture Neutral.
                        INFO: End assembly probing.
        ERROR: Cannot resolve reference Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.21022.8".
ERROR: Activation Context generation failed.
End Activation Context Generation.

I don’t understand most of that trace but I can see that it’s trying to find a bunch of resources named Microsoft.VC90.CRT.* and a search of my system suggests they are missing. Microsoft VC sounds like Visual C++ and v9 would be Visual Studio 2008. Checking back at the original developer’s website, I saw that he suggested to someone else experiencing problems that they might need the Microsoft Visual C++ 2008 redistributable package. I thought that the whole point of having the Microsoft .NET Framework on my PC was so that .NET applications would run, regardless of the language they were developed in (if there are any developers reading this, please feel free to leave a comment on this because I’m out of my depth at this point) but I downloaded the latest x64 version and installed it on my system.

No change (same error).

I realised that I was using the latest (SP1) version (v9.0.30729.17) and perhaps I needed the original one (v9.0.21022) as that’s the version number in the sxstrace log. So I removed the SP1 version and installed the original redistributable package instead.

Still no change.

I had the C++ source code, so I considered recompiling the application, but there was no C++ compiler on my system (unlike for C#) and installing one of the Visual Studio Express Editions would take a while. So I thought about other options.

It turned out that, even though I was running on 64-bit Windows, I needed to install a 32-bit redistributable. That makes sense in hindsight – the trace asks for processorArchitecture="x86", which means the application itself is a 32-bit build and needs the 32-bit C runtime regardless of the operating system (the references to GAC_32 and win32 are further clues) – and it worked. It didn’t matter whether I used the original or the SP1 version of the Microsoft Visual C++ 2008 redistributable package (so I used SP1).
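
Incidentally, a quick way to confirm whether the 32-bit runtime is actually present is to look for matching folders in the WinSxS store – a rough check only, with the folder name pattern based on the version in the trace:

dir %windir%\WinSxS\x86_microsoft.vc90.crt*

If nothing comes back, the x86 runtime isn’t installed, however many x64 packages are.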

Now the application runs as expected. It’s got me thinking though… I really should learn something about .NET development!

Setting up a digital photography workflow: preferences for Adobe Bridge, Camera Raw and Photoshop CS3

A couple of weeks back, I wrote about Adobe Photoshop CS3 from a photographer’s perspective and in this post I’ll outline some of the application preferences for Bridge (CS3), Camera Raw (4.x) and Photoshop (CS3) that may be useful when setting up a digital photography workflow (with thanks to David Tunnicliffe, who originally provided me with the bulk of this information):

  • In general (at an operating system level):
    • Add some memory (noting that each PC or Mac will have a limit in the maximum amount of memory it can support and that 32-bit operating systems can only access approximately 3.2GB).
    • Resist the temptation to compress hard disk drives – disks are relatively inexpensive and the available storage capacity is increasing rapidly.
  • Bridge (CS3: my installation is at v2.0.0.975; some extra information here relating to features introduced at v2.1):
    • General: adjust the background colour – dark grey will generally provide a non-distracting background; if Bridge is to be used for importing images when a camera is connected, select the appropriate checkbox under Behavior; remove items from the Favorites list that will not be used (e.g. Start Meeting).
    • Thumbnails: enable Adobe Camera Raw for JPEG and TIFF file handling; 400MB is the default maximum file size for the creation of thumbnails and should be more than enough for most photographers (unless they scan images at very high resolutions); high quality thumbnails can be useful; however conversion on preview is an intensive operation and should be avoided.
    • Playback: few photographers will be interested in media playback options (new in v2.1 – not present in v2.0).
    • Metadata: select/deselect as required – few photographers will need audio, video, or DICOM; GPS is becoming more relevant with the advent of location-based services.
    • Labels: edit the description to match the colour coding system in use – together with ratings, these can be useful for sorting.
    • Keywords: Can be used to build a hierarchy of keywords (new in v2.1 – not present in v2.0).
    • File type associations: edit if required to change the application that is associated with a given file type. Generally, these may be left at their defaults.
    • Cache: Clear the cache if problems are experienced with thumbnails (new in v2.1 – not present in v2.0).
    • Inspector: not really relevant unless using Adobe VersionCue to manage workflow.
    • Startup Scripts: these can be disabled if not used but I have left them at the default settings (removing scripts will accelerate application load times).
    • Advanced: this is the place to clear the cache if there are issues with thumbnail display; international settings for language and keyboard are also set here; software rendering should be avoided if there is suitable graphics hardware available to do the work instead.
    • Adobe Stock Photos: probably of limited use to people who would like to sell their work! In fact, the service was discontinued in April 2008 and can be uninstalled from Bridge.
    • Meetings: Only relevant with Adobe Acrobat Connect.
  • Camera Raw (my installation has been updated to v4.5.0.175; the version originally shipped with my copy of Photoshop CS3 was v4.0):
    • Preferences (available in other Photoshop applications whilst loaded): save image settings in sidecar (.XMP) files; apply sharpening to preview images only; Camera Raw cache defaults to 1GB and can be purged if issues are experienced; JPEG and TIFF handling selected (not available in v4.0).
    • Main interface: ensure Preview is selected.
    • Workflow options (link at the bottom of the ACR window): Adobe RGB (1998) is probably the best colour space for most photographers (Sean T. McHugh explains more about the comparison between sRGB and Adobe RGB 1998); use 16-bits per channel; use size and resolution to upscale (for better results than applying interpolation in Photoshop).
  • Photoshop (CS3: v10.0):
    • General: Color picker should be set to Adobe; Image interpolation should be selected according to purpose but bicubic smoother is probably the most useful for photographers.
    • Interface: Select remember palette locations.
    • File handling: select the prefer Adobe Camera Raw for filetype options if you want to open JPEG or RAW files in Adobe Camera Raw (recommended); increase the length of the recent file list if required; disable version cue if not required.
    • Performance: Photoshop is memory hungry but don’t let it take more than 70% of the available RAM (that is the default) – use the ideal range as a guide; adjust scratch disk settings if you have multiple disks available; enable 3D acceleration if supported by the GPU; increase the number of history states if possible.
    • Transparency and Gamut: ensure opacity is set to 100% (default setting).
    • Units and rulers: minimum print resolution for new documents should be 300ppi (72ppi is fine for screen).
    • Plug-ins: this is only relevant if you have plug-ins for an old version of Photoshop or in a strange location.
    • Cursors; Guides, Grid, Slices and Count; Type: Nothing to change.

Of course, this is just scraping the surface – these applications alone are probably not the complete workflow and each of them offers far more functionality than most photographers will require. If you’re using the CS3 applications for graphic design work, then you’ll probably have a totally different setup.

In case the UK Government’s record on IT wasn’t already bad enough…

I’m not sure how I missed this one, but the UK Government’s latest public relations stunt is the Prime Minister’s blog. Yes, that’s right – since Monday, Gordon Brown has been blogging at Number10.gov.uk (for those outside the UK, number 10 Downing Street is the traditional London home and office for the Prime Minister of the day – and in case you hadn’t noticed, Tony Blair got out while the going was reasonably good and left the former Chancellor as a caretaker PM until the next election… [sorry, nearly broke my own golden rule for no politics on this blog there]).

Given this administration’s record on matters of an IT nature, I would hope they had better things to do (of course, the PM is not churning this stuff out himself). With Twitter, Flickr, and YouTube feeds (as well as the Brightcove-based Number 10 TV) he wouldn’t have much time left to run the country [on second thoughts, maybe that is what he is doing… it would explain a lot about the state of the nation…].

Of course the site is, at worst, a thinly veiled PR exercise and, at best, an attempt to engage an increasingly disillusioned electorate in discussion with the Government of the day. After all, the standard response to most e-petitions seems to be a condescending e-mail from the appropriate department which can usually be paraphrased as “yeah, yeah, we heard you but we’re still going to carry on regardless”.

Still, at least they’re using WordPress as their CMS (albeit without any kind of acknowledgement)!

Windows 7 blog launched

After a year of speculation about what will, or won’t, be included in the next version of Windows, it looks like Microsoft might be getting ready to tell us a bit more. Yesterday they launched a new blog called Engineering Windows 7 (thanks to Dave Saxon for alerting me). As the title suggests, it’s all about putting together the next version of Windows and is probably worth keeping an eye on.

So, you want to be an infrastructure architect?

Over the years I’ve had various jobs which have been basically the same role but with different job titles. Officially, I’ve been a Consultant, Senior Consultant, Project Manager, Senior Technical Consultant, Senior Customer Solution Architect (which would have been a Principal Consultant in the same organisation a few years earlier but management swapped the “architect” word for a drop in implied seniority) but if you ask me what I am, I tend to say I’m an infrastructure architect.

Issue 15 of The [MSDN] Architecture Journal included an article about becoming an architect in a systems integrator. I read this with interest, as that’s basically what I do for a living (believe me, I enjoy writing about technology but it will be a long while before I can give up my day job)!

The Architecture Journal tends to have an application focus (which is only natural – after all, it is produced by a developer-focused group in a software company) and I don’t know much about application development but I do know how to put together IT solutions using commercial off-the-shelf (COTS) applications. I tend to work mostly with Microsoft products but I’ve made it my business to learn about the alternatives (which is why I’m a VMware Certified Professional and a Red Hat Certified Technician). Even so, I’m stuck at a crossroads. I’m passionate about technology – I really like to use it to solve problems – but I work for a managed services company (an outsourcer in common parlance) where we deliver solutions in the form of services and bespoke technology solutions are not encouraged. It seems that, if I want to progress in my current organisation, I’m under more and more pressure to leave my technical acumen behind and concentrate on some of the other architect competencies.

Architect competencies

I understand that IT architecture is about far more than just technology. That’s why I gained a project management qualification (since lapsed, but the skills are still there) and, over the years, I’ve developed some of the softer skills too – some of which can be learnt (like listening and communication skills) – others of which only come with experience. I think it’s important to be able to dive into the technology when required (which, incidentally, I find helps to earn the respect of your team and then assists with the leadership part of the architect’s role) but just as important to be able to rise up and take a holistic view of the overall solution. I know that I’m not alone in my belief that many of the architects joining our company are too detached from technology to truly understand what it can do to address customers’ business problems.

Architect roles
OK, so I’m a solutions architect who can still geek out when the need arises. I’m still a way off becoming an enterprise architect – but do I really need to leave behind my technical skills (after having already dumped specialist knowledge in favour of breadth)? Surely there is a role for senior technologists? Or have I hit a glass ceiling, at just 36 years of age?

I’m hoping not – and that’s why I’m interested in the series of webcasts that Microsoft Consulting Services are running over the next few months – MCS Talks: Enterprise Architecture. Session 1 looked at infrastructure architecture (a recorded version of the first session is available) and future sessions will examine:

  • Core infrastructure.
  • Messaging.
  • Security and PKI.
  • Identity and access management.
  • Desktop deployment.
  • Configuration management.
  • Operations management.
  • SharePoint.
  • Application virtualisation.

As should be expected, being delivered by Microsoft consultants, the sessions are Microsoft product-heavy (even the session titles give that much away); however, the intention of the series is to connect business challenges with technology solutions and the Microsoft products mentioned could be replaced with alternatives from other vendors. More details on the series can be found on the MCS Talks blog.

This might not appeal to true enterprise architects but for those of us who work in the solution or technical architecture space, this looks like it may well be worth an hour or so of our time each fortnight for the rest of the year. At the very least it should help to increase breadth of knowledge around Microsoft infrastructure products.

And, of course, I’ll be spouting forth with my own edited highlights on this blog.

Yes, you can use all the processing power on a multi-core system

I’ve heard a few comments recently about it not being worth buying multi-core processors because it’s impossible to harness all of the processing power and I have to say that is a pile of stuff and nonsense (putting it politely).

Well, it is nonsense if the operating system can recognise multiple processors (and Windows NT derivatives have had multi-processor support for as long as I can remember) but it also has a lot to do with the software in use. If everything is single-threaded (it shouldn’t be these days), then the operating system scheduler can’t spread the threads out and make the most of its processing capabilities.

Anyway, I’ve been maxing out a 2.2GHz Core 2 Duo-based notebook PC for the last couple of days with no difficulties whatsoever. My basic workload is Outlook 2007, Office Communicator 2007, Internet Explorer (probably a few windows, each with a couple of dozen tabs open) and the usual bunch of processes running in the background (anti-virus, automatic updates, etc.). Yesterday, I added three virtual machines to that mix, running on a USB 2.0-attached hard drive (which, unlike a FireWire drive, also requires a big chunk of processing) as well as TechSmith SnagIt, as I was testing and documenting a design that I was working on, and that did slow my system down a little (the first time there has been any significant paging on this system, which runs 64-bit Windows Server 2008 and has 4GB of RAM).

Then, today, I was compressing video using Camtasia Studio 5 (another TechSmith product) and, despite having closed all other running applications besides a couple of Explorer windows, it was certainly making full use of my system as the screenshots below show. Watch the CPU utilisation as I start to render my final video output:

Windows Task Manager showing increased CPU utilisation as video rendering commences

during rendering:

Windows Task Manager showing CPU utilisation during video rendering

and after the task was completed, when CPU activity dropped to a more normal level:

Windows Task Manager showing CPU utilisation returning to a more normal level after video rendering completed

Of course, a lot of this would have been offloaded to the GPU if I had a decent graphics card (this PC has an Intel GMA965 controller onboard) but I think this proves that multiple processor cores can be fully utilised without too much effort…

Changing the product key for Microsoft Office applications without reinstalling

The notebook PC that I use for work has Microsoft Office Enterprise 2007 installed. Office Enterprise includes most of the applications that I need but not Visio, so I also have Microsoft Office Visio 2007.

The 2007 office products can be used a certain number of times before activation is required and, unfortunately, when I tried to activate my copy of Visio I found that the product key had been used too many times and activation failed (the Office Enterprise key was fine). After watching the number of trial uses of the product slowly decrement, I needed to change the product key, but didn’t want to have to re-install Visio.

Microsoft knowledge base article 895456 provides details for changing the product key for the 2007 Office system (as well as other releases).

It’s important to note that:

  • For a 64-bit version of Windows, the registry location to edit will be HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Office\version\Registration rather than HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office\version\Registration.
  • If multiple Office products are installed, multiple GUIDs will appear in the registry – the important entry to look at will be the ProductName – in my case one GUID had a ProductName of Microsoft Office Visio Professional 2007 and the other had a ProductName of Microsoft Office Enterprise 2007.

Once the correct GUID had been tracked down and the associated DigitalProductID and ProductID entries removed, I fired up Visio, entered a different product key and successfully activated the software.
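
For anyone who prefers the command line, the same lookup and removal can be scripted with reg.exe – a rough sketch only, in which {GUID} is a placeholder for whichever key has the appropriate ProductName, 12.0 is the version number for the 2007 Office system, and the Wow6432Node path applies to 32-bit Office on 64-bit Windows:

rem find the GUID whose ProductName matches the product to be re-keyed
reg query "HKLM\SOFTWARE\Wow6432Node\Microsoft\Office\12.0\Registration" /s /v ProductName

rem remove the licensing values so that the application prompts for a new key on next launch
reg delete "HKLM\SOFTWARE\Wow6432Node\Microsoft\Office\12.0\Registration\{GUID}" /v DigitalProductID /f
reg delete "HKLM\SOFTWARE\Wow6432Node\Microsoft\Office\12.0\Registration\{GUID}" /v ProductID /f

On 32-bit Windows, drop Wow6432Node from the paths.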

Failed power supply causes impromptu wireless network upgrade

Two-and-a-half years ago, I upgraded my wireless network in order to move to 802.11g and to implement some half-decent Wi-Fi security but, last Friday, just as I was packing up the car for a weekend away, I noticed that my PC had lost contact with the mail server. Then I saw there were no lights on my wireless access point. This was not good news.

I couldn’t fix it quickly and running a cable was not an option either as it would have meant leaving the house unsecured all weekend. So, I just had to accept that I had no DNS, no DHCP, and that the mail server would be offline for the weekend.

When I got home last night, I set up a temporary (wired) connection and thought about how to fix the Wi-Fi – it seemed I had a few options:

  • Buy a new DC power adapter for my D-Link DWL-2000AP+ – inexpensive but the D-Link was a cheap access point – a new DC adapter could cost almost as much as the unit is worth and if the power adapter has blown up, the main unit could be next.
  • Buy a new access point (and optionally move up to 802.11pre-n) – a new access point could be good, but pre-n equipment is still quite expensive – and I’ve never been that happy with pre-anything standards, even back in the days of 56Kbps modems. Add to that the fact that I have a mixture of 802.11g and 802.11n equipment (mostly built in to computers) – and the “g” kit would slow an “n” network down to 54Mbps.
  • Replace my individual router and access point with a combined wireless-modem-router (like the Netgear DG834G that one of my friends lent me – a left-over from his disastrous encounter with Virgin Media’s ADSL “service” – or one of the Draytek devices that I’ve heard so many good things about) – but my Solwise ADSL router is still going strong (aside from the occasional reboot) and I’d have to reconfigure all my firewall rules.
  • Dump Wi-Fi in favour of HomePlug AV technologies – potentially faster (at least faster than 802.11g) but also quite expensive, still a relatively immature technology and, based on most of the reviews I’ve seen, highly dependent upon the quality of the wiring in the house.

In the end, I decided to splash out on a new access point – and this time I got the one that I thought about in 2005 but didn’t want to spend the money on – a Netgear ProSafe WG102. I got mine from BroadbandBuyer for a touch over £80 (the added bonus was that they are only 7 miles away from my house, had them in stock, and I could collect) so by late morning my Wi-Fi was back online and the temporary cables down the stairs were gone and the garage door was closed again.

Netgear ProSafe WG102

After having set this up, I realised that this is what I should have done first time around – Netgear’s ProSafe range is aimed at small businesses but is still reasonably inexpensive – and so much better than the white plastic consumer rubbish that they churn out (or the D-Link access point that I’ve been using). The WG102 is well built, has a really straightforward web interface for management (as well as SNMP support) and supports all the wireless options that I would expect in a modern access point, including various security options and IntelliRF for automatic adjustment of transmission power and channel selection. I’m using WPA2 (PSK) but the WG102 does include RADIUS support. It’s also got a nice big antenna and I’ve switched off 802.11b to prevent the whole network from being slowed down by one old “b” device. I also use MAC address filtering (easy enough to get around but nevertheless another obstacle in the way of a would-be attacker) but the best features are the ones I haven’t implemented yet – like multiple SSIDs and VLANs for granular user access. If I put a VLAN-capable switch between the access point and my router, I could provide a hotspot for my street but still run my own traffic over its own VLAN. I guess VLAN-hopping would be a potential attack vector but my Wi-Fi traffic would be encrypted anyway and there’s another firewall between the wireless network and my data. If that switch supported Power over Ethernet (PoE) then I could even manage if the WG102 lost its power supply (it has PoE support too).

The WG102 is certainly not the least expensive access point I could have bought but it seems to be money well spent. It includes a bunch of features that are generally only found in devices intended for the enterprise market but comes at a small business price. I should have bought this years ago.

When “non-destructive” edits start making changes to the original files…

A few days back, I was extolling the virtues of the Sidecar (.XMP) file format that Adobe uses for storing updated metadata and edits to digital images in Adobe Camera Raw (ACR):

“It turns out that Bridge (together with ACR) is exactly what I needed to organise my images, open them in ACR (and optionally Photoshop) to perform non-destructive edits, with the changes (and associated metadata) stored in Sidecar (.XMP) files alongside the original image (avoiding the need to maintain multiple copies of images).”

Well, soon afterwards I found out that, for raw image files, ACR does indeed create XMP files (which are also used by Adobe Photoshop Lightroom and are visible in Bridge) but, if ACR is used for JPEGs (or TIFFs), then the original files are modified.

In a blog post from February 2007, Adobe’s John Nack explains why non-destructive edits to JPEGs may be considered an oxymoron – basically Adobe appends the metadata that would normally be stored in the .XMP file to the JPEG. That means that, if I view an edited file using an Adobe product, it can see the changes but other viewers are unaware of the additional data and ignore it.
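
For the curious, a quick and dirty way to see the embedded metadata for yourself is to search an edited JPEG for the XMP packet that Adobe writes into the file – a rough sketch in PHP (image.jpg is just a placeholder name, and bear in mind that some cameras and other tools write XMP too, so treat the result as indicative only):

<?php
// rough check for an embedded XMP packet in a JPEG (image.jpg is a placeholder)
$data = file_get_contents("image.jpg");
if ($data !== false && strpos($data, "<x:xmpmeta") !== false) {
  echo "Embedded XMP metadata found\n";
} else {
  echo "No embedded XMP packet found\n";
}
?>

Non-Adobe viewers simply skip over that extra data, which is why they carry on showing the original, unedited image.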

The accompanying images show a JPEG file that I opened in ACR (via Bridge) to adjust the exposure and to straighten the image. Bridge shows the updated file as being 2848×1894 pixels in size and the image edits are visible in the preview:

JPEG file edited with Adobe Camera Raw 4.0 and viewed in Bridge CS3

Meanwhile, the Mac OS X Finder (or any other image viewer) sees the original 3008×2000 pixel image, still underexposed and leaning to one side:

JPEG file edited with Adobe Camera Raw 4.0 and viewed in the Mac OS X Finder

If, like me, you like your digital asset management software to maintain the original images untouched and only perform non-destructive edits, then this may come as a bit of a shock.

Customising a Cisco 79xx IP Phone: directory services

I’m still working on customising the Cisco 7940 I use with SIP firmware for VoIP calls and one of the items that’s now working well is the directory services functionality.

At the most basic level, the directory_url directive may be set in one of the SIP configuration files (either SIPDefault.cnf or SIPmacaddress.cnf), for example:

directory_url: "http://webserver/directory.xml"

The contents of the directory.xml file are actually quite simple:

<CiscoIPPhoneDirectory>
  <Title>IP Telephony Directory</Title>
  <Prompt>People reachable via VoIP</Prompt>
  <DirectoryEntry>
    <Name>Bob</Name>
    <Telephone>1234</Telephone>
  </DirectoryEntry>
  <DirectoryEntry>
    <Name>Joe</Name>
    <Telephone>1357</Telephone>
  </DirectoryEntry>
  <DirectoryEntry>
    <Name>Operator</Name>
    <Telephone>0</Telephone>
  </DirectoryEntry>
</CiscoIPPhoneDirectory>

The trouble with this is that it’s just a static file. If I have a large directory, then I need to keep it up-to-date. That’s where a directory service comes into play. The Open 79xx XML Directory looks useful but it’s another application to install and manage on my infrastructure. I already have a directory (Microsoft Active Directory), so I thought it would be great if a piece of code could query the AD and output the file in a format that the 7940 understands.

Luckily I found such a piece of code, courtesy of a message posted to the Asterisk Users forum back in 2004 by Jeff Gustafson:

<?php
$ds=ldap_connect("ldapserver");  // must be a valid LDAP server!

if ($ds) {
  $r=ldap_bind($ds);  // this is an "anonymous" bind, typically read-only access

  $sr=ldap_search($ds, "ou=People,dc=domainname,dc=com", "telephoneNumber=*");
  echo "<CiscoIPPhoneDirectory>\n";
  echo "<Title>IP Telephony Directory</Title>\n";
  echo "<Prompt>People reachable via VoIP</Prompt>\n";

  $info = ldap_get_entries($ds, $sr);

  for ($i=0; $i<$info["count"]; $i++) {
    echo "<DirectoryEntry>\n";
    echo "<Name>" . $info[$i]["cn"][0] . "</Name>\n";
    echo "<Telephone>" . $info[$i]["telephonenumber"][0] . "</Telephone>\n";
    echo "</DirectoryEntry>\n";
  }

  echo "</CiscoIPPhoneDirectory>";
  ldap_close($ds);

} else {
  echo "error";
}
?>

Jeff’s code is great (my PHP skills are certainly not good enough to have written this myself) but Active Directory has an attribute for IP phone numbers (ipPhone), so I made a couple of edits to change the phone prompts and to make the LDAP query search on the ipPhone attribute:

<?php
$ds=ldap_connect("domaincontroller.domainname.tld");  // must be a valid LDAP server!

if ($ds) {
  $r=ldap_bind($ds); // this is an "anonymous" bind, typically read-only access

  $sr=ldap_search($ds, "ou=directorycontainer,dc=domainname,dc=tld", "ipphone=*");
  echo "<CiscoIPPhoneDirectory>\n";
  echo "<Title>IP Telephony Directory</Title>\n";
  echo "<Prompt>Active Directory Users</Prompt>\n";

  $info = ldap_get_entries($ds, $sr);

  for ($i=0; $i<$info["count"]; $i++) {
    echo "<DirectoryEntry>\n";
    echo "<Name>" . $info[$i]["displayname"][0] . "</Name>\n";
    echo "<Telephone>" . $info[$i]["ipphone"][0] . "</Telephone>\n";
    echo "</DirectoryEntry>\n";
  }

  echo "</CiscoIPPhoneDirectory>";
  ldap_close($ds);

} else {
  echo "error";
}
?>

I still needed a few tweaks to get this working though – not to the script itself, but to the web server used to serve it, to Active Directory and, finally, to the phone configuration.

First up, you need a web server with PHP installed (I used PHP 5.2.6 on IIS 6.0). This also needs the LDAP extension to be enabled by uncommenting extension=php_ldap.dll in php.ini. The extensions folder (e.g. C:\phpinstallationfolder\extensionfolder) also needs to be appended to the %path% system variable.

The script is actually for a generic LDAP directory (nothing wrong with that) but recent versions of Active Directory do not allow anonymous access by default. Daniel Petri has a detailed article on anonymous LDAP operations in Windows 2003 AD and that gave me the information that I needed to open up the parts of the directory that I wanted the script to read – basically: setting the seventh character of the dsHeuristics value on CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=domainname,DC=tld to 2 (a forest-wide setting); waiting for replication to complete; then granting ANONYMOUS LOGON read access on the appropriate objects and List Contents access on the OU that contains the object(s). Alternatively, it should be possible to edit the script to use an authenticated logon (and sorting by surname wouldn’t go amiss either) but it’s getting late now and that will have to wait for another day! In the meantime, Geoff Jacobs’ post on creating a personal directory for the Linksys SPA942 using LDAP should provide some inspiration.
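
For what it’s worth, here’s a rough sketch of what that authenticated, surname-sorted version might look like (untested – the server name, container and credentials are placeholders):

<?php
// Rough sketch only: authenticated bind and sorting by surname (sn).
// The server name, container and credentials are placeholders - change to suit.

function compare_sn($a, $b) {
  $sa = isset($a["sn"][0]) ? $a["sn"][0] : "";
  $sb = isset($b["sn"][0]) ? $b["sn"][0] : "";
  return strcasecmp($sa, $sb);
}

$ds=ldap_connect("domaincontroller.domainname.tld");  // must be a valid LDAP server!

if ($ds) {
  // Active Directory expects LDAP v3 and usually needs referral chasing turned off
  ldap_set_option($ds, LDAP_OPT_PROTOCOL_VERSION, 3);
  ldap_set_option($ds, LDAP_OPT_REFERRALS, 0);

  // authenticated bind with a low-privilege account, instead of an anonymous bind
  $r=ldap_bind($ds, "serviceaccount@domainname.tld", "password");

  $sr=ldap_search($ds, "ou=directorycontainer,dc=domainname,dc=tld", "ipphone=*", array("displayname", "ipphone", "sn"));
  $info = ldap_get_entries($ds, $sr);

  // copy the numbered entries into a plain array and sort them by surname
  $entries = array();
  for ($i=0; $i<$info["count"]; $i++) {
    $entries[] = $info[$i];
  }
  usort($entries, "compare_sn");

  echo "<CiscoIPPhoneDirectory>\n";
  echo "<Title>IP Telephony Directory</Title>\n";
  echo "<Prompt>Active Directory Users</Prompt>\n";

  foreach ($entries as $entry) {
    echo "<DirectoryEntry>\n";
    echo "<Name>" . $entry["displayname"][0] . "</Name>\n";
    echo "<Telephone>" . $entry["ipphone"][0] . "</Telephone>\n";
    echo "</DirectoryEntry>\n";
  }

  echo "</CiscoIPPhoneDirectory>";
  ldap_close($ds);

} else {
  echo "error";
}
?>

The output sent to the phone is unchanged – it’s still the same CiscoIPPhoneDirectory XML, just sorted before it’s written out.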

Last, but by no means least, the directory_url directive needs to be edited to reflect the name of the PHP script instead of the original static XML, for example:

directory_url: "http://webserver/directory.php"

In order to pick up the changes, the phone will need a reset.

Now, when I access the external directory from the phone using the directory button and option 5, I’m presented with a list of contacts from Active Directory. Furthermore, because the web server uses dynamic content, the details are as current as the directory server that it refers to.