Technology actually saved me some time this morning!

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I’ll be spending the next couple of days at Fujitsu Laboratories’ European Technology Forum listening to presentations on some of the new and upcoming technological advances that could well be making their way into our daily lives (disclosure: I work for Fujitsu but that has no bearing on the contents of this blog post). I had considered driving to London (and leaving the car at my hotel) but, in the end, I decided to take the train. As the state of our railways is something that we Brits spend a lot of time complaining about (since long before privatisation), it’s nice to have something positive to write about my journey this morning.

A few years ago, catching the train meant a weekly trip to the bank to obtain sufficient coinage to feed the parking machines at the station (even when credit card payment was introduced, not all of the machines could accept cards), getting up stupidly early to make the 12-mile trip from the small town where I live to my nearest station, parking the car (if there were any spaces left) and joining a queue to buy a ticket. All of this meant that, in order to ensure I was able to catch a particular train, I needed to allow 45 minutes to an hour.

This time, I used the online booking system to buy my tickets in advance (although, because it was only a few hours in advance, there was no option to have a ticket mailed to me but I did have the collection reference sent over by SMS). Then, when I got to the station, I found a parking space and, instead of feeding money or a credit card into the machine, I used the pay by phone service to enter my location code, confirm my car registration, the length of my stay and my credit card details. I then collected my tickets from a machine and, around 30 minutes after leaving my house, was on the platform waiting for the 07:28 to arrive. Meanwhile, the parking system had e-mailed me a receipt ready for my expenses claim.

None of this is exactly rocket science, but it’s a step in the right direction and a huge timesaver. Looking to the future, there is no reason why the parking payment system couldn’t identify my location using GPS (as more and more mobile phone handsets become GPS enabled) and why I couldn’t print my own rail tickets from home (albeit without the magnetic stripe to operate the barriers at the station – although these could use an alternative access control technology).

Now, the fact that I had to pay £13 for parking my car on top of my £40 travel (when there was no public transport alternative to get to and from the station)… that’s an entirely separate issue about which I’m less happy…

Match your Java installation to your browser…

I run 64-bit Windows 7 at work so, when installing the Sun Java Runtime Environment (JRE) in order to access some of my corporate applications, naturally I installed a 64-bit version of the JRE.

Application 1 ran OK, but application 2 (which is a usability nightmare at the best of times) refused to load.  Then, Dave Saxon was trying to access the same application (also from 64-bit Windows 7) and he realised what I had totally missed: I may be running 64-bit Windows but the default instance of Internet Explorer is 32-bit.  Sure enough, I ran a 64-bit version of Internet Explorer, accessed the application and it worked.

I haven’t tested whether a 32-bit JRE installation works with a 32-bit instance of Internet Explorer on 64-bit Windows, but the key lesson here is to launch the browser architecture that matches the installed JRE version.
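The lesson boils down to a simple architecture check: the Java browser plugin only loads when the browser process and the JRE are the same bitness. The Python sketch below is purely illustrative – the banner string is an example of the sort of output `java -version` prints (64-bit Sun JREs include “64-Bit” in the VM line), not captured from a real system:

```python
def jre_arch(java_version_output: str) -> str:
    """Infer JRE architecture from `java -version` output.
    Sun's 64-bit JREs print '64-Bit' in the VM banner line."""
    return '64-bit' if '64-Bit' in java_version_output else '32-bit'

def plugin_will_load(browser_arch: str, java_version_output: str) -> bool:
    """The Java plugin loads only when the browser process architecture
    matches the installed JRE architecture."""
    return browser_arch == jre_arch(java_version_output)

# Example banner from a 64-bit JRE (illustrative text, not a real capture)
banner_64 = 'Java HotSpot(TM) 64-Bit Server VM (build 14.2-b01, mixed mode)'
```

So a 64-bit JRE with the default (32-bit) Internet Explorer fails the check, which is exactly the symptom described above.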

A few things for digital photographers to consider before upgrading a Mac to Snow Leopard

If you use a Mac, the chances are that you’ve heard about a new release of the Mac operating system – OS X 10.6 “Snow Leopard”.  I haven’t bought a copy yet, largely because I can’t really see any “must-have” features (increased security and improved performance are not enough – even at a low price), but mainly because I use my Macs for digital media work – primarily my digital photography workflow on the MacBook – and upgrading to a new operating system brings with it the risk that applications will fail to work (I already have problems with NikonScan on Mac OS X 10.5 and 10.6 is likely to introduce some more issues).

If you are, like me, primarily using your Mac for digital photography then there are a few things that it might be useful to know before upgrading to Snow Leopard:

I’m sure that I will move to Snow Leopard in time; however, these notes may well be useful if you’re a photographer first and foremost and the whole idea of using a Mac was simplicity.  Don’t be fooled by the glossy cover – Snow Leopard may bite you – and, like all operating system upgrades, it needs to be handled with care.

Live Meeting audio control codes

“Webcasts”, “WebExes”, “Webinars” and “Live Meetings” are all horrible names for meetings conducted across the ‘net.  Admittedly these services cut down on travel (good for the environment and for the soul) and are better than voice conferences (yawn) but, sometimes, a face-to-face meeting is much more valuable.  Regardless of my personal views on these events, they seem to feature heavily in modern office life (I’ll be attending a few this week…) and I thought it might be useful to publish a few command codes for the phone conference service that is one of three methods for Live Meeting users to connect to the audio stream:

  • Chairperson codes:
    • #1 Participant roll call
    • #2 Participant count 
    • *2 Stop audio playback
    • *5 Mute/unmute participant lines
    • *7 Lock/unlock conference
    • *8 Record start/stop
    • ## End conference
  • Chairperson and participant codes:
    • *0 Operator assistance
    • #0 Conference help menu
    • *6 Mute/unmute own line 

These codes are from my BT MeetMe account; however I’m pretty sure that they are valid for the audio portion of any service based on Microsoft Office Live Meeting (the mute code certainly seems to work when I’m on Microsoft-hosted Live Meetings).
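If you wanted to build yourself a quick cheat-sheet lookup, the codes above map naturally onto a couple of dictionaries. A small Python sketch (the function and structure are mine for illustration – there is no Live Meeting API involved, this is just the list above as data):

```python
# DTMF control codes for the Live Meeting / BT MeetMe audio bridge,
# transcribed from the lists above
CHAIRPERSON_CODES = {
    '#1': 'Participant roll call',
    '#2': 'Participant count',
    '*2': 'Stop audio playback',
    '*5': 'Mute/unmute participant lines',
    '*7': 'Lock/unlock conference',
    '*8': 'Record start/stop',
    '##': 'End conference',
}
SHARED_CODES = {
    '*0': 'Operator assistance',
    '#0': 'Conference help menu',
    '*6': 'Mute/unmute own line',
}

def describe(code: str, chairperson: bool = False) -> str:
    """Look up a code; chairperson-only codes require chairperson=True."""
    if code in SHARED_CODES:
        return SHARED_CODES[code]
    if chairperson and code in CHAIRPERSON_CODES:
        return CHAIRPERSON_CODES[code]
    return 'Unknown or not permitted'
```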

Moving to Linux from Solaris?

Oracle’s acquisition of Sun Microsystems has probably caused a few concerns for Sun customers and, today, in a message to Sun customers, Oracle reaffirmed their commitment to Sun’s Solaris operating system and SPARC-based hardware, with a statement from Oracle CEO, Larry Ellison, telling customers they plan to invest in the Solaris and SPARC platforms, including tighter integration with Oracle software.

Nevertheless, as one of my colleagues pointed out to me today, it costs billions to create new microprocessors and more and more customers are starting to consider running Unix workloads on industry standard x64 servers. With significant differences between the x64 and SPARC versions of Solaris, it’s not surprising that, faced with an uncertain future for Sun, organisations may consider Linux as an alternative and, a few weeks back, I attended a webcast given by Red Hat and HP that looked at some of the issues.

I’m no Linux or Unix expert but I made a few notes during that webcast, and thought they might be useful to others so I’m sharing them here:

  • Reasons to migrate to Linux (from Unix):
    • Tactical business drivers: reduce support costs (on end of life server hardware); increase capability with upgraded enterprise applications; improve performance and reduce cost by porting custom applications.
    • Strategic business drivers: simplify and consolidate; accelerate service development; reduce infrastructure costs.
  • HP has some tools to assist with the transition, including:
    • Solaris to Linux software transition kit (STK) – although aimed at migrating to HP-UX, this joint presentation from HP and Red Hat suggested using it to plan and estimate the effort involved in migrating to Linux; it includes tools that can scan C and C++ source code, shell scripts and makefiles for transition impacts.
    • Solaris to Linux porting kit (SLPK) – which includes compiler tools, libraries, header files and source scanners to recompile Solaris applications for either Red Hat or SUSE Linux running on HP ProLiant servers.
  • The top 5 issues likely to affect a transition are:
    1. Compiler issues – differing development environments.
    2. ANSI C/C++ compliance – depending on the conformance to standards of the source code and compiler, there may be interface changes, namespace changes, library differences, and warnings may become errors.
    3. Endianness – SPARC uses a big-endian system, Linux is little-endian.  This is most likely to affect data exchange/communications between platforms with access to shared memory and to binary data structures in files.
    4. Differences in commands, system calls and tools – whether they are user level commands (e.g. a Solaris ping will return a message that a host is alive whereas a Linux ping will continue until interrupted), systems management commands, system API calls, software management (for package management) or operating system layered products (e.g. performance management, high availability or systems management).
    5. ISV product migration with issues around the availability of Linux versions of software; upgrades; and data migration.
  • When planning a migration, the strategic activities are:
    • Solaris vs. Linux ecosystem analysis.
    • Functional application analysis.
    • Organisational readiness and risk analysis.
    • Strategic migration roadmap creation.
    • Migration implementation.
  • Because of the differences between operating systems, it may be that some built-in functions need to be replaced by infrastructure applications (or vice versa). Indeed, there are four main scenarios to consider:
    • Solaris built-in function to Linux built-in function (and although many functions may map directly others, e.g. virtualisation approaches, may differ).
    • Solaris infrastructure application to Linux built-in function.
    • Solaris infrastructure application to Linux infrastructure application.
    • Solaris built-in function to Linux infrastructure application.
  • Finally, when it comes to deployment there are also a number of scenarios to consider:
    • Consolidation: Many Sun servers (e.g. Enterprise 420R) to fewer industry standard servers (e.g. HP ProLiant).
    • Aggregation: Sun Fire V890/V490 servers to Itanium-based servers (e.g. HP Integrity).
    • Dispersion: Sun Fire E25K server(s) to several industry standard servers (e.g. HP ProLiant).
    • Cloud migration: Sun servers to Linux-based cloud offerings, such as those offered by Amazon, RackSpace, etc.
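The endianness issue (point 3 above) is easy to demonstrate: the same 32-bit value has its bytes stored in opposite orders on SPARC and on x86/x64, which is why binary data files and shared memory structures need attention during a migration. A quick illustration using Python’s standard struct module:

```python
import struct

value = 0x01020304

# '>' packs big-endian (SPARC byte order); '<' packs little-endian (x86/x64)
big = struct.pack('>I', value)
little = struct.pack('<I', value)

assert big == b'\x01\x02\x03\x04'
assert little == b'\x04\x03\x02\x01'

# Reading a big-endian value written on SPARC from a little-endian Linux box
# simply means being explicit about byte order when unpacking:
(recovered,) = struct.unpack('>I', big)
assert recovered == value
```

Code that reads raw structures without declaring a byte order will silently see scrambled values after the move, which is far harder to track down than a compile error.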

Many of these notes would be equally applicable when migrating between Unix variants – and at least there are tools available to assist. But, now I come to think of it, I guess the same approach can be applied to migrating from Unix/Linux to Windows Server… oh, look out, is that flaming torches and pitchforks I see being brandished in my direction?

Exporting images from Lightroom to Flickr

Uploading a set of photos to Flickr to share with family and friends prompted me to finish a blog post I’ve been meaning to write for some months now – on the topic of exporting images from Lightroom to Flickr.

Since my friend Jeremy Hicks showed me Adobe Lightroom back in the spring, I’ve become hooked on the ease with which I can import, tag and post-process my images (with only a few minor annoyances around the way that images are handled when I take them into Photoshop for any advanced editing – thankfully most never need to go that far). But Lightroom is only half of the story – what about those images that I need to put on the web to share with others?

I use Flickr for this and I would like to export images directly from Lightroom to Flickr. Thanks to Lightroom’s extensible architecture, Jeffrey Friedl has written an Export to Flickr plugin (found via Adobe) and it does a good job, but there is another way too. Thomas Bouve describes how, by creating an alias/shortcut to the Flickr Uploadr application in the appropriate folder (Export Actions), you can select Flickr Uploadr as the post-processing editor, allowing the images to be uploaded immediately after export.

Enabling SNMP on my ADSL router

I’ve been playing around with some network monitoring and management tools on my home network and so have been busily enabling Simple Network Management Protocol (SNMP) on a number of my devices, including my elderly Solwise SAR110 ADSL modem/router; however the router’s web interface doesn’t seem to have the ability to configure the SNMP agent.

I asked how to do this on the Solwise forums and the response was to use the command line. Sure enough, I located the Solwise SAR110 Advanced Reference Guide, telnetted to the router’s internal interface, logged on, and issued the following commands:

create snmp comm community public ro

(to create a community called public with read only access.)

create snmp host community public ip ipaddress

(to allow a specified IP address to interrogate the device using the public community.)

Issuing get snmp host confirmed that the settings were correct.

Traps to inform the SNMP manager of any events were already enabled by default (confirmed using get snmp trap); the command to change this would have been modify snmp trap enable (or modify snmp trap disable to disable traps).

In order to test the configuration, I ran Noël Danjou’s SNMPTest utility. This confirmed that my router was accessible via SNMP; although I’m not sure if the trap functionality is working as it should be… I certainly didn’t see any evidence of the “System up” trap being sent after resetting the router.
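As an alternative to a dedicated utility, a basic SNMPv1 GET can be hand-rolled with nothing but the Python standard library, which also shows just how little there is to the protocol on the wire. This is a minimal sketch (short-form BER lengths only, request IDs below 128, no response decoding; the router address is an example):

```python
def tlv(tag: int, value: bytes) -> bytes:
    """BER tag-length-value; short-form length only (value < 128 bytes)."""
    return bytes([tag, len(value)]) + value

def encode_oid(oid: str) -> bytes:
    """BER-encode a dotted OID (first two arcs packed into a single byte)."""
    arcs = [int(a) for a in oid.split('.')]
    body = bytes([arcs[0] * 40 + arcs[1]])
    for arc in arcs[2:]:
        # base-128 encoding, high bit set on all but the last byte
        chunk = [arc & 0x7F]
        arc >>= 7
        while arc:
            chunk.append((arc & 0x7F) | 0x80)
            arc >>= 7
        body += bytes(reversed(chunk))
    return tlv(0x06, body)

def snmp_get(community: str, oid: str, request_id: int = 1) -> bytes:
    """Build an SNMPv1 GetRequest packet (request_id must be < 128 here)."""
    varbind = tlv(0x30, encode_oid(oid) + tlv(0x05, b''))   # OID + NULL value
    pdu = tlv(0xA0,                                         # GetRequest-PDU
              tlv(0x02, bytes([request_id])) +              # request-id
              tlv(0x02, b'\x00') +                          # error-status: 0
              tlv(0x02, b'\x00') +                          # error-index: 0
              tlv(0x30, varbind))                           # varbind list
    return tlv(0x30,                                        # SNMP message
               tlv(0x02, b'\x00') +                         # version 0 = SNMPv1
               tlv(0x04, community.encode('ascii')) + pdu)

packet = snmp_get('public', '1.3.6.1.2.1.1.3.0')  # sysUpTime.0

# To actually query the router (address is an example), send over UDP port 161:
# import socket
# s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# s.settimeout(2)
# s.sendto(packet, ('192.168.1.1', 161))
# response = s.recv(1500)
```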

Finally, once I was sure that everything was working as expected, I issued the commit command to save the changes (and re-ran the tests to see if that was why the traps hadn’t worked).

It’s not very likely that anyone reading this blog is using such an ancient device; however the general principle holds true for many consumer devices. If the web interface doesn’t let you do what you want, see if there is command line access, typically via telnet or ssh.

Which service pack level is Windows Server 2008 R2 at?

Those that remember Windows Server 2003 R2 may recall that it shipped on two disks: the first contained Windows Server 2003 with SP1 integrated; and the second contained the R2 features. When Windows Server 2003 SP2 shipped, it was equally applicable to Windows Server 2003 and to Windows Server 2003 R2. Simple.

Windows Server 2008 shipped with service pack 1 included, in line with its client operating system sibling – Windows Vista. When service pack 2 was released, it applied to both Windows Server 2008 and Windows Vista. Still with me?

Today, one of my colleagues asked me what service pack level Windows Server 2008 R2 sits at – SP1, SP2, or both (i.e. multiple versions)? The answer is neither. Unlike Windows Server 2003 R2, which was loosely linked to Windows XP, Windows Server 2008 R2 and Windows 7 are very closely related. Windows Server 2008 R2 doesn’t actually display a service pack level in its system properties and I would expect the first service pack for Windows 7 to be equally applicable to Windows Server 2008 R2 (although I haven’t seen any information from Microsoft to confirm this).

What’s not clear is whether the first service pack for Windows Server 2008 R2 and Windows 7 will also be service pack 3 for Windows Server 2008 and Vista. I suspect not, and would expect Windows Server 2008 and Windows Server 2008 R2 to take divergent paths from a service pack perspective. Indeed, it could be argued that service packs are less relevant in these days of continuous updates. At the time of writing, the Windows service pack roadmap simply says that the “Next Update and Estimated Date of Availability” for Windows Server 2008 is “To be determined” and there is no mention of Windows 7 or Windows Server 2008 R2.

So, three consecutive operating system releases with three different combinations of release naming and service pack application… not surprisingly resulting in a lot of confused people. For more information on the mess that is Microsoft’s approach to major releases, update releases, service packs and feature packs, check out the Windows Server product roadmap.

Installing the Ancestry.co.uk Enhanced Image Viewer application on Windows 7 (x64)

A couple of weeks back, I started to investigate my family tree. Spurred on by a combination of recent personal events, I switched the half-hearted attempt that I’d made at genesreunited.co.uk over to ancestry.co.uk and the 14-day trial was enough to convince me that it was a good tool for researching my family history.

Transferring my tree was easy enough – there is a de facto file format used by genealogists called GEDCOM and both sites support it – but as I got stuck into researching the tree I found that I was having difficulty installing the Enhanced Image Viewer ActiveX control that Ancestry uses to display certain documents. To be fair to Ancestry, I run Windows 7 (not yet generally available) – but they only officially support IE7 (even though IE8 has been around for a while now) and push people towards Firefox if they are having problems. Firefox is OK, but installing a new browser just to access one feature on a website is a little extreme. I was sure there was a way… and eventually (with Ancestry’s help), I got there.

My problem was that (using 32-bit Internet Explorer) I could access a page that wanted to load the Enhanced Image Viewer and I could download and run the installer; however setup failed stating that:

Setup failed – contact customer support

Windows then detected a problem with the installation but, following advice on the Ancestry website, I told it that the installation was successful and it allowed me to continue. After returning to Ancestry, I was presented with a message stating that:

The Enhanced Image Viewer is not installed on this machine. For the best experience, please click here to download the Enhanced Image Viewer or click here to view this image using the Basic Image Viewer.

The Basic Image Viewer seems to work OK but the very existence of an “enhanced” viewer suggests that there is something there that I’m missing (and this is a subscription website after all)!

So, here’s what I tried that didn’t work:

  1. Enabling the ActiveX control using Internet Explorer to manage add-ins (it wasn’t there to enable).
  2. Manually downloading and installing the Enhanced Image viewer (failed to register).
  3. Manually uninstalling the Enhanced Viewer (it was not there as it never successfully installed).

In the end, I broke all good security practices by logging on as administrator (instead of running the installer as an administrator), and turning off UAC, after which the viewer installed as it should. Clearly this application was very badly developed (it seems not to follow any modern application development standards) but at least I got it installed!

One final word of warning – and this one is non-technical – researching your family tree can quickly become addictive (my wife refers to it as my latest “time Hoover”).

Connecting multiple ReadyNAS devices to a single UPS

It seems to be ReadyNAS week at markwilson.it because that’s what I’ve spent the last couple of days working with. The ReadyNAS really is a stonking piece of hardware (think of it as a £150 Linux box with built-in X-RAID) and mine will soon be providing the storage for a Windows Home Server VM (yes, I know the ReadyNAS can do loads of the things that WHS can, but I work with Microsoft products and it’s about time I had a serious look at WHS).

Anyway, my ReadyNASes are running off an APC Smart-UPS 1500 but only one of them has the USB connection to monitor the UPS status. It turns out that’s not a problem as the latest versions of the ReadyNAS software (RAIDiator) allow one ReadyNAS to act as a UPS status server for the others.

I think this needs at least v4.1.5 of RAIDiator (my ReadyNAS “UPS client” shipped with v4.1.4 and I updated it to v4.1.6; meanwhile, the “UPS server” is running v4.1.5), but there is an option on the Power page in FrontView (the ReadyNAS web administration console) to define hosts that are allowed to monitor the attached UPS (where a physical connection to the UPS exists).

ReadyNAS UPS server

Similarly, on a ReadyNAS that is not physically connected to a UPS, it is possible to specify the IP address of a ReadyNAS that is connected to the UPS.

ReadyNAS UPS client

With these settings enabled, both ReadyNAS devices can shut down cleanly in the event of a power failure.

I wonder if my Windows Server 2008 host can also monitor the ReadyNAS and shut itself down in the event of power loss too…
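In principle, yes – if the ReadyNAS shares UPS status using a standard protocol such as Network UPS Tools (NUT, which listens on TCP port 3493), any NUT-aware client could poll it. I haven’t verified that RAIDiator’s implementation is NUT-based, so treat this Python sketch as an assumption; the hostname and UPS name are placeholders:

```python
import socket

def parse_var_response(line: str):
    """Parse a NUT 'GET VAR' reply, e.g.:  VAR ups ups.status "OL"
    Returns (ups_name, variable, value)."""
    parts = line.strip().split(' ', 3)
    if len(parts) != 4 or parts[0] != 'VAR':
        raise ValueError('unexpected NUT response: %r' % line)
    return parts[1], parts[2], parts[3].strip('"')

def query_ups_status(host: str, ups: str = 'ups') -> str:
    """Ask a NUT server for the UPS status ('OL' = on line, 'OB' = on battery)."""
    with socket.create_connection((host, 3493), timeout=5) as s:
        s.sendall(('GET VAR %s ups.status\n' % ups).encode('ascii'))
        reply = s.makefile().readline()
    return parse_var_response(reply)[2]

# Example usage (placeholder hostname for the ReadyNAS "UPS server"):
# if query_ups_status('readynas-ups-server') == 'OB':
#     ...initiate a clean shutdown of the Windows host...
```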