Utility to discover detailed information about PC hardware

I needed to know a little bit of very technical information about the hardware in one of my PCs today (whether or not the processor supported the SSE2 instruction set) and in the process I found a great utility that details just about everything I could ever need to know about a PC’s CPU, cache, mainboard, and memory – the utility is called CPU-Z, it comes from CPUID, and it is freeware. It’s also worth checking out some of the other utilities on the CPUID website.
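As it happens, if the machine in question runs Linux, the kernel already exposes the CPU’s feature flags, so the SSE2 question can be answered without any extra software – a quick sketch (the flag appears in lowercase in /proc/cpuinfo):

```shell
# Look for the sse2 flag in the kernel's CPU feature list (Linux only)
grep -qw sse2 /proc/cpuinfo && echo "SSE2 supported" || echo "SSE2 not supported"
```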

Trouble with torrents

A few weeks back, I wrote a post about my initial experiences of using a BitTorrent client for peer-to-peer file sharing, which, contrary to popular belief, is not used exclusively for illegal content distribution. The trouble is that I’ve found it difficult to get going with BitTorrent as a content distribution mechanism. My download speeds were very slow and certain clients seemed to prevent any of my computers from using my Internet connection. I tried a few clients (BitTorrent, Azureus and Xtorrent – all on the Mac, although BitTorrent and Azureus are available for other platforms too) and it looks as though I finally have a configuration that works.

My ISP uses traffic shaping to reduce the impact of P2P traffic but does not aggressively prohibit it; having said that, I moved away from the standard ports and picked something high up (above 10000) to avoid any issues.
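Having moved to a non-standard port, it’s worth a quick check that something is actually listening there before blaming the ISP; netcat will do the job (16881 is just an example – substitute whichever port the client is configured to use):

```shell
# Test whether anything is listening on TCP port 16881 on this machine
nc -z -w 2 localhost 16881 && echo open || echo closed
```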

Using BitTorrent, there was an element of doubt as to whether I had a working connection – peers seemed to be disconnecting me as though I was leeching (clearly I have to download something before I can make it available to upload) and I do use a NAT router (which can complicate things). One of the best features in the Azureus client is the traffic light system that it uses for various elements including share ratio and NAT traversal – I had a green light for NAT, but the share ratio was always red/amber so I struggled to get a decent download speed.

I heard on the MacBreak Weekly review of the Apple TV that Xtorrent is a very simple-to-use BitTorrent client for OS X and, after trying it out, I’m blown away by how simple it has been in comparison to the other clients that I tried. I’ve been trying to download a 1.3GB file for days now using Azureus and, because of the problem whereby it seems to hog my Internet connection (despite following advice for good connection parameters), I could only run it when I didn’t need to use the network (i.e. at night). Even so, after two nights, I had only managed to retrieve a measly 0.1% of the file. I’ve been using Xtorrent for a few hours now, have most of the file downloaded and have seen transfer speeds as high as 300KB/s (2.4Mbps), but there is a catch – after an hour, downloads are capped at 10KB/s (although the warning says 10kbps – i.e. 8 times slower) and search results are randomly disabled. I’ve also noticed (but cannot confirm) that connection speeds drop significantly if Xtorrent is not the active application.

The search is not such a big deal (all it seems to do is append “torrent” to the supplied query and submit that search to Google and Yahoo! using an embedded browser window) but the “all sales are final” notice at the bottom of the web page and the aggressive registration message when the application is running, combined with the lack of any mention of the free version’s limitations on the website, do not exactly fill me with confidence about the product (the website encourages download of the $20 pro edition but does not mention that the standard version is limited – in reality there is a free, limited, trial and the full, registered, product is $20 with no “pro” features; it’s just an unlimited version). Xtorrent may well have been “hand-crafted with care” but it’s only just out of beta and there seem to be so many issues to consider with BitTorrent clients that I want to be 100% certain this is the right client for me before parting with cash.

As it happens, I probably will buy Xtorrent, as it seems incredibly simple to use, had no problem using the port I’d previously opened (and verified using Shields UP!) for Azureus, seems to work well, includes RSS integration, allows browsing of file content prior to download and lets me set bandwidth limits according to the time of day and how busy my computer is. All in all, the user experience is excellent – I just wish that the website was a little more upfront about the limitations of the free download.

Configuring wireless Ethernet with Fedora Core 5

Last year, I wrote a post about my efforts in configuring wireless Ethernet with SuSE Linux 10.0. I couldn’t maintain a connection (at the time I was using an IBM ThinkPad T40, a D-Link DWL-G630 PCMCIA card and a D-Link DWL-2000AP+ access point) but this week I decided to give it all another try, this time on a Fujitsu Siemens Lifebook S7010D, which was already running Fedora Core 5 and has a built-in Intel Centrino chipset (hence it should be more widely supported by Linux than the D-Link card was – avoiding the need to use NDISwrapper).

The good news is that the Lifebook’s wireless chipset does have Linux support in the form of native drivers. The bad news is that it’s still not as easy as it should be to get this working! Having said that, I went down so many blind alleys that I’m not really sure what I did in the end to get the drivers installed. Hopefully the jumble of notes below will provide one or two pointers for someone else.

Identifying the hardware

First of all, I needed to know what type of wireless hardware I had and a spot of googling quickly turned up Jean Tourrilhes’ Linux Wireless LAN Howto, which contains links to many resources and actually gave me the answer to my question – there are three main Intel PRO/Wireless chipsets: the 2100 is an IEEE 802.11b card, the 2200 adds IEEE 802.11g support and the 2915 adds IEEE 802.11a support. The later cards also add support for increased security (WPA, etc.). I already knew that the card in my notebook supported 802.11g (pointing to an Intel PRO/Wireless 2200) but confirmed this with the lspci command, which returned (in part):

01:0d.0 Network controller: Intel Corporation PRO/Wireless 2200BG (rev 05)

Downloading and installing drivers

Armed with information about my computer hardware, Jean’s howto set me off in the direction of two open source projects – the IEEE 802.11 subsystem for Linux and the Intel PRO/Wireless 2200BG driver for Linux project. One slight problem for me is that the drivers on these two sites need to be compiled… and I’m a sort of namby-pamby-need-to-have-it-already-built-for-me Linux user (sorry, but I am). Time to hit my search engine of choice again, this time turning up tutorials for installing Fedora Core 5 on a Dell Latitude D610 (it seems I’m not alone in not being “a ‘compile from source’ guy”) and installing Fedora Core N on a Dell Latitude D600 (including Intel PRO/Wireless 2200BG support), as well as a comment on a Fedora Core 5 Tips and Tricks page that suggested the following process for installing the earlier Intel PRO/Wireless 2100 (for which the process should be similar except for the actual driver files):

Some have asked for step by steps to install the drivers for the Intel Centrino Pro Wireless 2100 chip set. Here is an easy way to get up and running and have a nice GUI in GNOME. This assumes you already have NetworkManager installed from the base Fedora Core 5 repository.

yum install NetworkManager NetworkManager-gnome

Update your system to the latest kernel. Be careful as this can break other kernel modules you have installed, so be sure you have the source/RPMS handy for any packages that may need to be recompiled/reinstalled.

yum update kernel

Add the ATrpms (atrpms.net) repository to yum.

wget http://ATrpms.net/RPM-GPG-KEY.atrpms
rpm --import RPM-GPG-KEY.atrpms

Next, install the drivers using yum. There are several dependacies [sic] that it will install as well.

yum install ipw2100

Next, enable NetworkManager to start on boot up.

chkconfig NetworkManager on
chkconfig NetworkManagerDispatcher on

Reboot your machine so the new kernel module is loaded.

init 6

Once you boot up and login to GNOME, you should see a new icon by the clock. This is very similar to the wireless manager one can find in a very popular commercial OS. (Names omitted to protect the innocent.)

This was all very well, until I got to the point of installing Intel PRO/Wireless 2200 drivers (using yum install ipw2200 in place of the yum install ipw2100 command in the quote above), which just flatly refused to find anything appropriate.

To further complicate things, in the process, I’d updated to the latest i686 kernel (2.6.20-1.2300.fc5 in place of 2.6.17-1.2157_FC5) and I could only find RPMs at ATrpms for:

  • IEEE802.11 networking stack and kernel modules.
  • Intel PRO/Wireless 2200 firmware.
  • Intel PRO/Wireless 2200 driver.

but crucially, not the kernel modules for the Intel PRO/Wireless 2200 (I’ve since found them listed in a section for RPMs that are still being tested) and rpm -Uvh ipw2200-1.2.0-45.1.fc5.at.i386.rpm returned:

error: Failed dependencies:
ipw2200-kmdl-1.2.0-45.1.fc5.at is needed by ipw2200-1.2.0-45.1.fc5.at.i386

Time to roll up my sleeves and compile some drivers… a task which I approached with some trepidation but with a lot of help from a LinuxQuestions.org thread about getting ipw2200 working with Fedora Core 4.

After downloading and extracting IEEE802.11 (ieee80211) v1.2.16 and Intel PRO/Wireless 2200 (ipw2200) v1.2.0 (with firmware v3.0), I ran ./remove-old twice – once from the ieee80211-1.2.16 directory and again from ipw2200-1.2.0 (I had to run chmod +x remove-old first for ieee80211). Then, I ran make and make install for ieee80211 and again for ipw2200, although this produced a lot of errors and I’m not sure that it was successful. Only once I’d done that did I find that Fedora Core 5 includes ipw2200 v1.0.8 and all that was required was to install the firmware (yum install ipw2200-firmware), which I had done earlier with rpm -Uvh ipw2200-firmware-3.0-9.at.noarch.rpm.
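For anyone attempting the same thing, the sequence I followed boils down to something like this (an outline rather than a tested script – it assumes the two tarballs have already been downloaded, that the headers for the running kernel are installed, and that it is run as root):

```
tar xzf ieee80211-1.2.16.tgz
cd ieee80211-1.2.16
chmod +x remove-old && ./remove-old   # clear out any older ieee80211 modules
make && make install
cd ../ipw2200-1.2.0
./remove-old
make && make install
```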

Not knowing what sort of state my system was in, I rebooted and hoped for the best. Fortunately, this mixture of installation methods had resulted in a working wireless network stack, as shown by the output from dmesg (only the relevant output is shown here):

ieee80211_crypt: registered algorithm 'NULL'
ieee80211: 802.11 data/management/control stack, 1.2.16
ieee80211: Copyright (C) 2004-2005 Intel Corporation <jketreno@linux.intel.com>
ipw2200: Intel(R) PRO/Wireless 2200/2915 Network Driver, 1.2.0kmprq
ipw2200: Copyright(c) 2003-2006 Intel Corporation
ipw2200: Detected Intel PRO/Wireless 2200BG Network Connection

iwconfig eth1 showed that I had even connected to a network (completely by accident), although it was not mine (G604T_WIRELESS and BELKIN54G are popular free wifi providers in the town where I live)!

Warning: Driver for device eth1 has been compiled with version 21
of Wireless Extension, while this program supports up to version 19.
Some things may be broken…

eth1 IEEE 802.11g ESSID:"G604T_WIRELESS"
Mode:Managed Frequency:2.437 GHz Access Point: 00:0F:3D:BA:1F:B2
Bit Rate:54 Mb/s Tx-Power=20 dBm Sensitivity=8/0
Retry limit:7 RTS thr:off Fragment thr:off
Encryption key:off
Power Management:off
Link Quality=55/100 Signal level=-68 dBm Noise level=-88 dBm
Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
Tx excessive retries:0 Invalid misc:14 Missed beacon:15

Connecting to my (secured network)

Once again, I found a guide on the ‘net (Durham University’s wireless Linux quick guide) which helped me enormously with configuring a connection to my (WPA) secured network. For some bizarre reason, NetworkManager (which should provide a GUI interface to connect to whatever networks are detected) refused to connect; however I managed to maintain a stable connection by configuring the wpa_supplicant configuration file (/etc/wpa_supplicant/wpa_supplicant.conf) to read:
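In essence, this is a standard WPA-PSK network block – something like the following, where the psk value is a placeholder (wpa_supplicant ships with a wpa_passphrase tool that generates a hashed psk so the passphrase need not be stored in clear text):

```
ctrl_interface=/var/run/wpa_supplicant
network={
        ssid="Home"
        key_mgmt=WPA-PSK
        psk="passphrase-goes-here"
}
```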



Then, I ran wpa_supplicant -Dwext -ieth1 -c/etc/wpa_supplicant/wpa_supplicant.conf to connect to the network (after I’d resolved some issues in the configuration file – diagnosed using the -dd option for wpa_supplicant – and discovered that the SSID is case-sensitive).

After issuing the dhclient eth1 command to obtain an IP address (and verifying that one had indeed been obtained using ifconfig eth1), iwconfig eth1 returned:

Warning: Driver for device eth1 has been compiled with version 21
of Wireless Extension, while this program supports up to version 19.
Some things may be broken…

eth1 IEEE 802.11g ESSID:"Home"
Mode:Managed Frequency:2.437 GHz Access Point: 00:13:46:
Bit Rate:54 Mb/s Tx-Power=20 dBm Sensitivity=8/0
Retry limit:7 RTS thr:off Fragment thr:off
Encryption key:
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx Security mode:open
Power Management:off
Link Quality=100/100 Signal level=-19 dBm Noise level=-88 dBm
Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
Tx excessive retries:0 Invalid misc:0 Missed beacon:1

Start the wireless interface at boot time

In order to make eth1 active at boot time, it was necessary to run system-config-network and add the device to the common profile. At first I followed Bill Moss’ Fedora Core 2 network profiles article but then decided that it would be better to maintain a single profile, with both wired (eth0) and wireless (eth1) interfaces activated when the computer starts.

In order to start wpa_supplicant at boot time it was necessary to add the following commands to /etc/rc.local:

/sbin/ifdown eth1
/usr/sbin/wpa_supplicant -Dwext -ieth1 -c/etc/wpa_supplicant/wpa_supplicant.conf -Bw
sleep 5
/sbin/dhclient eth1

The main drawback with this approach is that the wireless radio is permanently active. Ideally, NetworkManager could be used with wpa_supplicant; however, for now the workaround is to use the Lifebook’s radio on/off switch.

Miscellaneous notes

One guide that I found suggested that the following commands were necessary in order to enable the wireless connection:

depmod -a
modprobe ieee80211
modprobe ipw2200

In practice, I haven’t found this to be necessary but this could be because Fedora Core 5 already included the appropriate configuration items by default.

iwlist, wpa_cli and wpa_gui are useful commands for examining connection properties. Other useful commands when troubleshooting are lsmod | grep ieee80211 and lsmod | grep ipw2200.

Before the operating system would route packets across the wireless connection, I found it necessary to take down the wired connection (ifdown eth0).
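The default route decides which interface carries traffic, so it’s worth inspecting the routing table when packets refuse to flow over the wireless link:

```shell
# Show the kernel routing table; check which device the default route points at
ip route
```

(On older distributions, route -n gives the same information.)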

Problems installing Windows Server 2003 SP2 using automatic updates

A few days back, I noticed that I was unable to access websites as I had lost connectivity to my DNS server. When I checked the console, I found that my server had failed to install an update… Windows Server 2003 SP2 as it happens, which Windows Server Update Services (WSUS) had pushed out via the automatic updates mechanism. I restarted the machine and waited as it restored the prior configuration, running the software update removal wizard (at least it recovered cleanly), then waited as WSUS tried to push SP2 again. As the repeat update ran interactively, I found the problem – for some reason, towards the end of the installation, the following message was displayed:

Administrative Privileges Required

You must be an administrator of this computer to run this utility.

Log off, and then log on using an administrator account.

(even though I was logged on as the domain Administrator). After clicking OK, the installation continued before reporting:

Automatic Updates

You have successfully updated your computer.

You must restart your computer for the updates to take effect.

I’ve not seen any issues since the restart and the service pack seems to have applied successfully but checking the system event log reveals two entries describing the original problem:

Event Type: Error
Event Source: Windows Update Agent
Event Category: Installation
Event ID: 20
Date: 16/03/2007
Time: 06:00:19
User: N/A
Installation Failure: Windows failed to install the following update with error 0x80242007: Windows Server 2003 Service Pack 2 (32-bit x86).

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
0000: 57 69 6e 33 32 48 52 65 Win32HRe
0008: 73 75 6c 74 3d 30 78 38 sult=0x8
0010: 30 32 34 32 30 30 37 20 0242007
0018: 55 70 64 61 74 65 49 44 UpdateID
0020: 3d 7b 39 34 35 32 35 30 ={945250
0028: 33 43 2d 30 35 42 45 2d 3C-05BE-
0030: 34 41 36 34 2d 39 39 45 4A64-99E
0038: 44 2d 39 41 35 39 46 37 D-9A59F7
0040: 44 36 35 33 39 38 7d 20 D65398}
0048: 52 65 76 69 73 69 6f 6e Revision
0050: 4e 75 6d 62 65 72 3d 31 Number=1
0058: 30 34 20 00 04 .

Event Type: Error
Event Source: NtServicePack
Event Category: None
Event ID: 4374
Date: 17/03/2007
Time: 15:16:38
User: N/A
Windows Server 2003 installation failed, leaving Windows Server 2003 partially updated.
The installation of the Service Pack did not complete, and a rollback to the pre-installation state has been initiated. A rollback is a two-step process. Step one is complete; to complete step two, click OK. To be reminded at next login to complete step two, click Cancel. After you complete the rollback, your system will reboot and you may retry the installation of the Service Pack.

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

My other Windows servers seem to have updated without any problems; however this illustrates the risk of having WSUS automatically approve updates.

Showing hidden files in Mac OS X

I use hidden files (such as .htaccess) extensively on my website, so I needed to be sure that they were included with my local backup copy. Mac OS X doesn’t show hidden files by default (it all gets a bit messy otherwise – although they are visible in a Terminal shell); however I found a tip which details the commands to run in order to show hidden files in the Finder (this can be run using a standard user account):

defaults write com.apple.finder AppleShowAllFiles TRUE
killall Finder

To return to the default display, run:

defaults write com.apple.finder AppleShowAllFiles FALSE
killall Finder

I did find an application to display hidden files too but why bother if a couple of commands will do the trick? Even better, there is a workflow to show hidden files using Automator.

Woohoo! 8Mbps ADSL

[Screenshot from router configuration showing 7552Kbps ADSL connection]

PlusNet upgraded our ADSL connection to their “up to 8Mbps” service today. I’m told that it may take a few days to settle down (they advised us to reboot the router once a day for 10 days or so) but the initial connection was a whopping 7552Kbps (7.375Mbps). After the first reboot, this seems to have dropped slightly to a more typical 5.1Mbps but that’s still a great improvement over yesterday – and I live in the sticks, where a few years ago we had to campaign to get our telephone exchange upgraded for a 512Kbps ADSL connection! It seems that Moore’s law applies to telecoms too (sort of… our costs haven’t dropped but we have seen a 10-fold increase in connection speeds over a 4-year period).

Call centres, offshoring, poor customer service and how to save some money

I can understand why outsourcing and, in today’s global economy, off-shoring is so attractive to companies. I just wish that companies would think things through a bit further than the initial impressions of reduced costs.

Because I work for a company whose core business is IT managed service (but never referred to as outsourcing!), I’ll steer clear of my feelings on that subject – suffice to say that clearly there is a conflict of interest there. This post is purely about my experiences as a consumer of outsourced and off-shored services.

Firstly, although I have no direct experience of this, I’m led to believe that the cost savings from moving services overseas are generally not as great as they may initially appear. The workers may be paid less, and the cost of office space in India (for example) may be lower than in Western Europe but in call centre situations there are telecommunications costs to consider, as well as the inevitable integration of the offshore systems and processes with the rest of the business. I frequently have cause to speak to an IT service desk in South Africa and am rarely happy with the outcome. Sure, the staff are friendly and English is well spoken but the telephone line quality is inevitably poor. I don’t know if it’s routed via IP or over low grade international lines but either way it is not good for conversation. Even for software development, there is an implicit need for integration with other products and teams based elsewhere in the world. Technologies such as VoIP and web conferencing services may help bridge the gaps but in my experience, occasional face-to-face meetings are required in order to cement a quality relationship (and good relations really help to get things done).

Whilst I can see that there are some benefits to be had from off-shoring, near-shoring is sometimes seen as a more practical alternative (certainly, one of my previous employers had a relationship with a company in Eastern Europe for offshore software development rather than further afield). From a business perspective, it’s often a lot easier to spend a day or two in another European city than to fly several timezones away with all the complications that entails (jet lag is really not conducive to efficiency – as I found on a business trip to New Jersey a few years back).

Getting back to call centres, I’m not always pleased with my bank (First Direct); however one of the main reasons I stick with them is that all of their call centres are UK-based. The staff may have a variety of regional dialects (who doesn’t?) but English is their first language – complete with an understanding of all the nuances and colloquialisms that are in daily use. Even so, a few months back, I wanted to know the telephone number for a branch office where the paying-in machine had crashed half way through my transaction (my money had been taken but would it be credited to my account?). This branch actually belongs to First Direct’s parent company, HSBC, who only publish a central number for branch contact. And where do you think the call goes when you want to speak to someone in the next major town? Yep – India, where they couldn’t put me through to anyone local that could give me an answer – all I could do was wait and see if the money appeared in my account (fortunately it did).

I’ve just come off the phone from my credit card company (Marks and Spencer Money) and the only thing that keeps my custom there is the shopping vouchers that appear in the post every few weeks. All I wanted was a replacement card. After negotiating the inevitable IVR system I finally spoke to someone who was unable to help me because his systems were updating (and could I please call back later). After I said that they should call me (and he said he worked in a paperless office so he couldn’t make a note of my details to call back), I threatened to close my account (I meant it) and he transferred me to a colleague whose system could issue me a replacement card. Result. Except that it shouldn’t be this difficult and I shouldn’t need to be so stroppy.

To be fair, this could have happened in a UK call centre too. John Lewis Financial Services is based in Birmingham and I gave up getting a satisfactory response from them after problems with their Partnership card. Ditto for Halifax Bank of Scotland, who I hope never to do business with again (although that’s becoming increasingly difficult as they are so large in the marketplace). Even Volkswagen Assistance were unable to renew the breakdown cover on my wife’s car this morning because of a system being unavailable and asked me to call back later, although they were prepared to take a note of my number and call me (if only everything in life was as reliable as a Volkswagen). My point is, that if the call quality is good and the person on the other end of the phone wants to go the extra mile to deliver great customer service then I’ll be happy to continue my relationship and if they don’t, then I won’t.

In another example, my wife received a letter from her bank to say that as her credit card was coming up for expiry and, as she hadn’t used it for a while, they would close the account if they didn’t hear from her. That’s fair enough – customers who don’t use their accounts are bad business – but when she called to speak to them about keeping the account open (an opportunity to regain some lost business) she found herself on hold for so long that she decided that she didn’t want the account anyway! Similarly, I found myself on hold in Tesco‘s call centre phone system for so long a few weeks back that the cost of my phone call was greater than the cost of the product that I needed a refund for in my online shopping!

To make matters worse, many call centres only publish national-rate (0870) numbers (and other non-geographic numbers that are excluded from bundled/low-cost call deals) – in effect, they can actually make money from you whilst keeping you on the line. So, if you want to reduce your phone bill when calling non-geographic numbers for call centres, check out Say No To 0870 for a database of alternative numbers.

In my view, it’s the cost of lost business that companies need to consider when selecting their partners rather than basing decisions on cost reduction alone.

Right, enough of being Mr. Angry – it’s a beautiful sunny day – I’m going to leave my computers and phone behind and get out into the countryside!

Duplicate computer name prevents Active Directory domain logon

I came across an interesting problem a few nights back… I locked myself out of a Windows XP computer. Here’s how it happened, along with how I got back in.

First, I built a new Windows Server and inadvertently used the same name as an existing Windows XP computer. Then I joined the server to an Active Directory domain (from this point on, the machine that was originally using the computer name is unable to authenticate with the domain as its password will have been overwritten when the duplicate machine joined the domain).

I then turned on the Windows XP computer. Because this machine is a notebook PC and wasn’t connected to the network at the time, I logged in using cached credentials; however after installing a wireless network card and restarting the computer, I was presented with a message that indicated I could not log on to the domain. Unfortunately I didn’t make a note of the exact message at the time, but looking back, I can see the NetLogon event 3210 in the system event log, the description for which tells me exactly what the problem was:

This computer could not authenticate with \\domaincontroller.domainname.tld, a Windows domain controller for domain domainname, and therefore this computer might deny logon requests. This inability to authenticate might be caused by another computer on the same network using the same name or the password for this computer account is not recognized. If this message appears again, contact your system administrator.

Realising my mistake, I logged on using a local account and tried to rejoin the domain. Except that I couldn’t, because, as per Microsoft’s advice, I had disabled the local administrator account when I joined the domain and all I had available to me were standard user accounts.

Luckily Daniel Petri has published an article with a workaround for when a Windows computer cannot log on to a Windows Server 2003 domain due to errors connecting to the domain. By removing the network cable and restarting, I could log on as a domain administrator using cached credentials. Then, I enabled the local administrator account and changed the computer name before moving the computer out of the domain and into a workgroup. I then rebooted (with the network cable connected), logged in using the re-enabled administrator account and rejoined the domain (with the new computer name), before disabling the administrator account again.


SSH addendum

Since my recent posts about using SSH for secure remote administration of Mac OS X, Linux and Windows computers, a couple of extra points have come up that are probably worth noting:

  • To use the console interactively, it may be better to use PuTTY (putty) than PuTTY Link (plink). It seems that PuTTY Link is fine for setting up a tunnel and then connecting using VNC, RDP or another remote control method but I found that control codes were echoed to the console window when I connected to a Linux or Windows computer and the command line experience was generally better using PuTTY interactively. This is because (quoting from the PuTTY documentation) “The output sent by the server will be written straight to your command prompt window, which will most likely not interpret terminal control codes in the way the server expects it to […] Interactive connections like this are not the main point of Plink”.
  • For another method of generating SSH keys, an online SSH key generator is available (I haven’t tried it myself).
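To illustrate the tunnelling case where PuTTY Link does work well, a typical invocation looks something like this (the hostname and username are placeholders; 5900 is the usual VNC port):

```
# Forward local port 5900 to the VNC port on the remote machine,
# without opening an interactive shell (-N)
plink -N -L 5900:localhost:5900 user@remotehost
```

With the tunnel up, pointing a VNC client at localhost:5900 carries the session over SSH.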