Remote Desktop alternative for Mac users

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I frequently connect to Windows hosts from my Mac and I have been using the Microsoft Remote Desktop Connection client for Mac OS X. The trouble with this is that it only allows a single connection and it’s not a universal binary (it also has a tendency to crash on exit, requiring a forced quit). I use rdesktop on my Linux boxes, and figured it ought to be available for the Mac (it is, using fink, or by compiling from source) but I also came across CoRD (via Lifehacker) and TSclientX (via the comments on the Lifehacker post) – both of which seem to offer a much richer user experience:

  • CoRD allows multiple RDP connections as well as storing login credentials. It seems pretty responsive too.
  • TSclientX is essentially a GUI wrapper for rdesktop and therefore requires X11. That shouldn’t really be a problem but it does sometimes feel like a bit of a kludge – even so, it has the potential to be extremely useful as it supports SeamlessRDP. Unfortunately, SeamlessRDP requires additional software to be present on the remote Windows system and I couldn’t get it to work for me, possibly because I was connecting to a Windows XP machine (which only supports a single connection) and rdesktop creates an X11 window for each window on the server side.

At the moment, I’ve settled on CoRD, largely due to its ease of use but both clients seem to offer a great improvement over Microsoft’s RDP offering for Mac users.
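
For anyone who prefers the rdesktop route under X11, a rough sketch of getting it going (the package name and options are from memory, so check the fink and rdesktop documentation):

fink install rdesktop
rdesktop -u username -g 1024x768 servername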

Running Red Hat Enterprise Linux without a subscription

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I’ve written previously about why open source software is not really free (as in monetary value), just free (as in freedom). Companies such as Red Hat and Novell (SUSE) make their money from support and during Red Hat Enterprise Linux (RHEL) setup, it is “strongly recommended” that the system is set up for software updates via Red Hat Network (RHN), citing the benefits of an RHEL subscription as:

  • “Security and updates: receive the latest software updates, including security updates, keeping [a] Red Hat Enterprise Linux system updated and secure.
  • Downloads and upgrades: download installation images for Red Hat Enterprise Linux releases, including new releases.
  • Support: Access to the technical support experts at Red Hat or Red Hat’s partners for help with any issues you might encounter with [a] system.
  • Compliance: Stay in compliance with your subscription agreement and manage subscriptions for systems connected to [an] account at http://rhn.redhat.com/

You will not be able to take advantage of these subscriptions privileges without connecting [a] system to Red Hat Network.”

Take a look at Red Hat Enterprise Linux (RHEL) and you’ll see that it’s actually quite expensive – a standard subscription for a machine with up to 2 processor sockets including 1 year’s 12×5 telephone support, 1 year of web access and unlimited incidents is €773.19 [source: Red Hat Online Shop, Europe]. That is not something that I can afford and even though Red Hat gave me a copy of RHEL 5 as part of my recent training, it only includes a 30-day subscription. Now they have launched Red Hat Exchange – a new service whereby third party open source software solutions are purchased, delivered and supported via a single, standardized Red Hat subscription agreement with consolidated billing covering the complete application stack. It’s a great idea, but the pricing for some of the packages makes using proprietary alternatives seem quite competitive.

In fairness to Red Hat, they sponsor the Fedora Project for users like me, who could probably make do with a community-supported release (Fedora is free for anyone to use, modify and distribute) but there is another option – CentOS (the community enterprise operating system), which claims to be:

“An Enterprise-class Linux Distribution derived from sources freely provided to the public by a prominent North American Enterprise Linux vendor. CentOS conforms fully with the upstream vendor[‘]s redistribution policy and aims to be 100% binary compatible. (CentOS mainly changes packages to remove upstream vendor branding and artwork.) CentOS is free.”

Hmm… so which North American Enterprise Linux vendor might that be then? ;-)

So what about RHEL systems for which the subscription has expired? I’m not sure what the legal standpoint is but there is a way to receive updated software using an unregistered copy of RHEL. Firstly, there’s configuring additional repositories like Dag Wieers’ RPMForge – there are even RPMs available to set up the correct repository. Then, there are the various RPM search sites on the ‘net.

I’ve found that, using these, even if there is no appropriate RHEL or generic RPM available, there is often a CentOS RPM (which usually still carries the el5 identifier in the filename). These should be safe to install on an RHEL system and, in those rare cases when a bleeding edge package is required, there may well be a Fedora version that can be used. So it seems that I can continue to run a Linux distribution that is recognised by most software vendors, even when my RHN subscription expires.
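
As an illustration, checking and then installing a CentOS-built package on an RHEL box looks something like this (the package name is made up for the example):

rpm -qpi somepackage-1.2-3.el5.centos.i386.rpm
rpm -Uvh somepackage-1.2-3.el5.centos.i386.rpm

The first command simply queries the package headers (including the build host and the el5.centos release tag) before committing to the install.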

Installing VMware Server on Red Hat Enterprise Linux 5

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Last year, I wrote about installing VMware Server on Fedora Core 6. At the time, I was using version 1.0.1 (build 29996) and tonight I needed to load the latest version 1.0.3 (build 44356) on my laptop, which is now running Red Hat Enterprise Linux (RHEL) 5. In theory, installing on a popular enterprise distribution such as RHEL ought to be straightforward but even so there were some things to watch out for (some of which were present in my earlier post). The following steps should be enough to get VMware Server up and running:

  1. Download the latest VMware Server release and register for a serial number (i.e. give VMware lots of marketing information… everything but my inside leg measurement… and make a mental note that I mustn’t lose the serial number this time).
  2. Prepare the system, installing the following packages and dependencies (a single command to pull these in is sketched after these steps):
    • gcc (v4.1.1-52.el5.i386)
      • glibc-devel (v2.5-12.i386):
        • glibc-headers (v2.5-12.i386).
      • libgomp (v4.1.1-52.el5.i386).
    • kernel-devel (v2.6.18-8.el5.i686).
    • xinetd (v2.3.14-10.el5.i386).
  3. Install VMware Server (rpm -Uvh VMware-server-1.0.3-44356.i386.rpm).
  4. Configure VMware Server (/usr/bin/vmware-config.pl):
    • Display and accept the EULA, then accept defaults for installation of MIME type icons (/usr/share/icons), desktop menu entries (/usr/share/applications) and the application icon (/usr/share/pixmaps), allow the configuration to build the vmmon module (using /usr/bin/gcc), enable networking, enable NAT, probe for an unused private subnet, do not configure additional NAT subnets, enable host-only subnets, probe for an unused private subnet, do not configure additional host-only subnets, accept the port for connection (902) and the default location for virtual machine files (/var/lib/vmware/Virtual Machines, creating if necessary) and finally, provide the serial number when requested.
      • All the prompts should work at their defaults; however it may be necessary to answer the question “What is the location of the directory of C header files that match your running kernel? [/usr/src/linux/include]” with /usr/src/kernels/2.6.18-8.el5-i686/include (or another version of the kernel-devel tools).
      • Building the vmmon module will fail if gcc is not present.
      • If the installer is being run under X, the serial number can be pasted into the terminal when requested.
    • The configuration script will have to be re-run if it finds that inetd or xinetd is not installed.
  5. Extract the VMware management user interface from the archive (tar zxf VMware-mui-1.0.3-44356.tar.gz) and run the installation program (./vmware-mui-distrib/vmware-install.pl):
    • Display and accept the EULA, then accept defaults for installation of the binary files (/usr/bin), location of init directories (/etc/rc.d), location of init scripts (/etc/rc.d/init.d), installation location (/usr/lib/vmware-mui, creating if necessary), documentation location (/usr/lib/vmware-mui/doc, creating if necessary), allow vmware-install.pl to call /usr/bin/vmware-config-mui.pl and define the session timeout (default is 60 minutes).
  6. Extract the VMware Server console package from the client archive, or download it from the VMware management interface at https://servername:8333/ (it may be necessary to open a firewall port for TCP 8333 using system-config-securitylevel in order to allow remote connections).
  7. Install the VMware Server console (rpm -Uvh VMware-server-console-1.0.3-44356.i386.rpm).
  8. Run the vmware-config-server-console.pl script (not vmware-config-console.pl as stated in the documentation) – accept the EULA and if prompted, enter the port number for connection (default is 902).
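
For reference, the prerequisite packages in step 2 can usually be pulled in with a single command (assuming the system can reach RHN or another configured repository) and the C header files question in step 4 can be answered by checking the running kernel version against the directories under /usr/src/kernels:

yum install gcc glibc-devel kernel-devel xinetd
uname -r
ls /usr/src/kernels/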

At this point, you should have a working VMware Server installation accessible via the VMware Server Console icon on the Applications | System Tools menu, by using the vmware command from a terminal, or via a browser session. The final stage is to set up some virtual machines. I simply copied my previous image from an external hard disk to /var/lib/vmware/Virtual Machines and then opened it in the console (from where I could update the VMware Tools) but the (Windows) VMware Converter utility is available for P2V/I2V/V2V migrations (replacing the VMware P2V Assistant) and preconfigured VMs can be obtained from the VMTN virtual appliance marketplace.

Using Active Directory to authenticate users on a Linux computer

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I’m not sure if it’s the gradual improvement in my Linux knowledge, better information on the ‘net, or just that integrating Windows and Unix systems is getting easier but I finally got one of my non-Windows systems to authenticate against Active Directory (AD) today. It may not sound like much of an achievement but I’m pretty pleased with myself.

Active Directory is Microsoft’s LDAP-compliant directory service, included with Windows server products since Windows 2000. The AD domain controller that I used for this experiment was running Windows Server 2003 with service pack 2 (although the domain is still in Windows 2000 mixed mode and the forest is at Windows 2000 functional level) and the client PC was running Red Hat Enterprise Linux (RHEL) 5.

The first step is to configure the Linux box to use Active Directory. I did this as part of the RHEL installation but it can also be configured manually, or using system-config-authentication. The best way to do this is using LDAP and Kerberos (as described by Scott Lowe) but Scott’s advice indicates that would require some AD schema changes to incorporate Unix user information; the method I used is based on Winbind and doesn’t seem to require any changes on the server, as Winbind allows a Unix/Linux box to become a full member of a Windows NT/AD domain.

The settings I used can be seen in the screen grab: the Winbind domain (NetBIOS domain name), security model (ADS), Winbind ADS realm (DNS domain name), Winbind domain controller(s) and the template shell (for users with shell access). I then selected the Join Domain button and supplied appropriate credentials, and the machine successfully joined the domain (an error was displayed in the terminal window indicating that Kerberos authentication had failed – not surprising as it hadn’t been configured – but the message continued by reporting that it had fallen back to RPC communications, resulting in a successful join).

For reference, the equivalent manual process would have been something like:

  1. Edit the name service switch file (/etc/nsswitch.conf) to include the following:
    passwd: files winbind
    shadow: files winbind
    group: files winbind
    netgroup: files
    automount: files

  2. Edit the Samba configuration file (/etc/samba/smb.conf) to include the following configuration lines in the [global] section:
    workgroup = DOMAINNAME
    security = ads
    password server = domaincontroller.domainname.tld
    realm = DOMAINNAME.TLD
    idmap uid = 16777216-33554431
    idmap gid = 16777216-33554431
    template shell = /bin/bash
    winbind use default domain = false

  3. Edit the PAM authentication configuration (/etc/pam.d/system-auth) to append broken_shadow to the account required pam_unix.so line and to insert:
    auth sufficient pam_winbind.so use_first_pass
    account [default=bad success=ok user_unknown=ignore] pam_winbind.so
    password sufficient pam_winbind.so use_authtok

  4. Join the domain:
    /usr/bin/net join -w DOMAINNAME -S domaincontroller.domainname.tld -U username

  5. Restart the winbind and nscd services:
    service winbind restart
    service nscd restart

It’s also possible to achieve the same results using authconfig (as described by Bill Boswell).
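
Whichever method is used, a quick sanity check that Winbind can see the domain is something along these lines (wbinfo ships with the Samba packages):

wbinfo -t
wbinfo -u
getent passwd 'DOMAINNAME\username'

wbinfo -t verifies the machine trust account, wbinfo -u lists the domain users and getent confirms that those users are visible through the name service switch.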

Once these configuration changes have been made, AD users should be able to authenticate, but they will not have home directories on the Linux box, resulting in a warning:

Your home directory is listed as:

‘/home/DOMAINNAME/username’

but it does not appear to exist. Do you want to log in with the / (root) directory as your home directory? It is unlikely anything will work unless you use a failsafe session.

or just a simple:

No directory /home/DOMAINNAME/username!

Logging in with home = “/”.

This is easy to fix, as described in Red Hat knowledgebase article 5367, by adding session required pam_mkhomedir.so skel=/etc/skel umask=0077 to /etc/pam.d/system-auth. After restarting the winbind service, the first subsequent login should be met with:

Creating directory ‘/home/DOMAINNAME/username’

The parent directory must already exist; however some control can be exercised over the naming of the directory – I added template homedir = /home/%D/%U to the [global] section in /etc/samba/smb.conf (more details can be found in Red Hat knowledgebase article 4760).
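
To summarise, the two additions look like this:

/etc/pam.d/system-auth:
    session required pam_mkhomedir.so skel=/etc/skel umask=0077

/etc/samba/smb.conf ([global] section):
    template homedir = /home/%D/%U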

At this point, AD users can log on (using DOMAINNAME\username at the login prompt) and have home directories dynamically created but (despite selecting the cache user information and local authorization is sufficient for local users options in system-config-authentication) if the computer is offline (e.g. a notebook computer away from the network), then login attempts will fail and the user is presented with the following warning:

Incorrect username or password. Letters must be typed in the correct case.

or:

Login incorrect

In order to allow offline working, I followed some advice relating to another Linux distribution (Mandriva disconnected authentication and authorisation) and it still worked for me on RHEL. All that was required was the addition of winbind offline logon = yes to the [global] section of /etc/samba/smb.conf along with some edits to the /etc/pam.d/system-auth file (the resulting lines are summarised after the list below):

  • Append cached_login to auth sufficient pam_winbind.so use_first_pass.
  • Add account sufficient pam_winbind.so use_first_pass cached_login.
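
Pulling the offline logon changes together, the relevant lines end up looking something like this (an extract only; the rest of each file is unchanged):

/etc/samba/smb.conf ([global] section):
    winbind offline logon = yes

/etc/pam.d/system-auth:
    auth sufficient pam_winbind.so use_first_pass cached_login
    account sufficient pam_winbind.so use_first_pass cached_login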

These changes (along with another winbind service restart) allowed users to log in using cached credentials (once a successful online login had taken place), displaying the following message:

Logging on using cached account. Network ressources [sic] can be unavailable

Unfortunately, the change also prevented local users from authenticating (except root), with the following strange errors in /var/log/messages:

May 30 11:30:42 computername pam_winbind[3620]: request failed, but PAM error 0!
May 30 11:30:42 computername pam_winbind[3620]: internal module error (retval = 3, user = `username')
May 30 11:30:42 computername login[3620]: Error in service module

After a lot of googling, I found a forum thread at LinuxQuestions.org that pointed to account [default=bad success=ok user_unknown=ignore] pam_winbind.so as the culprit. After I removed this line from /etc/pam.d/system-auth (it had already been replaced with account sufficient pam_winbind.so use_first_pass cached_login), both AD and local users could successfully authenticate:

May 30 11:37:25 computername -- username[3651]: LOGIN ON tty1 BY username

I should add that this configuration is not perfect – Winbind seems to take a minute or so to work out that cached credentials should be used (sometimes resulting in failed login attempts before allowing a user to log in) and it also seems to take a long time to log in when working offline, but nevertheless I can use my AD accounts on the Linux workstation and I can log in when I’m not connected to the network.

If anyone can offer any advice to improve this configuration (or knows how moving to a higher domain/forest functional level may affect it), please leave a comment below. If you wish to follow the full LDAP/Kerberos authentication route described in Scott Lowe’s article (linked earlier), it may be worth checking out Microsoft Services for Unix (now replaced by the Identity Management for Unix component in Windows Server 2003 R2) or the open source alternative, AD4Unix.

Passed the Red Hat Certified Technician exam

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Phew! I’ve just read an e-mail from Red Hat informing me that I passed the Red Hat Certified Technician (RHCT) exam that I took this morning.

The confidentiality agreement that I had to sign makes it practically impossible for me to talk about my exam experience but Red Hat’s RHCT exam preparation guide gives the most important details and without giving away any of the specifics, I can confirm that it was one of the most challenging certification exams I’ve ever taken (which is good, because having passed actually means something).

Apart from living and breathing Linux for the last few days, my preparation consisted of attending an RH033 course last year (including the now-discontinued RH035 Windows conversion course – my own quick introduction to Linux for Windows administrators may be useful as a substitute) and spending this week on an RH133 course (which includes the RH202 practical exam); I also have some limited experience from running Linux on some of my own computers and I worked on various Unix systems at Uni in the early 1990s. In short, I’m a competent technician (as the certification title indicates) but not a Linux expert.

As for my next steps, the Novell and Microsoft Interop Ability partnership directly impacts upon my work, so I imagine that any further work I do with Linux will be related to Novell (SUSE) Enterprise Linux. Even so, RHCT is a well-respected qualification, which is why I wanted to gain that certification (especially after setting off down that path last year). It’s unlikely that I’ll gain the necessary experience to go forward to attempt Red Hat Certified Engineer (RHCE) or Red Hat Certified Architect (RHCA) status (at least not in my day job) but I may convert to Novell’s Certified Linux Professional (CLP)/Certified Linux Engineer (CLE) path at a later date. In the meantime, it’s about time that I updated my Microsoft credentials…

Windows PowerShell for IT administrators

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

“Go away or I will replace you with a very small shell script”

[T-shirt slogan from an attendee at tonight’s Windows PowerShell for IT administrators event.]

I’m back in my hotel room having spent the evening at one of Microsoft UK’s TechNet events and this time the topic was Windows PowerShell for IT administrators. I’ve written previously about PowerShell (back when it was still a beta, codenamed Monad) but tonight’s event was presented by Richard Siddaway from Perot Systems, who is not only an experienced infrastructure architect but also leads the PowerShell UK user group and thinks that PowerShell is one of the best pieces of technology ever (maybe a touch OTT but it is pretty powerful).

The event was demo-heavy and I didn’t grab all of the example commands (Richard plans to publish them on the user group website this week) so this post concentrates on what PowerShell can (and can’t) do and I’ll link to some more examples later.

What is PowerShell?
According to Microsoft, PowerShell is the next generation shell for Windows that is:

  • As interactive and composable as BASH/KSH.
  • As programmable as Perl/Ruby.
  • As production-oriented as AS400 CL/VMS DCL.

In addition to the attributes described above, PowerShell is extensible with snapins, providers and scripts. The provider model allows easy access to data stores (e.g. registry, Active Directory, certificate store), just as if they were a file system.

Scripting is accommodated in various forms, including text (Microsoft’s interpretation of the traditional Unix scripting model), COM (WSH/VBScript-style scripting), Microsoft .NET or commands (PowerShell cmdlets, emitting Microsoft .NET-based objects). As for the types of data that PowerShell can manipulate – it’s extensive, including flat files (CSV, etc.), .NET objects, XML (cmdlets and .NET), WMI, ADSI, ADO/ADO.NET and SQL.

So, PowerShell is a scripting interface with a heavy reliance on Microsoft .NET – are programming skills required?
Not really. As Richard described, just because you can use native .NET code doesn’t mean that you should; however the more that you know, the more you can do with PowerShell.

Basically, simple scripts will need some .NET functions such as [STRING] and [MATH] and advanced scripts can use any .NET object but cmdlets provide an excellent administrative and scripting experience and are easier to work with – writing .NET code can be thought of as a safety net for when something isn’t possible using another method, rather than as a first port of call.

Where can I get PowerShell?
Although it is a core element of the Windows Server System, providing automation and integration capabilities across the various technology platforms, PowerShell is a separate download for Windows XP (SP2)/Server 2003 (SP1 or later, including R2)/Vista and will be included within Windows Server 2008. Note that PowerShell is not supported on Windows 2000.

How can I learn to use PowerShell?
PowerShell’s documentation includes a getting started guide, a user guide, a quick reference guide and help text. Microsoft Switzerland has also produced a short Windows PowerShell book that’s available for download free of charge; there are plenty of other books on the subject and a “young but keen” community of administrators exists who are discovering how PowerShell can be put to use; however it’s probably best to just get stuck in – practice some ad-hoc development:

  • Try things out in an interactive shell.
  • Stitch things together with utilities and put the results in a script file (then realise that the tools are unsuitable and restart the process).
  • Once happy with the basic concepts, generalise the code (e.g. parameterise it) and clean it up (make it production-quality).
  • Once tested, integrate the PowerShell scripts with the infrastructure to be managed and then share scripts with the community.

One more thing – remember that it’s better to have many small scripts that each do one thing well than to have a behemoth of a script that’s very inflexible.

Is there anything else I should know before getting started?
There are a few concepts that it’s worth getting to grips with before launching into PowerShell:

  • Cmdlets are a great way to get started with PowerShell. Based on verb-noun naming, they each provide specific functionality (e.g. get-help) and make the resulting code self-describing (hence surprisingly easy to read).
  • The pipeline (think Unix or MS-DOS) allows the output of one instruction to be fed into the next using the | symbol; however, unlike Unix/MS-DOS, .NET objects are passed between instructions, not text.
  • There is a text-based help system (cf. man pages on Unix-derived operating systems).
  • PowerShell is not case-sensitive (although tab completion will sometimes capitalise cmdlets and parameters); however it’s worth understanding that whilst double quotes (" ") and single quotes (' ') can be used interchangeably for plain strings, variables enclosed in double quotes are resolved to their values, whereas anything in single quotes is treated as literal text (see the short example after this list).
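
As a quick illustration of the pipeline and the quoting behaviour (my own example, not one of Richard’s demos):

$name = "world"
"Hello, $name"      # double quotes expand the variable: Hello, world
'Hello, $name'      # single quotes do not: Hello, $name
get-process | sort-object CPU -descending | select-object -first 5

The last line passes Process objects (not text) down the pipeline, sorting them by CPU usage and keeping the five busiest.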

There are also some issues to be aware of:

  • The default installation will not run scripts (not even the user’s profile) and scripts need to be enabled with set-executionpolicy.
  • There is no file association with PowerShell (for security reasons), so scripts cannot be run automatically or via a simple double-click. Scripts normally use the .ps1 extension and, although PowerShell will recognise a command as a script without it, using the extension helps PowerShell to work out what type of instruction is being issued (i.e. a script).
  • There is no capacity for remoting (executing code on a remote system) but workarounds are possible using .NET and WMI.
  • The current working directory is not on the path (as with Unix-derived operating systems), so scripts are launched with .\scriptname.ps1 (see the example after this list). Dot-sourced scripts (e.g. . .\scriptname.ps1) run in the context of the shell (rather than in their own context).
  • Although PowerShell supports use of the entire Microsoft.NET framework, not all .NET assemblies are loaded – some may need to be specified within a script.
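
For example, allowing locally-created scripts to run and then launching one from the current directory looks something like this (changing the execution policy needs an elevated prompt on Windows Vista):

set-executionpolicy remotesigned
.\scriptname.ps1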

Are there any other tools that work with PowerShell?
Various ISVs are extending PowerShell. Many of their tools are currently available as trial versions, although some are (or may become) commercial products.

Where can I find out more?
More information about PowerShell is available from the documentation mentioned above and from the PowerShell UK user group.

Blogging as a social networking tool

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Many organisations have realised the value of blogging from a corporate marketing perspective but I’ve recently gained first hand experience of blogging as a social networking tool.  In general, any relationships formed as a result of blogging activities are online (whilst other tools such as LinkedIn attempt to convert personal relationships into more complex social networks) but I keep bumping into people that actually read the stuff that I write here!

Earlier this month, over lunch at the UK highlights from the Microsoft Management Summit event, I realised that the chap sitting next to me had left a comment on this blog a few weeks back and we got talking (Hi Dan); then, tonight I was back at Microsoft for a TechNet event about Windows PowerShell, where another chap introduced himself and said that he reads my blog (Hi Mike).  It’s happened before too – I work for a very large organisation and a couple of colleagues have commented that they knew me from my blog before they met me.

Now, just to keep my ego in check, I should remember that this blog’s readership is not enormous (although it has grown steadily since I started tracking the metrics) but, bearing in mind that most of what I write is just my notes for later re-use, it’s really good when someone says “hello” and lets me know that they’ve found something I wrote to be useful.

Earlier this week, I added a contact form to the site and I still allow comments on posts (even if 95% of the comments are spam, I get some good feedback too).  So, feel free to get in touch if you like what you see here.  I can’t promise to write on a particular subject as that’s not the way this blog works (I write about my technology experiences and they, by their very nature, are unplanned) but it’s good to know that sitting here in my hotel room writing something late at night is not a complete waste of time.

Moving back to the social networking point for a moment, it’s worth pointing out that the blogroll on this site is XFN friendly (XFN is a simple way to represent human relationships using hyperlinks).

Creating a Media Center Mac

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

It’s not often that I come away from a Microsoft event as excited as I was after the recent Vista after hours session.

You see, we have a problem at home… our DVD player has stopped recognising discs. That shouldn’t really be a problem (DVD players are cheap enough to replace) but it’s a CD/DVD player, tuner and surround-sound amplifier and I don’t really want to have to replace the entire system because of one broken DVD drive. So I took it apart (thinking that Sony might use the same drives in their consumer electronic devices as in normal PCs), only to find that the externally slim slot-loading drive is actually a huge beast with cogs, nothing like anything I’ve ever seen before.

Faced with the prospect of a hefty repair bill, I began to think that this (combined with the fact that we never know what is on our video tapes) could be the excuse I need to install a media PC in the living room. Well, possibly, but there are some hurdles to overcome first.

I’ve been toying with a media PC for a while now but, however hard manufacturers try, pretty much none of them is likely to pass the wife approval factor (WAF) – not even the lovely machines produced by a French system builder called Invasion.

It’s not that my wife is demanding – far from it in fact – but she wasn’t too keen on my “black loud cr@p” (my semi-decent hi-fi separates) when we first moved in together and the shiny silver box (the one that’s now broken) was the replacement… I just can’t see anything that isn’t similarly small and shiny being tolerated anywhere other than my den.

I even saw an article in the July 2006 edition of Personal Computer World magazine, which showed how to build a living room PC using old hi-fi separates for the case; however you need a pretty large case for anything that’s going to make use of full-size PC components. Then there’s the issue of the system software… I tried Media Portal a while back but found it a bit buggy; Myth TV is supposed to be pretty good but I believe it can also be difficult to set up properly; the Apple TV sounded good at first – except that it doesn’t have PVR capabilities and relies on many hacks to get it working the way I would like it and (crucially) lists a TV with HDMI or component video inputs as one of its prerequisites – I was beginning to think that the best answer for me may be a Mac Mini with a TV adapter hooked up to my aging, but rather good, Sony Trinitron TV.

Then, at the Vista After Hours event, I saw the latest version of Windows Media Center – Mac OS X includes Front Row but Media Center has some killer features… and I have two spare copies of Windows Vista Ultimate Edition (thank you Microsoft)! Why not install Vista on a Mac Mini, then plug in a USB TV tuner (maybe more than one) and use this as a DVD player, PVR and all round home entertainment system?

I’ve written previously about installing Windows Vista on my Mac but I never activated that installation and I later removed Boot Camp altogether as I found that I never actually bothered to boot into Windows. The latest Boot Camp beta (v1.2) includes Windows Vista support (including drivers for the remote control) so I thought I’d give it a try on my existing Mac Mini before (potentially) splashing out on another one for the living room.

After downloading and installing Boot Camp and running the Boot Camp Assistant to create a Windows driver CD, I moved on to partitioning the disk, only to be presented with the following error:

The disk cannot be partitioned because some files cannot be moved.  Backup the disk and use Disk Utility to format as a single Mac OS Extended (Journaled) volume.  Restore your information to the disk and try using Boot Camp Assistant again.

Backing up and restoring my system… sounds a bit risky to me.

Then I found Garrett Murray’s post about how the problem is really caused by files over 4GB in size. That may have worked in Garrett’s case (FAT32 disks will not support files over 4GB) but despite using WhatSize to track down a DVD image that was taking a chunk of space on my disk, I couldn’t get past the message (even after various reboots, starting the system in single user mode to run AppleJack and even starting the system without any login items). In the end, I gave in and accepted that my system disk required defragmenting, setting about the lengthy process of backing up with Carbon Copy Cloner, booting from the backup disk, erasing the system disk and restoring my data. Thankfully this worked and left me with a defragmented system disk, which Boot Camp Assistant was able to divide into two partitions.

After catching some sleep, I set about the installation of Windows Vista. I had a few issues with Boot Camp Assistant failing to recognise my DVD (either the one I created with the RTM files from Microsoft Connect, or a genuine DVD from Microsoft) – this was the message:

The installer CD could not be found.  Insert your Windows CD and wait a few seconds for the disk to be recognized.

It turns out that Boot Camp Assistant wasn’t happy with me running as a standard user – once I switched to an Administrator account everything kicked into life and I soon had Vista installed after a very straightforward installation. Furthermore, Apple has done a lot of work on Windows driver support and items that didn’t work with my previous attempt (like the Apple Remote) are now supported by Boot Camp 1.2 and Windows Vista. Sadly, my external iSight camera does not seem to be supported (only the internal variants). It also seems that my Windows Experience Index base score has improved to 3.3 (it was 3.0 when I installed Vista as an upgrade from Windows XP with Boot Camp v1.1.2).

After this, it wasn’t long before I had Media Center up and running, connected to the TV in my office – although that’s where the disappointment started. The Apple Remote does work but it’s so simple that menu controls (Media Center and DVD menus) necessitate resorting to keyboard/mouse control – basically all that it can do is adjust the volume, skip forward/backwards, play and pause. What I needed was a Windows Media Remote (and so what if it has 44 buttons instead of six? The Apple remote is far more elegant but six buttons clearly isn’t enough!).

(It’s a pity that I didn’t see the pictures of the prototype Windows Vista Media Center remotes first, or else I would have tried to get one of the alternative remotes from Philips).

Also, after switching back to my monitor, the display had reverted to basic (2D) graphics and I needed to re-enable the Windows Aero theme. Clearly that’s a little cumbersome and would soon become a pain if I had to do it frequently; however in practice it’s likely that I’ll leave the computer connected to either the TV or the monitor – not both.

I also needed a TV receiver – I was able to pick up an inexpensive Freeview (DVB-T) USB adapter (£29.99 including postage) and a Windows Media Remote (£21.99). Although the digital terrestrial TV signal in my house is weak, I was pretty sure that I’d be able to boost it and, anyway, having a portable Freeview device will always be handy. Windows Vista didn’t recognise the device natively but I downloaded the latest drivers and, despite being unsigned, they installed without issue. Unfortunately, Windows Media Center still didn’t recognise my tuner but the problem turned out to be that I had plugged the device into the Apple keyboard (which I think is USB 1.1) and once I plugged it into one of the Mac’s own USB 2.0 ports I was able to set up the TV functionality within Windows Media Center – no need to bother with the TV guide and tuning software supplied with the device (although it did take a while to download the TV programme guide and to scan for channels).

My local TV transmitter is at Sandy Heath and, although I tried other transmitters too, using the supplied aerial I could only pick up channels in multiplex D. Even the cheap £9.99 Labgear aerial that sits on top of my TV could pick up those channels! Ideally, I’d use an externally-mounted roof aerial but that wasn’t an option, so for £19.99 I picked up the highly-rated Telecam TCE2001 and was able to pick up 53 channels in multiplexes 1, 2, B, C and D (and that was without using the signal booster). By boosting the signal the scan picked up 70 channels, although not all of them were strong enough to view.

As for the Windows Media Remote, I found that it didn’t work with the built-in IR receiver (it needed to use the supplied, but rather bulky Microsoft receiver); however this is not as bad as it sounds – the Microsoft receiver has a long USB cable, meaning that it can be placed next to the TV (the logical place to point the remote at), rather than wherever the computer is.

So, with working drivers and a functioning remote control, Windows Media Center was happy enough to let me watch and record TV using its built-in electronic programme guide…

The final piece of the puzzle was pre-recorded media in a variety of formats such as QuickTime movies and DivX. After transferring the files from an OS X hard drive to something that Windows could read, I decided to see what Windows Media Center could play. I’m still working out exactly which codecs I need – I tried various combinations of XviD/DivX/3ivX plus the AC3 filter and ffdshow – these seemed to enable most of my content; however I’m still experiencing difficulties with some movies that were originally encoded as AVI and then converted for QuickTime/iTunes on the Mac (using Apple QuickTime Pro) and also some unprotected AAC audio with JPEG stills in the video track – e.g. some of the podcasts that I listen to. Through all this codec troubleshooting, one tool that I found incredibly useful was GSpot.

[the original version of this post referred to a codec pack which I have been advised may contain illegal software. As it is not my intention to publicly condone the use of such software, and as I’m not convinced that it is required in order to make this solution work, I have removed it from this post and edited the corresponding comments.]

After going to all of this effort to get Media Center running on my Mac, was it worth it? Yes! The Windows Media Center (2007) interface is excellent, without a hint of the standard Windows interface (as is right for a consumer electronics device) and is simply (and intuitively) controlled using the remote control. It’s not perfect (very few interfaces are) but it is better than Front Row. If I do carry on using this to record TV, though, I will need to provide more disk space. One feature that I particularly liked, though, was how, even when working in other Windows applications, a discreet taskbar notification appeared, showing me that Media Center was recording something:

Windows Media Center recording notification

So, having tested this Media Center Mac concept on the Mac Mini that I use for my daily computing, I need to decide whether to donate it for family use in the living room and buy myself something new (MacBook Pro or Mac Pro are just too pricey to justify… but should I get a MacBook or another Mac Mini?) or just to pick up a second-hand Mac Mini for the family. The trouble is that second-hand Mac Minis cost almost as much as new ones. Still, at least I’ve proved the concept… I’ll have to see if this technology bundle passes the WAF test first!

Windows Update error 80245003

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

One of my Windows Vista PCs has been refusing to download updates from Windows Update, reporting that:

Windows could not search for new updates
Error(s) found:
Code 80245003

A bit of googling turned up various forum threads/blog posts about this error but most of them recommend stopping the Windows Update service, renaming/removing the %systemroot%\SoftwareDistribution folder, restarting the Windows Update service and attempting an update. That seems to work, but Jeroen Jansen’s post on the subject included a very useful comment with this little gem:

“Actually you don’t have to delete the entire SoftwareDistribution folder, just the folders inside it with update cache. This way you can keep the update history.”

I renamed each folder one at a time and it seems that it was WuRedir that was causing the error on my system (that is to say that after that folder was renamed, Windows Update ran successfully, even after restoring all of the other folders, therefore maintaining my history and other configuration).
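
For reference, the fix translates into something like the following at an (elevated) command prompt (rename the whole SoftwareDistribution folder if the more drastic approach is needed; in my case the WuRedir subfolder was enough):

net stop wuauserv
ren %systemroot%\SoftwareDistribution\WuRedir WuRedir.old
net start wuauserv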

I’m not sure if it was as a direct result, but I’m pretty sure Vista switched from using Windows Update to Microsoft Update at the same time.

Two methods of avoiding Windows Vista product activation

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A few months back, I wrote about how Windows Vista product activation works for volume license customers.  Last night I was searching to find out what the grace period is before activation is required and I stumbled across some interesting articles. You see, it turns out that there are three main problems with product activation:

  • Corporate IT departments want to produce customised Windows builds.  These builds must be valid when deployed to client PCs (i.e. the product activation period must not have expired!) and, as the product activation timer is ticking away during the customisation process, there needs to be a method to “rearm” product activation.
  • OEMs want to ship pre-activated versions of the operating system (an arrangement with which I’m sure Microsoft are happy to comply as they need OEMs to preload their operating system and not an alternative, like, let’s say… Ubuntu Linux!), so Microsoft provides these so-called Royalty OEMs with special product keys which require no further activation, under a scheme known as system-locked pre-installation (SLP) or OEM activation (OA) 2.0.
  • Anti-piracy measures like product activation are to hackers what a red rag is to a bull.

The net result, it seems, is two methods to avoid product activation. The first method can be used simply to delay product activation, as described by Brian Livingston at Windows Secrets. It uses an operating system command (slmgr.vbs -rearm) to reset the grace period for product activation back to a full 30 days. The Windows Secrets article also describes a registry key (HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SL\SkipRearm) and claims that it can be set to 00000001 before rearming, allowing the rearm to take place multiple times (this registry key is reset by the rearm command, which is also available by running rundll32 slc.dll,SLReArmWindows); however, Microsoft claims that the SkipRearm key is ineffective for the purpose of extending the grace period as it actually just stops sysprep /generalize (another command used during the imaging process) from rearming activation (something which can only be done three times) and does not actually reset the grace period (this is confirmed in the Windows Vista Technical Library documentation). Regardless of that fact, the rearm process can still be run three times, giving up to 120 days of unactivated use (30 days, plus three more rearms, each one providing an additional 30 days). That sounds very useful for both product evaluation and for corporate deployments – thank you very much Microsoft. According to Gregg Keizer at Computer World/PC World Magazine, a Microsoft spokesperson has even confirmed that it is not a violation of the EULA. That is good.
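
For anyone using the legitimate rearm method, the same script also reports on the state of the grace period, for example:

slmgr.vbs -dli
slmgr.vbs -xpr
slmgr.vbs -rearm

(-dli displays the licence status, -xpr shows when the current grace period expires and -rearm resets it; a restart is needed before the new grace period takes effect.)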

So that’s the legal method; however some enterprising hackers have a second method, which avoids activation full stop. Basically it tricks the operating system into thinking that it’s running on a certain OEM’s machine, before installing the relevant certificate and product key to activate that copy of Windows. The early (paradox) version involved making hex edits to the BIOS (hmm… buy a copy of Windows or turn my PC into a doorstop – I know which I’ll choose) but the latest (vstaldr) version even has an installer for various OEMs, and if that doesn’t work then there is a list of product keys which can be installed and activated using two operating system commands:

slmgr.vbs -ipk productkey
slmgr.vbs -ato

I couldn’t possibly confirm or deny whether or not that method works… but Microsoft’s reaction to the OEM BIOS hacks would suggest that this is not a hoax.  Microsoft’s Senior Product Manager for Windows Genuine Advantage (WGA), Alex Kochis, describes the paradox method as:

“It is a pretty labor-intensive [sic] process and quite risky.”

(as I indicated above).  Commenting on the vstaldr method, he said:

“While this method is easier to implement for the end user, it’s also easier to detect and respond to than a method that involves directly modifying the BIOS of the motherboard”

Before continuing to hint at how Microsoft may respond:

“We focus on hacks that pose threats to our customers, partners and products.  It’s worth noting we also prioritize our responses, because not every attempt deserves the same level of response. Our goal isn’t to stop every ‘mad scientist’ that’s on a mission to hack Windows.  Our first goal is to disrupt the business model of organized counterfeiters and protect users from becoming unknowing victims.   This means focusing on responding to hacks that are scalable and can easily be commercialized, thereby making victims out of well-intentioned customers.”

Which I will paraphrase as “it may work today, but don’t count on it always being that way”.

Note that I’m not encouraging anybody to run an improperly licensed copy of Windows. That would be very, very naughty. I’m merely pointing out that measures like product activation (as for any form of DRM) are more of an inconvenience to genuine users than they are a countermeasure against software piracy.

Disclaimer

This post is for informational purposes only. Please support genuine software.