I frequently connect to Windows hosts from my Mac and I have been using the Microsoft Remote Desktop Connection client for Mac OS X. The trouble with this is that it only allows a single connection and it’s not a universal binary (it also has a tendency to crash on exit, requiring a forced quit). I use rdesktop on my Linux boxes, and figured it ought to be available for the Mac (it is, using fink, or by compiling from source) but I also came across CoRD (via Lifehacker) and TSclientX (via the comments on the Lifehacker post) – both of which seem to offer a much richer user experience:
CoRD allows multiple RDP connections as well as storing login credentials. It seems pretty responsive too.
TSclientX is essentially a GUI wrapper for rdesktop and therefore requires X11. That shouldn’t really be a problem but it does sometimes feel like a bit of a kludge – even so, it has the potential to be extremely useful as it supports SeamlessRDP. Unfortunately, SeamlessRDP requires additional software to be present on the remote Windows system and I couldn’t get it to work for me, possibly because I was connecting to a Windows XP machine (which only supports a single connection) and rdesktop creates an X11 window for each window on the server side.
At the moment, I’ve settled on CoRD, largely due to its ease of use but both clients seem to offer a great improvement over Microsoft’s RDP offering for Mac users.
I’ve written previously about why open source software is not really free (as in monetary value), just free (as in freedom). Companies such as Red Hat and Novell (SUSE) make their money from support and during Red Hat Enterprise Linux (RHEL) setup, it is “strongly recommended” that the system is set up for software updates via Red Hat Network (RHN), citing the benefits of an RHEL subscription as:
“Security and updates: receive the latest software updates, including security updates, keeping [a] Red Hat Enterprise Linux system updated and secure.
Downloads and upgrades: download installation images for Red Hat Enterprise Linux releases, including new releases.
Support: Access to the technical support experts at Red Hat or Red Hat’s partners for help with any issues you might encounter with [a] system.
Compliance: Stay in compliance with your subscription agreement and manage subscriptions for systems connected to [an] account at http://rhn.redhat.com/
You will not be able to take advantage of these subscriptions privileges without connecting [a] system to Red Hat Network.”
In fairness to Red Hat, they sponsor the Fedora Project for users like me, who could probably make do with a community-supported release (Fedora is free for anyone to use, modify and distribute) but there is another option – CentOS (the community enterprise operating system), which claims to be:
“An Enterprise-class Linux Distribution derived from sources freely provided to the public by a prominent North American Enterprise Linux vendor. CentOS conforms fully with the upstream vendor[‘]s redistribution policy and aims to be 100% binary compatible. (CentOS mainly changes packages to remove upstream vendor branding and artwork.) CentOS is free.”
I’ve found that, even if there is not an appropriate RHEL or generic RPM available, there is often a CentOS RPM (which usually still carries the el5 identifier in the filename). These should be safe to install on an RHEL system and, in those rare cases when a bleeding edge package is required, there may well be a Fedora version that can be used. So it seems that I can continue to run a Linux distribution that is recognised by most software vendors, even when my RHN subscription expires.
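As a trivial illustration of spotting a compatible package, the dist tag in an RPM filename is what identifies a CentOS/RHEL 5 build (the package name below is invented for the example):

```shell
# Hypothetical filename check: the .el5 dist tag marks an RPM as built
# for RHEL/CentOS 5 (examplepkg is a made-up name for illustration)
pkg="examplepkg-1.0-1.el5.i386.rpm"
case "$pkg" in
  *.el5.*) echo "built for RHEL/CentOS 5" ;;
  *)       echo "not an el5 package" ;;
esac
```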
Last year, I wrote about installing VMware Server on Fedora Core 6. At the time, I was using version 1.0.1 (build 29996) and tonight I needed to load the latest version 1.0.3 (build 44356) on my laptop, which is now running Red Hat Enterprise Linux (RHEL) 5. In theory, installing on a popular enterprise distribution such as RHEL ought to be straightforward but even so there were some things to watch out for (some of which were present in my earlier post). The following steps should be enough to get VMware Server up and running:
Download the latest VMware Server release and register for a serial number (i.e. give VMware lots of marketing information… everything but my inside leg measurement… and make a mental note that I mustn’t lose the serial number this time).
Prepare the system, installing the following packages and dependencies:
libgomp (v4.1.1-52.el5.i386).
Install VMware Server (rpm -Uvh VMware-server-1.0.3-44356.i386.rpm).
Configure VMware Server (/usr/bin/vmware-config.pl):
Display and accept the EULA, then accept defaults for installation of MIME type icons (/usr/share/icons), desktop menu entries (/usr/share/applications) and the application icon (/usr/share/pixmaps); allow the configuration to build the vmmon module (using /usr/bin/gcc); enable networking; enable NAT and probe for an unused private subnet; do not configure additional NAT subnets; enable host-only subnets and probe for an unused private subnet; do not configure additional host-only subnets; accept the port for connection (902) and the default location for virtual machine files (/var/lib/vmware/Virtual Machines, creating if necessary); and finally, provide the serial number when requested.
All the prompts should work at their defaults; however it may be necessary to answer the question “What is the location of the directory of C header files that match your running kernel? [/usr/src/linux/include]” with /usr/src/kernels/2.6.18-8.el5-i686/include (or another version of the kernel-devel tools).
Building the vmmon module will fail if gcc is not present.
If the installer is being run under X, the serial number can be pasted into the terminal when requested.
The configuration script will have to be re-run if it finds that inetd or xinetd is not installed.
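If the header prompt does come up, the expected path can be derived from the running kernel; a minimal sketch, assuming the kernel-devel package is installed and follows the usual RHEL naming of version plus architecture:

```shell
# Build the expected header directory from the running kernel version
# and architecture, e.g. /usr/src/kernels/2.6.18-8.el5-i686/include
kernel_headers="/usr/src/kernels/$(uname -r)-$(uname -m)/include"
echo "$kernel_headers"
```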
Extract the VMware management user interface from the archive (tar zxf VMware-mui-1.0.3-44356.tar.gz) and run the installation program (./vmware-mui-distrib/vmware-install.pl):
Display and accept the EULA, then accept defaults for installation of the binary files (/usr/bin), location of init directories (/etc/rc.d), location of init scripts (/etc/rc.d/init.d), installation location (/usr/lib/vmware-mui, creating if necessary), documentation location (/usr/lib/vmware-mui/doc, creating if necessary), allow vmware-install.pl to call /usr/bin/vmware-config-mui.pl and define the session timeout (default is 60 minutes).
Extract the VMware Server console package from the client archive, or download it from the VMware management interface at https://servername:8333/ (it may be necessary to open a firewall port for TCP 8333 using system-config-securitylevel in order to allow remote connections).
Install the VMware Server console (rpm -Uvh VMware-server-console-1.0.3-44356.i386.rpm).
Run the vmware-config-server-console.pl script (not vmware-config-console.pl as stated in the documentation) – accept the EULA and if prompted, enter the port number for connection (default is 902).
At this point, you should have a working VMware Server installation accessible via the VMware Server Console icon on the Applications | System Tools menu, by using the vmware command from a terminal, or via a browser session. The final stage is to set up some virtual machines. I simply copied my previous image from an external hard disk to /var/lib/vmware/Virtual Machines and then opened it in the console (from where I could update the VMware Tools) but the (Windows) VMware Converter utility is available for P2V/I2V/V2V migrations (replacing the VMware P2V Assistant) and preconfigured VMs can be obtained from the VMTN virtual appliance marketplace.
I’m not sure if it’s the gradual improvement in my Linux knowledge, better information on the ‘net, or just that integrating Windows and Unix systems is getting easier but I finally got one of my non-Windows systems to authenticate against Active Directory (AD) today. It may not sound like much of an achievement but I’m pretty pleased with myself.
Active Directory is Microsoft’s LDAP-compliant directory service, included with Windows server products since Windows 2000. The AD domain controller that I used for this experiment was running Windows Server 2003 with service pack 2 (although the domain is still in Windows 2000 mixed mode and the forest is at Windows 2000 functional level) and the client PC was running Red Hat Enterprise Linux (RHEL) 5.
The settings I used can be seen in the screen grab: the Winbind domain (NetBIOS domain name), security model (ADS), Winbind ADS realm (DNS domain name), Winbind domain controller(s) and the template shell (for users with shell access). I then selected the Join Domain button, supplied appropriate credentials and the machine successfully joined the domain (an error was displayed in the terminal window indicating that Kerberos authentication had failed – not surprising as it hadn’t been configured – but the message continued by reporting that it had fallen back to RPC communications, resulting in a successful join).
For reference, the equivalent manual process would have been something like:
Edit the name service switch file (/etc/nsswitch.conf) to include the following:
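The exact lines weren’t captured here, but a typical Winbind configuration adds winbind as a lookup source for users and groups, something like:

```
passwd:     files winbind
shadow:     files winbind
group:      files winbind
```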
Once these configuration changes have been made, AD users should be able to authenticate, but they will not have home directories on the Linux box, resulting in a warning:
Your home directory is listed as:
but it does not appear to exist. Do you want to log in with the / (root) directory as your home directory? It is unlikely anything will work unless you use a failsafe session.
or just a simple:
No directory /home/DOMAINNAME/username!
Logging in with home = “/”.
This is easy to fix, as described in Red Hat knowledgebase article 5367, adding session required pam_mkhomedir.so skel=/etc/skel umask=0077 to /etc/pam.d/system-auth. After restarting the winbind service, the first subsequent login should be met with:
Creating directory ‘/home/DOMAINNAME/username‘
The parent directory must already exist; however some control can be exercised over the naming of the directory – I added template homedir = /home/%D/%U to the [global] section in /etc/samba/smb.conf (more details can be found in Red Hat knowledgebase article 4760).
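Putting those two edits together, a sketch of the relevant fragments (paths and umask as described above):

```
# /etc/samba/smb.conf
[global]
    template homedir = /home/%D/%U

# /etc/pam.d/system-auth
session     required      pam_mkhomedir.so skel=/etc/skel umask=0077
```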
At this point, AD users can log on (using DOMAINNAME\username at the login prompt) and have home directories dynamically created but (despite selecting the cache user information and local authorization is sufficient for local users options in system-config-authentication) if the computer is offline (e.g. a notebook computer away from the network), then login attempts will fail and the user is presented with the following warning:
Incorrect username or password. Letters must be typed in the correct case.
In order to allow offline working, I followed some advice relating to another Linux distribution (Mandriva disconnected authentication and authorisation) but it still worked for me on RHEL. All that was required was the addition of winbind offline logon = yes to the [global] section of /etc/samba/smb.conf along with some edits to the /etc/pam.d/system-auth file:
Append cached_login to auth sufficient pam_winbind.so use_first_pass.
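For reference, the offline-logon changes amount to something like this (the pam_winbind line shown is the existing one with cached_login appended):

```
# /etc/samba/smb.conf
[global]
    winbind offline logon = yes

# /etc/pam.d/system-auth
auth        sufficient    pam_winbind.so use_first_pass cached_login
```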
These changes (along with another winbind service restart) allowed users to log in using cached credentials (once a successful online login had taken place), displaying the following message:
Logging on using cached account. Network ressources [sic] can be unavailable
Unfortunately, the change also prevented local users from authenticating (except root), with the following strange errors in /var/log/messages:
May 30 11:30:42 computername pam_winbind: request failed, but PAM error 0!
May 30 11:30:42 computername pam_winbind: internal module error (retval = 3, user = `username')
May 30 11:30:42 computername login: Error in service module
After a lot of googling, I found a forum thread at LinuxQuestions.org that pointed to account [default=bad success=ok user_unknown=ignore] pam_winbind.so as the culprit. After I removed this line from /etc/pam.d/system-auth (it had already been replaced with account sufficient pam_winbind.so use_first_pass cached_login), both AD and local users could successfully authenticate:
May 30 11:37:25 computername -- username: LOGIN ON tty1 BY username
I should add that this configuration is not perfect – Winbind seems to take a minute or so to work out that cached credentials should be used (sometimes resulting in failed login attempts before allowing a user to log in) and it also seems to take a long time to login when working offline, but nevertheless I can use my AD accounts on the Linux workstation and I can log in when I’m not connected to the network.
If anyone can offer any advice to improve this configuration (or knows how moving to a higher domain/forest functional level may affect it), please leave a comment below. If you wish to follow the full LDAP/Kerberos authentication route described in Scott Lowe’s article (linked earlier), it may be worth checking out Microsoft Services for Unix (now replaced by the Identity Management for Unix component in Windows Server 2003 R2) or the open source alternative, AD4Unix.
Phew! I’ve just read an e-mail from Red Hat informing me that I passed the Red Hat Certified Technician (RHCT) exam that I took this morning.
The confidentiality agreement that I had to sign makes it practically impossible for me to talk about my exam experience but Red Hat’s RHCT exam preparation guide gives the most important details and without giving away any of the specifics, I can confirm that it was one of the most challenging certification exams I’ve ever taken (which is good, because having passed actually means something).
Apart from living and breathing Linux for the last few days, my preparation consisted of attending an RH033 course last year (including the now-discontinued RH035 Windows conversion course – my own quick introduction to Linux for Windows administrators may be useful as a substitute) and spending this week on an RH133 course (which includes the RH202 practical exam); I also have some limited experience from running Linux on some of my own computers and I worked on various Unix systems at Uni in the early 1990s. In short, I’m a competent technician (as the certification title indicates) but not a Linux expert.
“Go away or I will replace you with a very small shell script”
[T-shirt slogan from an attendee at tonight’s Windows PowerShell for IT administrators event.]
I’m back in my hotel room having spent the evening at one of Microsoft UK’s TechNet events and this time the topic was Windows PowerShell for IT administrators. I’ve written previously about PowerShell (back when it was still a beta, codenamed Monad) but tonight’s event was presented by Richard Siddaway from Perot Systems, who is not only an experienced infrastructure architect but also leads the PowerShell UK user group and thinks that PowerShell is one of the best pieces of technology ever (maybe a touch OTT but it is pretty powerful).
The event was demo-heavy and I didn’t grab all of the example commands (Richard plans to publish them on the user group website this week) so this post concentrates on what PowerShell can (and can’t) do and I’ll link to some more examples later.
What is PowerShell?
According to Microsoft, PowerShell is the next generation shell for Windows that is:
As interactive and composable as BASH/KSH.
As programmable as Perl/Ruby.
As production-oriented as AS400 CL/VMS DCL.
In addition to the attributes described above, PowerShell is extensible with snapins, providers and scripts. The provider model allows easy access to data stores (e.g. registry, Active Directory, certificate store), just as if they were a file system.
Scripting is accommodated in various forms, including text (Microsoft’s interpretation of the traditional Unix scripting model), COM (WSH/VBScript-style scripting), Microsoft.NET or commands (PowerShell cmdlets, emitting Microsoft .NET-based objects). As for the types of data that PowerShell can manipulate – it’s extensive, including flat files (CSV, etc.), .NET objects, XML (cmdlets and .NET), WMI, ADSI, ADO/ADO.NET and SQL.
So, PowerShell is a scripting interface with a heavy reliance on Microsoft .NET – are programming skills required?
Not really. As Richard described, just because you can use native .NET code doesn’t mean that you should; however the more that you know, the more you can do with PowerShell.
Basically, simple scripts will need some .NET functions such as [STRING] and [MATH] and advanced scripts can use any .NET object but cmdlets provide an excellent administrative and scripting experience and are easier to work with – writing .NET code can be thought of as a safety net for when something isn’t possible using another method, rather than as a first port of call.
How can I learn to use PowerShell?
PowerShell’s documentation includes a getting started guide, a user guide, a quick reference guide and help text. Microsoft Switzerland has also produced a short Windows PowerShell book that’s available for download free of charge, there are plenty of other books on the subject and a “young but keen” community of administrators exists who are discovering how PowerShell can be put to use; however it’s probably best to just get stuck in – practice some ad-hoc development:
Try things out in an interactive shell.
Stitch things together with utilities and put the results in a script file (then realise that the tools are unsuitable and restart the process).
Once happy with the basic concepts, generalise the code (e.g. parameterise it) and clean it up (make it production-quality).
Once tested, integrate the PowerShell scripts with the infrastructure to be managed and then share scripts with the community.
One more thing – remember that it’s better to have many small scripts that each do one thing well than to have a behemoth of a script that’s very inflexible.
Is there anything else I should know before getting started?
There are a few concepts that it’s worth getting to grips with before launching into PowerShell:
Cmdlets are a great way to get started with PowerShell. Based on verb-noun naming, they each provide specific functionality (e.g. get-help) and make the resulting code self-describing (hence surprisingly easy to read).
The pipeline (think Unix or MS-DOS) – allows the output of one instruction to be fed into the next using the | symbol; however, unlike Unix/MS-DOS, .NET objects are passed between instructions, not text.
There is a text-based help system (cf. man pages on Unix-derived operating systems).
PowerShell is not case-sensitive (although tab completion will sometimes capitalise cmdlets and parameters); however it’s worth understanding that double quotes (" ") and single quotes (' ') are not interchangeable: variables enclosed in double quotes are resolved to their values, whereas single-quoted strings are treated as literal text.
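The same quoting distinction exists in Unix shells, which makes it easy to remember; a quick shell illustration:

```shell
name="world"
double="Hello, $name"   # double quotes: $name is expanded
single='Hello, $name'   # single quotes: the text stays literal
echo "$double"          # prints: Hello, world
echo "$single"          # prints: Hello, $name
```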
There are also some issues to be aware of:
The default installation will not run scripts (not even the user’s profile) and scripts need to be enabled with set-executionpolicy.
There is no file association with PowerShell (for security reasons), so scripts cannot be run automatically or via a simple double-click. Scripts do normally use the .ps1 extension and although PowerShell will recognise a command as a script without this, using the extension helps PowerShell to work out what type of instruction is being issued (i.e. a script).
There is no capacity for remoting (executing code on a remote system) but workarounds are possible using .NET and WMI.
The current working directory is not on the path (as with Unix-derived operating systems), so scripts are launched with .\scriptname.ps1. Dot-sourced scripts (e.g. . .\scriptname.ps1) run in the context of the shell (rather than in their own context).
Although PowerShell supports use of the entire Microsoft.NET framework, not all .NET assemblies are loaded – some may need to be specified within a script.
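The dot-sourcing behaviour mirrors Unix shells, where a dot-sourced script runs in the current shell and can set variables there; a quick sketch (using a throwaway script in /tmp):

```shell
# Write a throwaway script that just sets a variable
cat > /tmp/setvar_demo.sh <<'EOF'
GREETING="hello from the script"
EOF

sh /tmp/setvar_demo.sh    # runs in a child process: GREETING is not set here
. /tmp/setvar_demo.sh     # dot-sourced: runs in the current shell
echo "$GREETING"          # prints: hello from the script
rm /tmp/setvar_demo.sh
```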
Are there any other tools that work with PowerShell?
Various ISVs are extending PowerShell. Many of the tools are currently available as trial versions although some are (or may become) commercial products. Examples include:
Many organisations have realised the value of blogging from a corporate marketing perspective but I’ve recently gained first-hand experience of blogging as a social networking tool. In general, any relationships formed as a result of blogging activities are online (whilst other tools such as LinkedIn attempt to convert personal relationships into more complex social networks) but I keep bumping into people that actually read the stuff that I write here!
Earlier this month, over lunch at the UK highlights from the Microsoft Management Summit event, I realised that the chap sitting next to me had left a comment on this blog a few weeks back and we got talking (Hi Dan); then, tonight I was back at Microsoft for a TechNet event about Windows PowerShell, where another chap introduced himself and said that he reads my blog (Hi Mike). It’s happened before too – I work for a very large organisation and a couple of colleagues have commented that they knew me from my blog before they met me.
Now, just to keep my ego in check, I should remember that this blog’s readership is not enormous (although it has grown steadily since I started tracking the metrics) but bearing in mind that much of what I write is just my notes for later re-use, it’s really good when someone says “hello” and lets me know that they’ve found something I wrote to be useful.
Earlier this week, I added a contact form to the site and I still allow comments on posts (even if 95% of the comments are spam, I get some good feedback too). So, feel free to get in touch if you like what you see here. I can’t promise to write on a particular subject as that’s not the way this blog works (I write about my technology experiences and they, by their very nature, are unplanned) but it’s good to know that sitting here in my hotel room writing something late at night is not a complete waste of time.
Moving back to the social networking point for a moment; it’s worth pointing out that the blogroll on this site is marked up using XFN (XFN is a simple way to represent human relationships using hyperlinks).
It’s not often that I come away from a Microsoft event as excited as I was after the recent Vista after hours session.
You see, we have a problem at home… our DVD player has stopped recognising discs. That shouldn’t really be a problem (DVD players are cheap enough to replace) but it’s a CD/DVD player, tuner and surround-sound amplifier and I don’t really want to have to replace the entire system because of one broken DVD drive. So I took it apart (thinking that Sony might use the same drives in their consumer electronic devices as in normal PCs), only to find that the externally slim slot-loading drive is actually a huge beast with cogs, quite unlike anything I’ve ever seen before.
Faced with the prospect of a hefty repair bill, I began to think that this (combined with the fact that we never know what is on our video tapes) could be the excuse I need to install a media PC in the living room. Well, possibly, but there are some hurdles to overcome first.
It’s not that my wife is demanding – far from it in fact – but she wasn’t too keen on my “black loud cr@p” (my semi-decent hi-fi separates) when we first moved in together and the shiny silver box (the one that’s now broken) was the replacement… I just can’t see anything that isn’t similarly small and shiny being tolerated anywhere other than my den.
I even saw an article in the July 2006 edition of Personal Computer World magazine, which showed how to build a living room PC using old hi-fi separates for the case; however you need a pretty large case for anything that’s going to make use of full-size PC components. Then there’s the issue of the system software… I tried Media Portal a while back but found it a bit buggy; Myth TV is supposed to be pretty good but I believe it can also be difficult to set up properly; the Apple TV sounded good at first – except that it doesn’t have PVR capabilities and relies on many hacks to get it working the way I would like it and (crucially) lists a TV with HDMI or component video inputs as one of its prerequisites – I was beginning to think that the best answer for me may be a Mac Mini with a TV adapter hooked up to my aging, but rather good, Sony Trinitron TV.
Then, at the Vista After Hours event, I saw the latest version of Windows Media Center – Mac OS X includes Front Row but Media Center has some killer features… and I have two spare copies of Windows Vista Ultimate Edition (thank you Microsoft)! Why not install Vista on a Mac Mini, then plug in a USB TV tuner (maybe more than one) and use this as a DVD player, PVR and all round home entertainment system?
I’ve written previously about installing Windows Vista on my Mac but I never activated that installation and I later removed Boot Camp altogether as I found that I never actually bothered to boot into Windows. The latest Boot Camp beta (v1.2) includes Windows Vista support (including drivers for the remote control) so I thought I’d give it a try on my existing Mac Mini before (potentially) splashing out on another one for the living room.
After downloading and installing Boot Camp and running the Boot Camp Assistant to create a Windows driver CD, I moved on to partitioning the disk, only to be presented with the following error:
Backing up and restoring my system… sounds a bit risky to me.
After catching some sleep, I set about the installation of Windows Vista. I had a few issues with Boot Camp Assistant failing to recognise my DVD (either the one I created with the RTM files from Microsoft Connect, or a genuine DVD from Microsoft) – this was the message:
It turns out that Boot Camp Assistant wasn’t happy with me running as a standard user – once I switched to an Administrator account everything kicked into life and I soon had Vista installed after a very straightforward installation. Furthermore, Apple has done a lot of work on Windows driver support and items that didn’t work with my previous attempt (like the Apple Remote) are now supported by Boot Camp 1.2 and Windows Vista. Sadly, my external iSight camera does not seem to be supported (only the internal variants). It also seems that my Windows Experience Index base score has improved to 3.3 (it was 3.0 when I installed Vista as an upgrade from Windows XP with Boot Camp v1.1.2).
After this, it wasn’t long before I had Media Center up and running, connected to the TV in my office – although that’s where the disappointment started. The Apple Remote does work but it’s so simple that menu controls (Media Center and DVD menus) necessitate resorting to keyboard/mouse control – basically all it can do is adjust the volume, skip forwards/backwards, play and pause. What I needed was a Windows Media Remote (and so what if it has 44 buttons instead of six? The Apple remote is far more elegant but six buttons clearly isn’t enough!):
Also, after switching back to my monitor, the display had reverted to basic (2D) graphics and I needed to re-enable the Windows Aero theme. Clearly that’s a little cumbersome and would soon become a pain if I had to do it frequently; however in practice it’s likely that I’ll leave the computer connected to either the TV or the monitor – not both.
I also needed a TV receiver – I was able to pick up an inexpensive Freeview (DVB-T) USB adapter (£29.99 including postage) and a Windows Media Remote (£21.99). Although the digital terrestrial TV signal in my house is weak, I was pretty sure that I’d be able to boost it and, anyway, having a portable Freeview device will always be handy. Windows Vista didn’t recognise the device natively but I downloaded the latest drivers and, despite being unsigned, they installed without issue. Unfortunately, Windows Media Center still didn’t recognise my tuner but the problem turned out to be that I had plugged the device into the Apple keyboard (which I think is USB 1.1) and once I plugged it into one of the Mac’s own USB 2.0 ports I was able to set up the TV functionality within Windows Media Center – no need to bother with the TV guide and tuning software supplied with the device (although it did take a while to download the TV programme guide and to scan for channels).
My local TV transmitter is at Sandy Heath and, although I tried other transmitters too, using the supplied aerial I could only pick up channels in multiplex D. Even the cheap £9.99 Labgear aerial that sits on top of my TV could pick up those channels! Ideally, I’d use an externally-mounted roof aerial but that wasn’t an option, so for £19.99 I picked up the highly-rated Telecam TCE2001 and was able to receive 53 channels in multiplexes 1, 2, B, C and D (and that was without using the signal booster). By boosting the signal, the scan picked up 70 channels, although not all of them were strong enough to view.
As for the Windows Media Remote, I found that it didn’t work with the built-in IR receiver (it needed to use the supplied, but rather bulky Microsoft receiver); however this is not as bad as it sounds – the Microsoft receiver has a long USB cable, meaning that it can be placed next to the TV (the logical place to point the remote at), rather than wherever the computer is.
So, with working drivers and a functioning remote control, Windows Media Center was happy enough to let me watch and record TV using its built-in electronic programme guide…
The final piece of the puzzle was pre-recorded media in a variety of formats such as QuickTime movies and DivX. After transferring the files from an OS X hard drive to something that Windows could read, I decided to see what Windows Media Center could play. I’m still working out exactly which codecs I need – I tried various combinations of XviD/DivX/3ivX plus the AC3 filter and ffdshow – these seemed to enable most of my content; however I’m still experiencing difficulties with some movies that were originally encoded as AVI and then converted for QuickTime/iTunes on the Mac (using Apple QuickTime Pro) and also some unprotected AAC audio with JPEG stills in the video track – e.g. some of the podcasts that I listen to. Through all this codec troubleshooting, one tool that I found incredibly useful was GSpot.
[the original version of this post referred to a codec pack which I have been advised may contain illegal software. As it is not my intention to publicly condone the use of such software, and as I’m not convinced that it is required in order to make this solution work, I have removed it from this post and edited the corresponding comments.]
After going to all of this effort to get Media Center running on my Mac, was it worth it? Yes! The Windows Media Center (2007) interface is excellent, without a hint of the standard Windows interface (as is right for a consumer electronics device) and is simply (and intuitively) controlled using the remote control. It’s not perfect (very few interfaces are) but it is better than Front Row. If I do carry on using this to record TV though, I will need to provide more disk space. One feature that I particularly liked was how, even when working in other Windows applications, a discreet taskbar notification appeared, showing me that Media Center was recording something:
So, having tested this Media Center Mac concept on the Mac Mini that I use for my daily computing, I need to decide whether to donate it for family use in the living room and buy myself something new (MacBook Pro or Mac Pro are just too pricey to justify… but should I get a MacBook or another Mac Mini?) or just to pick up a second-hand Mac Mini for the family. The trouble is that second-hand Mac Minis cost almost as much as new ones. Still, at least I’ve proved the concept… I’ll have to see if this technology bundle passes the WAF test first!
One of my Windows Vista PCs has been refusing to download updates from Windows Update, reporting that:
Windows could not search for new updates
A bit of googling turned up various forum threads/blog posts about this error but most of them recommend stopping the Windows Update service, renaming/removing the %systemroot%\SoftwareDistribution folder, restarting the Windows Update service and attempting an update. That seems to work but Jeroen Jansen’s post on the subject included a very useful comment with this little gem:
“Actually you don’t have to delete the entire SoftwareDistribution folder, just the folders inside it with update cache. This way you can keep the update history.”
I renamed each folder one at a time and it seems that it was WuRedir that was causing the error on my system (that is to say that after that folder was renamed, Windows Update ran successfully, even after restoring all of the other folders, therefore maintaining my history and other configuration).
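For anyone wanting to try the same folder-by-folder approach, the process can be sketched with a few commands from an elevated command prompt (a rough outline only – WuRedir just happened to be the culprit on my system, so substitute each subfolder in turn):

```batch
rem Stop the Windows Update service before touching its cache
net stop wuauserv

rem Rename (rather than delete) the suspect subfolder, so that it
rem can be restored if it turns out not to be the problem
ren %systemroot%\SoftwareDistribution\WuRedir WuRedir.old

rem Restart the service and then retry Windows Update
net start wuauserv
```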
I’m not sure if it was as a direct result, but I’m pretty sure Vista switched from using Windows Update to Microsoft Update at the same time.
Corporate IT departments want to produce customised Windows builds. These builds must be valid when deployed to client PCs (i.e. the product activation period must not have expired!) and, as the product activation timer is ticking away during the customisation process, there needs to be a method to “rearm” product activation.
OEMs want to ship pre-activated versions of the operating system (an arrangement with which I’m sure Microsoft are happy to comply as they need OEMs to preload their operating system and not an alternative, like, let’s say… Ubuntu Linux!), so Microsoft provides these so-called Royalty OEMs with special product keys which require no further activation, under a scheme known as system-locked pre-installation (SLP) or OEM activation (OA) 2.0.
Anti-piracy measures like product activation are to hackers what a red rag is to a bull.
The net result, it seems, is two methods to avoid product activation. The first method can be used simply to delay product activation, as described by Brian Livingston at Windows Secrets. It uses an operating system command (slmgr.vbs -rearm) to reset the grace period for product activation back to a full 30 days. The Windows Secrets article also describes a registry key (HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SL\SkipRearm) and claims that it can be set to 00000001 before rearming, allowing the rearm to take place multiple times (this registry key is reset by the rearm command, which is also available by running rundll32 slc.dll,SLReArmWindows); however, Microsoft claims that the SkipRearm key is ineffective for the purpose of extending the grace period as it actually just stops sysprep /generalize (another command used during the imaging process) from rearming activation (something which can only be done three times) and does not actually reset the grace period (this is confirmed in the Windows Vista Technical Library documentation).
Regardless of that fact, the rearm process can still be run three times, giving up to 120 days of unactivated use (the initial 30 days, plus three rearms, each one providing an additional 30 days). That sounds very useful for both product evaluation and for corporate deployments – thank you very much Microsoft. According to Gregg Keizer at Computer World/PC World Magazine, a Microsoft spokesperson has even confirmed that it’s not a violation of the EULA. That is good.
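The rearm process itself can be sketched with a few commands (a hedged outline, run from an elevated command prompt on Windows Vista – and note Microsoft’s caveat above that SkipRearm does not actually extend the grace period):

```batch
rem Display detailed licensing status, including the remaining grace period
cscript %windir%\system32\slmgr.vbs -dlv

rem Reset the activation grace period to a full 30 days (usable three times)
cscript %windir%\system32\slmgr.vbs -rearm

rem The equivalent rearm call via rundll32, as mentioned above
rundll32 slc.dll,SLReArmWindows

rem The SkipRearm registry value described in the Windows Secrets article
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SL" /v SkipRearm /t REG_DWORD /d 1 /f
```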
I couldn’t possibly confirm or deny whether or not that method works… but Microsoft’s reaction to the OEM BIOS hacks would suggest that this is not a hoax. Microsoft’s Senior Product Manager for Windows Genuine Advantage (WGA), Alex Kochis, describes the paradox method as:
“It is a pretty labor-intensive [sic] process and quite risky.”
(as I indicated above). Commenting on the vstaldr method, he said:
“While this method is easier to implement for the end user, it’s also easier to detect and respond to than a method that involves directly modifying the BIOS of the motherboard”
Before continuing to hint at how Microsoft may respond:
“We focus on hacks that pose threats to our customers, partners and products. It’s worth noting we also prioritize our responses, because not every attempt deserves the same level of response. Our goal isn’t to stop every ‘mad scientist’ that’s on a mission to hack Windows. Our first goal is to disrupt the business model of organized counterfeiters and protect users from becoming unknowing victims. This means focusing on responding to hacks that are scalable and can easily be commercialized, thereby making victims out of well-intentioned customers.”
Which I will paraphrase as “it may work today, but don’t count on it always being that way”.
Note that I’m not encouraging anybody to run an improperly licensed copy of Windows. That would be very, very naughty. I’m merely pointing out that measures like product activation (as with any form of DRM) are more of an inconvenience to genuine users than they are a countermeasure against software piracy.
This post is for informational purposes only. Please support genuine software.