November 2008 MVUG meeting announced


Those who attended the first Microsoft Virtualization User Group (MVUG) meeting in September will probably appreciate the quality of the event that Patrick Lownds and Matthew Millers put together with guest speakers from Microsoft (Justin Zarb, Matt McSpirit and James O’Neill) presenting on the various elements of the Microsoft Virtualization line-up (which reminds me… I must finish up that series of blog posts…).

The next event has just been announced for the evening of 10 November (at Microsoft in Reading) with presentations on Virtualization Solution Accelerators and System Center Data Protection Manager 2007 (i.e. backing up a virtualised environment) – register for the physical event – or catch the event virtually via LiveMeeting.

Timezone blindness


<rant>Daylight saving time is an outdated concept, a complete nuisance and should be abolished.</rant>

I’m in the UK and I have a call with a Microsoft Product Group in Redmond (WA) tonight at 12:00 PST. US Pacific time is 8 hours behind the UK, and we’re both observing daylight saving time and in the northern hemisphere… or so I thought (I’m still pretty sure about the northern hemisphere bit).

LiveMeeting told me that the meeting had not started yet and to wait until the scheduled meeting time before trying again, so I checked the current time in the US and, sure enough, it was only 11:00 on the west coast. Then I checked the meeting request and saw that Google Calendar had picked up the time as UTC/GMT-7 (which is correct for Pacific Daylight Time), but in the summer the UK is not on Greenwich Mean Time (GMT) but British Summer Time (BST) and somehow (possibly by Google Calendar, possibly by Microsoft Outlook, possibly by me), the iCalendar (.ics) file that Microsoft provided when I registered for the event had been mangled, so my calendar only had a 7-hour time difference. Still, at least I was early not late…

In future, I’ll be making good use of the other link in the e-mail from Microsoft – the world clock timezone converter – which takes into account daylight saving time (DST) as well as the local time zone.
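For what it’s worth, this sort of conversion is easy to sanity-check in code rather than trusting a chain of calendar software. Here’s a minimal sketch using Python’s zoneinfo module (Python 3.9+; the date is just an example from around the time of this post, and the zone names are standard IANA identifiers):

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # The meeting: 12:00 in Redmond, on a date when both the US and
    # the UK are still on daylight saving time
    meeting_pacific = datetime(2008, 10, 9, 12, 0,
                               tzinfo=ZoneInfo("America/Los_Angeles"))

    # Europe/London handles the GMT/BST switch automatically
    meeting_uk = meeting_pacific.astimezone(ZoneInfo("Europe/London"))

    print(meeting_pacific.strftime("%H:%M %Z"))  # 12:00 PDT (UTC-7 in summer)
    print(meeting_uk.strftime("%H:%M %Z"))       # 20:00 BST (UTC+1 in summer)

An 8-hour difference, not 7 – which is exactly what the mangled .ics file got wrong.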

Lusting after the new aluminium MacBook


I really like my Apple MacBook. It’s expensive (compared with other similarly specified PCs) but I really enjoy using it – whether I’m running Mac OS X or Windows. Even so, I’ve always fancied an aluminium Mac but the Mac Pro was too expensive, I didn’t like the keyboard on the MacBook Pro and I still think the MacBook Air is little more than a toy.

[Image: the new aluminium MacBook – used courtesy of Apple]

A few hours ago, Apple announced the MacBook that I’ve been waiting for. The only problem is that, with a 9-month-old MacBook White, there is no way I can justify the upgrade (even if I did have any change left in the piggy bank…)

I guess it will have to join that Nikon D3 DSLR on my wishlist! Talk about a “first-world problem”.

Windows 7 – or is it 6.1?


There’s a lot of speculation about Windows 7 right now but specifics are a bit thin on the ground. Aside from the Engineering Windows 7 blog (which is information-rich but makes some of my blog posts look short), as I said last year, pretty much the best place to watch right now is Paul Thurrott’s Windows 7 FAQ (Michael Pietroforte has a synopsis for IT administrators). Come PDC and WinHEC, the ‘net will be awash with Windows 7 news as that’s when developers and journos will finally get their grubby mitts on a pre-release version of Microsoft’s latest operating system.

There has been some official news this week though. Yesterday, Mike Nash, Corporate Vice President for Windows Product Management at Microsoft, announced on the Windows Vista blog that Windows 7 will not just be the codename but also the actual name for the next version of Windows (I understand that relates to the Windows client operating system, and that Windows Server 2008 R2 will be the name for the server release):

“Over the years, we have taken different approaches to naming Windows. We’ve used version numbers like Windows 3.11, or dates like Windows 98, or ‘aspirational’ monikers like Windows XP or Windows Vista.”

OK, I get it. And Windows 7.0 would make sense for a major update (as Mike explained today in a follow-up post, but I’ll provide a few more details here):

  • Windows 1.0 and 2.0 (including the Windows/286 variant) existed as products but were not widely adopted.
  • Windows 3.0, 3.1 (codename Janus), Windows for Workgroups 3.1 (codename Kato) and 3.11 (codename Snowball) were the first widely adopted versions.
  • At that time the OS forked and Windows NT (New Technology) was born at v3.1, followed by 3.5 (codename Daytona) and 3.51 (all minor releases).
  • The original Windows (not NT) 4.0 (codename Chicago/Detroit/Knoxville/Nashville) was called Windows 95 (and there were several variations of this operating system).
  • Windows NT 4.0 (codename Cairo) was the first major update for Windows NT.
  • Windows 98 (codename Memphis) and Windows ME (Millennium Edition) were minor updates from Windows 95 (still 4.x) and then someone saw sense and closed down that product line, unifying future releases on the NT codebase.
  • Windows NT 5.0 was marketed as Windows 2000 (a major update).
  • Windows NT 5.1 (codename Whistler) was marketed as Windows XP (a minor update).
  • Windows NT 5.2 was marketed as Windows Server 2003 (codename Whistler Server) and Windows Server 2003 R2.
  • Windows NT 6.0 (a major release) was marketed as Windows Vista (codename Longhorn) and Windows Server 2008 (codename Longhorn Server).

(See Bitzenbytes for more details of Windows development that I chose to skip over here.)

So far, this all makes sense (at least to me)… but then Mike Nash announced that:

“We decided to ship the Windows 7 code as Windows 6.1 – which is what you will see in the actual version of the product in cmd.exe or computer properties.”

So, Windows 7 (codename Blackcomb/Vienna/7) will not be v7.0 (indicating a major release) but will actually be v6.1 (i.e. a minor release). Based on recent history, that really ought to fit with a name like Windows Vista R2 (a marketing disaster waiting to happen), Windows Server 2008 R2 or Windows 2010. Nash continues by highlighting that:

“Windows 7 is a significant and evolutionary advancement of the client operating system. It is in every way a major effort in design, engineering and innovation. The only thing to read into the code versioning is that we are absolutely committed to making sure application compatibility is optimized for our customers.”

So, Windows 7 will be more like the move from Windows 2000 to Windows XP/2003: a significant step forward but still not a major update (unlike NT 4.0 to Windows 2000, or XP/2003 to Vista/2008). That’s good – especially for corporate IT departments struggling with Vista application compatibility (mostly through their own lack of foresight, it should be noted). I understand why it’s numbered 6.1 internally, but why confuse the issue by calling it 7 for marketing purposes?
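To see why the version number matters so much for application compatibility, consider how programs typically read it. A minimal sketch in Python (sys.getwindowsversion() is a real call, available when running on Windows; the exact-match test is the illustrative bug):

    import sys

    # On Windows, this exposes the kernel version numbers that
    # applications (often badly) test against
    v = sys.getwindowsversion()
    print(v.major, v.minor)  # Windows Vista reports 6 0; "Windows 7" reports 6 1

    # The classic compatibility bug: an exact-match check written for
    # Vista silently fails on 6.1...
    if (v.major, v.minor) == (6, 0):
        print("Vista detected")

    # ...whereas a minimum-version check keeps working. Shipping Windows 7
    # as 6.1 means even sloppy checks against a 6.x major version still pass.
    if (v.major, v.minor) >= (6, 0):
        print("Vista-era APIs available")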

I have a feeling that Windows 7 will not, despite yesterday’s announcement, be the final product name.

Microsoft Virtualization: part 5 (presentation virtualisation)


Continuing the series of posts on Microsoft Virtualization technologies, I’ll move onto what Microsoft refers to as presentation virtualisation (and everyone else calls terminal services, or server based computing).

Like host virtualisation, terminal services is not a new technology: Microsoft has provided basic Terminal Server capabilities within Windows Server for many years, with Citrix providing the enterprise functionality for those who need it. With Windows Server 2008, Microsoft has taken a step forward, with new Terminal Services features including:

  • Terminal Services Web Access – provides a web portal for access to RemoteApps: applications which run on the terminal server but have the look and feel of a local application (albeit subject to the limitations of the RDP connection – this is probably not the best way to deploy graphics-intensive applications). Whilst this is a great feature, it is somewhat let down by the fact that the Web Access portal is not customisable and that all users see all RemoteApps (although permissions are applied to control the execution of RemoteApps). For web access to RemoteApps, v6.1 of the Remote Desktop Connection client is required, but for v6.0 clients an MSI may be created using RemoteApp Manager (and deployed using Active Directory group policy). A sketch of the .rdp settings behind a RemoteApp follows this list.
  • Terminal Services Gateway – provides a seamless connection to Terminal Services (over HTTPS) without the need for a VPN. It’s not intended to replace a firewall (e.g. ISA Server) but it does mean that only one port (443) needs to be opened, and it may be an appropriate solution when a local copy of the data is not required or when bandwidth/application characteristics would make the VPN experience poor.
  • Terminal Services Session Broker – a new role to provide load balancing and which enables a user to reconnect to an existing session in a load-balanced terminal server farm.
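As mentioned in the Web Access bullet above, a RemoteApp connection ultimately boils down to a handful of settings in a .rdp file. Here’s a minimal sketch in Python of generating one (the server name and application alias are hypothetical examples; the property names are standard RDP file settings):

    # Sketch: write a RemoteApp .rdp file; ts01.example.com and WordPad
    # are placeholder values
    settings = {
        "full address:s": "ts01.example.com",       # the terminal server
        "remoteapplicationmode:i": "1",             # RemoteApp, not a full desktop
        "remoteapplicationprogram:s": "||WordPad",  # published application alias
        "remoteapplicationname:s": "WordPad",       # name shown to the user
    }

    with open("wordpad.rdp", "w") as f:
        for key, value in settings.items():
            f.write(f"{key}:{value}\n")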

There are improvements on the client end too – the client enhancements in Remote Desktop Connection 6.1 (provided with Windows XP SP3, Windows Vista SP1 and Windows Server 2008) are detailed in Microsoft knowledge base article 951616.

One of the more significant improvements in RDP 6.1 (but which requires Windows Server 2008 Terminal Services printing) is Terminal Services EasyPrint. Whereas printing is traditionally problematic in a server-based computing environment (matching drivers, etc.), Terminal Services EasyPrint presents a local print dialog and prints to the local printer – no print drivers are required on the server and there is complete transparency if a 32-bit client is used with a 64-bit server. If the application understands XPS (i.e. it uses Windows Presentation Foundation) then it prints XPS using the EasyPrint XPS driver (which creates an XPS spool file); otherwise a GDI-to-XPS conversion module is used (e.g. for Win32 applications). On the client side, the spool file is received over RDP and the Remote Desktop Connection’s EasyPrint plugin spools the XPS through an XPS printer driver (converted by the print processor if required). If the print device does not support XPS, the print job is converted to EMF by the Microsoft .NET Framework and printed using a GDI printer driver.

[Diagram: the Terminal Services EasyPrint print path]
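The print path may be easier to follow in code. Here’s a toy sketch of the decision flow described above (the function names are mine, not any Microsoft API):

    def server_spool(app_is_xps_aware: bool) -> str:
        """Server side: everything is spooled as XPS, one way or another."""
        if app_is_xps_aware:  # WPF applications print XPS natively
            return "XPS spool file via EasyPrint XPS driver"
        return "XPS spool file via GDI-to-XPS conversion"  # e.g. Win32 apps

    def client_print(printer_supports_xps: bool) -> str:
        """Client side: the EasyPrint plugin receives the XPS file over RDP."""
        if printer_supports_xps:
            return "spool through XPS printer driver"
        return "convert to EMF (.NET Framework) and print via GDI driver"

    for app in (True, False):
        for printer in (True, False):
            print(server_spool(app), "->", client_print(printer))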

Whilst Microsoft’s presentation virtualisation offerings may not be as fully-featured as those from other vendors, most notably Citrix, they are included within the Windows Server 2008 operating system and offer a lot of additional functionality when compared with previous Windows Server releases.

In the next post in this series, I’ll look at how the four strands of Microsoft Virtualization (host/server, desktop, application and presentation) are encapsulated within an overall management framework using System Center products.

Microsoft Virtualization: part 4 (application virtualisation)


I’m getting behind on my blogging (my day job keeps getting in the way) but this post continues the series I started on Microsoft’s virtualisation technologies. So far, I’ve set the scene, looked at host/server virtualisation and desktop virtualisation and this time it’s Microsoft Application Virtualization – formerly known as SoftGrid and also known as App-V.

Microsoft provides a technical overview of App-V but the basic premise is that applications are isolated from one another whilst running on the same operating system. In fact, with App-V, the applications are not even installed: they are sequenced into a virtual environment by monitoring the file and registry changes made by the application and wrapping these up as a single file, which is streamed to users on demand (or loaded from a local cache) and executed in its own “bubble” (technically known as a SystemGuard environment). Whilst not all applications are suitable for virtualisation (e.g. those that run at system level, or require specialist hardware such as a “dongle”), many are – and one significant advantage is that virtualised applications can also be run in a terminal services environment (without needing separate packages for desktop and server-based computing). It’s worth considering, though, that virtualising an application doesn’t change the license – so, whilst it may be possible to run two versions of an application side by side, it may not be allowed under the terms of the end user license agreement (e.g. Internet Explorer).

I wrote a post about application virtualisation using Softricity SoftGrid a couple of years ago but, with App-V v4.5, Microsoft has made a number of significant changes. The main investment areas have related to allowing virtualised applications to communicate (through a new feature called dynamic suite composition), extending scalability, globalisation/localisation and security.

Of the many improvements in App-V v4.5, arguably the main feature is the new dynamic suite composition functionality. Using dynamic suite composition, the administrator can group applications so that shared components are re-used, reducing the package size and allowing plugins and middleware to be sequenced separately from the applications that will use them. This is controlled through definition of dependencies (mandatory or optional) so that two SystemGuard environments (App-V “bubbles”) can share the same virtual environment.
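As a rough mental model (an illustration only – not App-V’s actual API or package format), dynamic suite composition amounts to dependency resolution between sequenced packages:

    from dataclasses import dataclass, field

    @dataclass
    class Package:
        """A sequenced package; the names here are illustrative, not App-V's."""
        name: str
        mandatory_deps: list = field(default_factory=list)
        optional_deps: list = field(default_factory=list)

    def compose_environment(app, deployed):
        """Resolve which packages share the application's virtual environment."""
        env = []
        for dep in app.mandatory_deps:  # must always be present
            env += compose_environment(dep, deployed)
        for dep in app.optional_deps:   # loaded only if actually deployed
            if dep.name in deployed:
                env += compose_environment(dep, deployed)
        env.append(app.name)
        return env

    # A middleware package sequenced once and shared with an application:
    jre = Package("JavaRuntime")
    app_a = Package("AppA", mandatory_deps=[jre])
    print(compose_environment(app_a, deployed={"JavaRuntime"}))
    # ['JavaRuntime', 'AppA']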

On the scalability front, App-V 4.5 also takes a step forward, as it provides three delivery options to strike a balance between enterprise deployment in a distributed environment and retaining the benefits of application isolation and on-demand delivery. The three delivery options are:

  • Full infrastructure – with a desktop publishing service, dynamic delivery and active package upgrades but requiring the use of Active Directory and SQL Server.
  • Lightweight infrastructure – still allowing dynamic delivery and active package upgrades but without the need for SQL Server, so that application streaming capability can be added to Microsoft System Center Configuration Manager or third-party enterprise software delivery frameworks.
  • Standalone mode – with no server infrastructure required and MSI packages as the configuration control, this mode allows standalone execution of virtual applications and is also interoperable with Microsoft System Center Configuration Manager or third-party enterprise software delivery applications, but it does not allow dynamic delivery or active package upgrades.

Additional scalability enhancements include background streaming (auto-load at login or at first launch, for quick launch and offline availability) and the configuration of application source roots (for a local client to determine the appropriate server to use), as well as client support for Windows Server 2008 Terminal Services (in Microsoft Application Virtualization for Terminal Services). There are also new options for resource targeting for the application, open software description (OSD) and icon files; enhanced data metering (a WMI provider to collect application usage information); and better integration with the Windows platform (Microsoft Update and volume shadow copy service support, a System Center Operations Manager (SCOM) 2007 management pack, group policy template support, a best practice analyser and improved diagnostic support). Finally on the scalability front, the sequencer has been enhanced with a streamlined process (fewer wizards and fewer clicks), MSI creation capability (for standalone use), improvements at the command line and differential SFT file support for updates.

App-V is not the only application virtualisation technology – notable alternatives include VMware ThinApp (formerly Thinstall) and Symantec/Altiris SVS – but it is one of the best-known. It’s also an important component of the Microsoft Virtualization strategy. In the next post in this series, I’ll take a look at presentation virtualisation.

Finally, it’s worth noting that I’m not an application virtualisation expert – but Aaron Parker is – so if you’re interested in this topic, it’s worth adding Aaron’s blog to your feed reader.

Netgear ReadyNAS: low-cost RAID storage for the consumer


A few months back I was looking into how to solve my home data storage issue (huge photo collection, huge iTunes library, increasing use of digital file storage, big disaster waiting to happen) and I thought about buying a Drobo. At least, I did until my friend Garry Martin said something to the effect of “that looks expensive for what it is… what about a Windows Home Server?”

Although I was initially a fan of WHS, meeting some of the guys who produce it last November left me with more uncertainty than confidence – and what confidence remained was shattered when I realised that they had managed to take a perfectly stable Windows Server with NTFS and produce a data corruption issue when accessing files directly across the network. The issue may be obscure – and it’s been patched now – but the fact that it was produced by messing around with an otherwise stable file system, to make it do things that it was never designed to do, makes it no less alarming.

Whilst the Drobo is undoubtedly a really neat solution, it’s also more than I need. My real requirements are: RAID; at a low price point; preferably with a decent (Gigabit Ethernet) network connection (the Drobo is just storage – for network attachment an additional DroboShare device is required); running independently of my server (i.e. an appliance); solidly built; and good-looking on the desk. What I found at BroadbandBuyer.co.uk was a Netgear ReadyNAS Duo – basically a 2-disk RAID 1 or X-RAID NAS box (Netgear bought Infrant Technologies last year and it’s actually their technology, rebadged as a Netgear device). Whilst the RND2150 I bought had just a single 500GB (Seagate Barracuda) disk, it was less expensive for me to reallocate that disk elsewhere and to buy two more 1TB Barracuda 7200.11 disks (ST31000340AS) than to buy a larger ReadyNAS (go figure). The ReadyNAS was about £230 (with a mail-in offer for an iPod Shuffle – received just a few days later), the disks were about £90 each (or they were last week – now they’re down to £78 as larger disks come on stream), and the 500GB disk will match the others in my server if I want to add some internal RAID there sometime. At just under £400 all-in it wasn’t cheap, but the TrustedReviews and Practically Networked writeups were positive and I decided to go for it (there’s also a “definitive guide” to the ReadyNAS Duo on the ReadyNAS community site, which is great for a rundown of the features but is probably not a particularly objective review).

[Image: Netgear ReadyNAS Duo]

Once I got the ReadyNAS home, I realised how solidly built it is, and how much value it includes. In addition to all the usual file protocols (CIFS, NFS, AFP, FTP, HTTP(S) and rsync), the ReadyNAS has a variety of additional server functionality (streaming media and discovery services, BitTorrent client, photo-sharing, etc. – which can be extended by accessing it directly as a Linux box), a thriving community and excellent Mac support (even providing a widget for Mac OS X to monitor the box). In fact, the only downside I’ve found so far is the lack of Active Directory support in the low-end ReadyNAS Duo (higher-specification devices can join a domain and one version of firmware I had on my RND2150 let me do so, but promptly left the web management interface inaccessible, resulting in the need for me to back up the data, perform a factory reset, and then copy the data back on again).

For small and medium businesses, there are higher-end ReadyNAS devices with more drive space and additional functionality but the ReadyNAS Duo is the one with the low price point.

Having expanded my ReadyNAS to 2x1TB disks (I was initially sceptical of the expansion process but, having done it, I’m now pretty impressed and will write a separate post on the subject), my new storage regime will use the ReadyNAS for all onsite storage, periodically backed up to a separate USB disk for offsite storage. In addition, I’ll continue to back up my entire MacBook hard disk to a Western Digital Passport drive (which I can use to boot the system if/when the primary disk goes belly-up), with an additional copy of the iTunes library and photos on the ReadyNAS, and Mozy to provide backup in the cloud for work in progress (at least until Live Mesh has a Mac client and increased storage capabilities). In the meantime, my server will continue to be used primarily for virtual machines and any essential data from the VMs will be copied to the ReadyNAS.
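Since the ReadyNAS speaks rsync, the periodic copies can be scripted. A minimal sketch (the hostname, rsync module and paths are hypothetical – adjust to your own environment, and rsync itself must be installed):

    import subprocess

    SOURCE = "/Users/mark/Pictures/"            # trailing slash: copy contents
    DEST = "rsync://readynas/backup/pictures/"  # rsync daemon module on the NAS

    # --archive preserves timestamps/permissions; --delete keeps an exact
    # mirror (so deletions propagate - one reason an offsite copy still matters)
    subprocess.run(["rsync", "--archive", "--delete", SOURCE, DEST], check=True)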

For many people, a single disk backup (e.g. USB hard disk) may suffice (even if it does represent a risk in terms of disaster recovery) and I’ll admit that this solution is not for everyone – but, for anyone with a lot of data hanging around at home and who doesn’t want the hassle of maintaining a full Windows or Linux server, the ReadyNAS appliance is worth considering, with expandable RAID providing expansion capabilities as well as peace of mind.

Why the UK’s National Rail website is an IT disaster


In a few weeks’ time, my wife is taking the kids to her parents’ house by the seaside for a week. I’ve got the week off work too but I’ve got a huge list of outstanding jobs to do at home, so I’m only spending part of the week with them. It’s daft to take two cars (especially as we will be travelling together in one direction), so I thought I’d try public transport…

Problem number one is that I live in a rural area, so public transport is not exactly plentiful – even though I’m just 12 miles from the thriving new “city” of Milton Keynes, we have just one or two (unreliable) buses an hour, taking at least 40 minutes on an indirect route to a location that is still just over a mile from the station. Not exactly convenient – and, at £3 for a single fare, not exactly inexpensive either!

Then, after a brisk mile-long walk from the shopping centre to the railway station, I’ll be catching a train to London, the tube across London, and then another train to sunny Dorset. The rail journey will take just over 3 and a half hours – which is not bad really (it would take me about 2 and a half to drive using a much more direct route – or around 5 hours by National Express coach) – but the National Rail Journey Planner tells me that the cheapest available fare is £56.60, with no advance fares available (at which price taking the car is suddenly sounding more economical). RailEasy and The Trainline also reckon that the lowest-cost single fare is £56.50 – despite the latter site claiming to save travellers 39% on average compared with buying a ticket at the station on the day!

Luckily, I spoke to my father, who knows far more about UK railways than would generally be considered healthy – and his advice came up trumps – instead of buying a single ticket for the entire journey, it seems the thing to do is to use the journey planner to work out which trains to catch, and then try again for each leg of the journey.

Using this method, I found I can get the Milton Keynes–London leg for £14.50 (off-peak return… not using the return portion), then cross London on the tube for £4 cash (or £1.50 with an Oyster card), and I can currently buy an advance single from London to my eventual destination in Dorset for £9 or £17 (depending on the time of day I travel). That way, £56.60 becomes £25 – and that is not really bad value at all (especially when compared with £41.50 for the significantly slower coach journey).
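For the record, the split-ticket arithmetic (prices as quoted above):

    # Split-ticket legs versus the £56.60 through fare from the journey planner
    legs = {
        "Milton Keynes to London (off-peak return, outward half only)": 14.50,
        "cross-London tube (Oyster)": 1.50,
        "London to Dorset (advance single, off-peak)": 9.00,
    }
    print(f"£{sum(legs.values()):.2f}")  # £25.00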

Why is this relevant on a technology weblog? Well, as if a travel website that is incapable of accurately calculating the lowest available fare were not bad enough, the next stage of the process is an IT disaster – the sticking plaster that bonds together the various websites used to provide this “service”. The National Rail website has the ability to hand off to third parties for ticket purchase, which sounds great – web services in action – except that I got more than my fair share of failed fare lookups (retrying seemed to result in success) and, when I was passed across to the two train operating companies that I used (London Midland and South West Trains), I had to register with each website individually – despite the underlying infrastructure being hosted under the oddly-named trainsfares.co.uk domain by The Trainline (where I also have an account) – and an error page after my session timed out referred to yet another train operating company (with which I do not)! I could almost excuse the National Rail website for being aesthetically dull (I find its basic colour scheme and busy layout present a navigational nightmare – in web terms, rather than in its intended purpose as a travel aid!) but the results it produces are not even consistent – the train that I’ll be using for the Milton Keynes to London leg of the journey disappears from the list if I use the earlier and later links to navigate back and forth through the available journey options!

Is it too much to ask that, now that train fares in the UK have (finally) been simplified, the systems should be able: to calculate the various legs of the journey and find me the absolute lowest fare; to reliably integrate to provide consistent results; and, where several train operating companies use the same service provider, to let a single online account buy tickets for the entire rail network?

Maybe I just want too much…

Some more useful Hyper-V links


Regular readers will have realised by now that the frequency of posts on this blog is almost inversely proportional to the amount of my spare time that the day job eats up and, after a period of intense blogging when I had a fairly light workload, the last couple of weeks have left little time for writing (although James Bannan and I did finally record the pilot episode of our new podcast last night… watch this space for more information).

In the absence of my planned post continuing the series on Microsoft Virtualization and looking at application virtualisation (which will make an appearance, just maybe not until next week), here are a few Hyper-V links that might come in useful (supplementing the original list of Hyper-V links I published back in July):

Recording ringtones for the Apple iPhone


In the western world (well, certainly in the UK), mobile phone ringtones represent a highly profitable market but, as I understand it, no additional revenue is passed to the recording artists – and on that basis I’m not going to line the pockets of music industry executives when I’ve already paid for music once.

One of the advantages of being an iPhone user and having a Mac at my disposal is the ability to record my own ringtones. Whilst there are commercial products that can do this (like iToner from Ambrosia Software), if you have Apple GarageBand 4.1.1 or later, you can record your own ringtones (40 seconds or less) and transfer them to iTunes to sync with the phone (as described in Apple support article HT1358, or with screenshots on Lifehacker). If you need to fade the ringtone in/out, adjusting the track volume is described in an AppleInsider forum post.

This capability is not new, and is pretty well documented, but I’ve spent far too much time playing around with it and now I need to go to sleep!