Creating Windows file system shares remotely

Yesterday, one of my colleagues came to me with a problem to solve. He wanted a user to be able to create a share remotely (i.e. without logging onto the server console physically or via terminal services). I suggested allowing the user access to a shared folder at a higher level in the directory structure; once connected to that share, they could create a new subfolder and share it out. Unfortunately, my colleague returned later to say that Windows doesn’t allow folders to be shared when connected via a share, so he had to find another way around the issue – he found two possible answers:

Even though rmtshare.exe dates back to the days of Windows NT 4.0, I was able to use it to create a share (and delete it again) on a Windows Server 2003 server from a Windows Vista client (although I did have to elevate my permissions before it ran successfully).
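
For anyone who wants to script this, the same result can be achieved programmatically as well as with the venerable rmtshare.exe. The sketch below is illustrative only – the server, share and path names are made up – and it assumes the pywin32 package is installed and that the account running it has administrative rights on the target server:

  import win32net
  import win32netcon

  def create_remote_share(server, share_name, path, remark=""):
      """Create an ordinary disk share on a remote Windows server."""
      share_info = {
          "netname": share_name,               # name clients will connect to
          "type": win32netcon.STYPE_DISKTREE,  # a standard disk share
          "remark": remark,                    # optional description
          "permissions": 0,
          "max_uses": -1,                      # unlimited connections
          "current_uses": 0,
          "path": path,                        # path as seen on the server itself
          "passwd": None,
      }
      # Level 2 corresponds to the Win32 SHARE_INFO_2 structure
      win32net.NetShareAdd(server, 2, share_info)

  create_remote_share(r"\\fileserver01", "projects", r"D:\Data\Projects",
                      "Created remotely, without logging on to the console")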

Blog spam

I like to receive comments on this blog – it’s always good to hear when my ramblings have helped someone out, or if someone has something else to add to something that I’ve written about – but I hate blog spam.

A few months back, someone left a comment on a post pointing to his own website (and then got upset when the Google index appeared to quote him out of context). I felt sorry for him (and the irony is that any links in comments here are tagged with rel=”nofollow” so they don’t increase PageRank and other search engine placement link counts).

This evening, I’ve spent quite a bit of time removing comments from posts that were just blatant links to suspicious websites, so it’s with regret that I’ve had to enable comment moderation. I won’t screen comments (except to remove the obvious spam) but please bear with me if it takes a while for a comment to appear on the site – sometimes it might take a couple of days for me to approve a comment, whilst other times it might be a few minutes. I still allow anonymous comments and I haven’t yet resorted to word verification – let’s hope I don’t have to.

Whilst on the subject of spam, to all the people of the world who send e-mail because they think I need medication to help with (erhum) “personal problems”, I have a young son and another baby on the way so don’t think there are any issues there. Also, I don’t need cheap software, loans, or advice on hot stocks. All you’re doing is giving me some messages against which to test the intelligent message filter in Exchange Server 2003 (more on that soon).

Grrr.

New tools from Quest for Exchange Server 2007

Exchange Server 2007 has the potential to shake up messaging but there is no direct upgrade path for those organisations still running Exchange Server 5.5 (and there is a surprisingly high number of these). All is not lost though: earlier today I heard Joe Baguley, Global Product Director for Quest Software, give a presentation on the various tools that they now have on offer (the list is impressive) and, interestingly, Quest plan to have Exchange Server 5.5 to Exchange Server 2007 migration tools available when Exchange Server 2007 is released, as well as tools for migrating Exchange public folders to SharePoint.

Why RAID alone is not the answer for backups

I recently came across Gina Trapani’s article on the importance of backing up (the comments are worth a read too). I hear what she’s saying – a couple of years ago I very nearly lost a lot of data when a hard disk died and today I have far more important stuff on disk (like all of my recent photography – including irreplaceable pictures of my son – a digitised music collection and years’ worth of accumulated information), all spread across nearly a terabyte of separate devices.

As we place more and more emphasis on our digital lifestyle, the amount of data stored will continue to grow and that creates a problem, especially for home and small business users.

Optical media degrades over time and since the hard disk I bought for backups is now in daily use with my new Macintosh computer, I need to implement a decent backup regime. As disk sizes increase, a single disk seems like putting all my eggs in one basket, but I also hear people talking about how RAID is the answer.

No it’s not.

The most common RAID levels in use are 0 (striping), 1 (mirroring) and 5 (striped set with parity). RAID 0 does not provide any fault tolerance and RAID 5 needs at least three disks – too many for most home and home office setups – which leaves just RAID 1. Mirrors sometimes fail and, when they do, they can take all of the data with them. Then there’s the additional issue of accidental damage (fire, flood, etc.). What’s really required (in a home scenario) is two or more removable hard disks, combined with a utility such as rsync (Unix) or SyncToy (Windows) to automate frequent backups, with one of the disks kept off site (e.g. with a family member) and the disks rotated frequently.
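
By way of illustration, a few lines of script are all that’s needed to automate the rsync approach for a removable disk. This is just a sketch – the source folders and mount point are examples, and it assumes rsync is available and the backup disk is mounted:

  import subprocess
  from datetime import date

  # Folders to protect and the mount point of the removable backup disk
  SOURCES = ["/Users/mark/Pictures", "/Users/mark/Music", "/Users/mark/Documents"]
  DESTINATION = "/Volumes/BackupDisk/backup"

  for source in SOURCES:
      # -a preserves timestamps/permissions; --delete keeps the copy an exact
      # mirror of the source (files removed at source are removed from the copy)
      subprocess.run(["rsync", "-a", "--delete", source, DESTINATION], check=True)

  print("Backup completed", date.today().isoformat(), "- now rotate this disk off site")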

In an enterprise environment I wouldn’t consider implementing a server without some form of RAID (and other redundant technologies) installed; however, I’d also have a comprehensive backup strategy. For homes and small businesses, RAID is not the answer – what’s really required is a means of easily ensuring that data is secured so that, if a disaster should occur, those precious files will not be lost forever.

Migrating an iTunes music library between PCs

Until recently, I’ve been running iTunes on a Windows XP PC but I’m in the process of migrating to a Mac OS X system. Whilst most data transfers have been straightforward, I found that, after moving the files to a disk that could be accessed by both the Mac and a PC (i.e. a FAT32-formatted external hard disk), getting iTunes to recognise my library was challenging. I’m sure it’s quite a common scenario so I thought I’d post what I did here so that it can be of use to others.

Whilst my scenario involved moving from iTunes 6.0.4 on a PC to 6.0.5 on a Mac, the principle is the same for moving iTunes music libraries between any two PCs (Mac OS X or Windows).

Apple’s advice for moving your iTunes Music folder is fine for moving files on the same system, but their advice for switchers on transferring iTunes Music Library files from PC to Mac just didn’t work for me (well, it works, sort of: simply importing the music files into iTunes loses playlists, history, ratings, etc., whilst importing the music library file itself retrieves those items but won’t find the music files because they have moved location). I need to keep the selections because that’s how I determine what will be synchronised with my iPod – quite simply, my 47GB music collection will not fit on a 4GB iPod Mini!

Luckily, the extensive article on moving iTunes libraries whilst preserving library data at the HiFi Blog gave me all the necessary steps (although it focuses on libraries where iTunes is not used to organise the music – I let iTunes handle that for me). After setting iTunes preferences to point to a folder on my external hard disk (on the Advanced page, under General), I quit iTunes and edited the iTunes Music Library.xml and iTunes Library files that reside in Macintosh HD/Users/username/Music/iTunes/ (even though the music files are on the external hard disk, iTunes keeps its database files in the main user data location). I removed all binary data from the iTunes Library file to leave a zero-length file, and replaced all instances of the original library location in iTunes Music Library.xml with the new library location (for me this was from file://localhost/C:/Documents%20and%20Settings/Mark/My%20Documents/My%20Music/iTunes/iTunes%20Music/ to file://localhost/Volumes/EXTERNAL%20HD/Music/iTunes/iTunes%20Music/). I found the easiest way to edit the files was on the PC (using WordPad – depending on the size of the music library, Notepad may not cope with the file sizes involved). It’s also worth noting that, on a PC, the iTunes Library file has a .ITL extension.
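
If you’d rather not do the find-and-replace by hand in WordPad, a few lines of script will perform the same edit. This is only a sketch of the process described above (the old and new prefixes shown are from my own migration – substitute your own) and it assumes both files have been backed up first:

  # Rewrite the library location prefix in iTunes Music Library.xml and empty
  # the binary iTunes Library file so that iTunes rebuilds it from the XML
  OLD_PREFIX = ("file://localhost/C:/Documents%20and%20Settings/Mark/"
                "My%20Documents/My%20Music/iTunes/iTunes%20Music/")
  NEW_PREFIX = "file://localhost/Volumes/EXTERNAL%20HD/Music/iTunes/iTunes%20Music/"

  XML_FILE = "iTunes Music Library.xml"
  LIBRARY_FILE = "iTunes Library"   # has a .itl extension on a Windows PC

  with open(XML_FILE, "r", encoding="utf-8") as f:
      xml = f.read()
  with open(XML_FILE, "w", encoding="utf-8") as f:
      f.write(xml.replace(OLD_PREFIX, NEW_PREFIX))

  # Truncate the binary library file to zero bytes
  open(LIBRARY_FILE, "wb").close()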

After making sure that the edited files were back in the Macintosh HD/Users/username/Music/iTunes folder and starting iTunes, I was greeted with an Importing iTunes Music Library.xml message before:

Organizing Files

The file “iTunes Library” does not appear to be a valid music library file. iTunes has attempted to recover your music library and has renamed this file to “iTunes Library (Damaged)”.

Actually that message is incorrect. On my system, there is no iTunes Library (Damaged) file but there is a Previous iTunes Libraries folder, which contains a file called iTunes Library 2006-7-12.

iTunes then continued to analyse and determine the song volume for 2344 of the 6766 items in my music library (I’m not sure what this actually means and it seems strange that it was not done for the entire collection), after which the library was available for use (almost) as normal, with all my tracks, playlists, selections, date last played, etc. I say almost normal because a couple of additional playlists (Podcasts and Videos) and the Podcast subscriptions don’t get migrated, but that’s easy to fix. Again, it was the HiFi Blog article that helped me out – browse the library to view all music files with a genre of Podcast and drag them onto the Podcasts heading in the source column, then click on resubscribe for each Podcast to enable new downloads (the existing downloads should all still be available).

The next step was to hook up my iPod which synchronised normally (I vaguely remember selecting that it was connected to a Windows PC the first time I set it up and expected to have to do some reconfiguration for the Mac but it seems that was not required). The only exception was for my purchased music, for which I received the following message:

Some of the songs in the iTunes music library, including the song “songname”, were not copied to the iPod “ipodname” because you are not authorised to play them on this computer.

I found this strange because I’d already accessed the iTunes Music store from iTunes using my Apple ID, and although there was a “Deauthorize Computer…” option on the Advanced menu, I couldn’t see an equivalent option to authorise it (so I naturally assumed it was already authorised). Attempting to access my purchased music in Front Row gave a better clue:

This computer is not authorized to play the selected song.

To authorize your computer, select the song in iTunes and enter the account name and password used to purchase the song from the iTunes Music Store.

Sure enough, this did the trick, advising me that I had 2 out of a maximum of 5 computers authorised for my music and then allowing me to both play the purchased songs and synchronise them with my iPod.

After running with iTunes on my Mac for a few days now, everything seems to be working okay. The only remaining step is to deauthorise the original Windows XP PC from where I copied my music.

Microsoft’s digital identity metasystem

After months of hearing about Windows Vista eye candy (and hardly scratching the surface with anything of real substance with regard to the operating system platform), there seems to be a lot of talk about digital identity at Microsoft right now. A couple of weeks back I was at the Microsoft UK Security Summit, where I saw Kim Cameron (Microsoft’s Chief Architect for identity and access) give a presentation on CardSpace (formerly codenamed “InfoCard”) – a new identity metasystem contained within the Microsoft .NET Framework v3.0 (expected to be shipped with Windows Vista but also available for XP). Then, a couple of days ago, my copy of the July 2006 TechNet magazine arrived, themed around managing identity.

This is not the first time Microsoft has attempted to produce a digital identity management system. A few years back, Microsoft Passport was launched as a web service for identity management. But Passport didn’t work out (Kim Cameron refers to it as the world’s largest identity failure). The system works – 300 million people use it for accessing Microsoft services such as Hotmail and MSN Messenger, generating a billion logons each day – but people don’t want to have Microsoft controlling access to other Internet services (eBay used Passport for a while but dropped it in favour of their own access system).

Digital identity is, quite simply, a set of claims made about a subject (e.g. “My name is Mark Wilson”, “I work as a Senior Customer Solution Architect for Fujitsu Services”, “I live in the UK”, “my website is at http://www.markwilson.co.uk/”). Each of these claims may need to be verified before they are acted upon (e.g. a party to whom I am asserting my identity might like to check that I do indeed work where I say I do by contacting Fujitsu Services). We each have many identities for many uses that are required for transactions both in the real world and online. Indeed, all modern access technology is based on the concept of a digital identity (e.g. Kerberos and PKI both claim that the subject has a key showing their identity).
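
To make that a little more concrete, here’s a toy sketch of the idea (my own illustration – it bears no relation to any CardSpace API or wire format): an identity is simply a collection of claims, each with an issuer, and a relying party decides which of those claims it is prepared to trust.

  from dataclasses import dataclass

  @dataclass
  class Claim:
      subject: str     # who the claim is about
      attribute: str   # what is being claimed
      value: str
      issuer: str      # who makes the claim ("self" for self-issued claims)

  identity = [
      Claim("Mark Wilson", "name", "Mark Wilson", "self"),
      Claim("Mark Wilson", "employer", "Fujitsu Services", "Fujitsu Services HR"),
      Claim("Mark Wilson", "country", "UK", "self"),
  ]

  # A relying party might act only on claims backed by a third-party issuer
  trusted = [c for c in identity if c.issuer != "self"]
  print(trusted)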

Microsoft’s latest identity metasystem learns from Passport – and interestingly, feedback gained via Kim Cameron’s identity weblog has been a major inspiration for CardSpace. Through the site, the identity community has established seven laws of identity:

  1. User control and consent.
  2. Minimal disclosure for a defined use.
  3. Justifiable parties.
  4. Directional identity.
  5. Pluralism of operators and technologies.
  6. Human integration.
  7. Consistent experience across contexts.

Another area where CardSpace fundamentally differs from Passport is that Microsoft is not going it alone this time – CardSpace is based on WS-* web services and other operating system vendors (e.g. Apple and Red Hat) are also working on comparable (and compatible) solutions. Indeed, the open source identity selector (OSIS) consortium has been formed to address this technology and Microsoft provides technical assistance to OSIS.

The idea of an identity metasystem is to unify access and shield applications from the complexities of managing identity, but in a manner which is loosely coupled (i.e. allowing for multiple operators, technologies and implementations). Many have compared this to the way in which TCP/IP unified network access, paving the way for the connected systems that we have today.

The key players in an identity metasystem are:

  • Identity providers (who issue identities).
  • Subjects (individuals and entities about which claims are made).
  • Relying parties (who require identities).

Each relying party will decide whether or not to act upon a claim, depending on information from an identity provider. In the real world scenario, that might be analogous to arriving at a client’s office and saying “Hello, I’m Mark Wilson from Fujitsu Services. I’m here to visit your IT Manager”. The security/reception staff may take my word for it (in which case this is self-issued identity and I am both the subject and the provider) or they may ask for further confirmation, such as my driving license, company identity card, or a letter/fax/e-mail inviting me to visit.

In a digital scenario the system works in a similar manner. When I log on to my PC, I enter my username to claim that I am Mark Wilson but the system will not allow access until I also supply a password that only Mark Wilson should know and my claims have been verified by a trusted identity provider (in this case the Active Directory domain controller, which confirms that the username and password combination matches the one it has stored for Mark Wilson). My workstation (the relying party) then allows me access to applications and data stored on the system.

In many ways a username and password combination is a bad identity analogy – we have trained users to trust websites that ask them to enter a password. Imagine what would happen if I were to set up a phishing site that asks for a password: even if the correct password is entered, the site claims that it is incorrect. A typical user (and I am probably one of those) will then try other passwords – the phishing site now has an extensive list of passwords which can be used to access other systems whilst pretending to be the user whose identity has been stolen. A website may be protected by many thousands of miles of secure communications but, as Kim Cameron put it, the last metre of the connection is from the computer to the user’s head (hence identity law number 6 – human integration) – identity systems need to be designed in a way that is easy for users to make sense of, whilst remaining secure.

CardSpace does this by presenting the user with a selection of digital identity cards (similar to the plastic cards in our wallets) and highlighting only those that are suitable for the site. Only publicly available information is stored with the card (so that should hold phishers at bay – the information to be gained is useless to them) and because each card is tagged with an image (and only appropriate cards are highlighted for use), I know that I have selected the correct identity (why would I send my Government Gateway identity to a site that claims to be my online bank?). Digital identities can also be combined with other access controls such as smartcards. The card itself is just a user-friendly selection mechanism – the actual data transmitted is XML-based.

CardSpace runs in a protected subsystem (similar to the Windows login screen) – so when active there is no possibility of another application (e.g. malware) gaining access to the system or of screenscraping taking place. In addition, user interaction is required before releasing the identity information.

Once a card has been selected, services that require identities can convert the supplied token between formats using WS-Trust’s encapsulating protocol and claims transformation, with WS-MetadataExchange and WS-SecurityPolicy used for negotiation. This makes the Microsoft implementation fully interoperable with other identity selector implementations, with other relying party implementations and with other identity provider implementations.

Microsoft is presently building a number of components for its identity metasystem:

  • CardSpace identity selector (usable by any application, included within .NET Framework v3.0 and hardened against tampering and spoofing).
  • CardSpace simple self-issued identity provider (makes use of strong PKI so that the user does not disclose passwords to relying parties).
  • Active Directory managed identity provider (to plug corporate users in to the metasystem via a full set of policy controls to manage the use of simple identities and Active Directory identities).
  • Windows Communication Foundation (for building distributed applications and implementing relying party services).

Post-Windows Vista, we can expect the Windows login to be replaced with a CardSpace-based system. In the meantime, to find out more about Microsoft’s new identity metasystem, check out Kim Cameron’s identity blog, the Windows CardSpace pages, David Chappell’s Introducing InfoCard article on MSDN and the July 2006 issue of TechNet magazine.

Windows Mobile device security

Over the years, I’ve attended various presentations featuring mobile access to data but most of them have been along the lines of “look at all this cool stuff I can do”. Last week I was at the Microsoft IT Security Summit and saw a slightly different angle on things as Jason Langridge presented a session on securing Windows Mobile devices – something which is becoming ever more important as we increasingly use mobile devices to access data on the move.

It’s surprising just how few people make any effort to secure their devices – according to Microsoft, only 25% of mobile users even set a password or PIN. Even then, that’s just the tip of the iceberg: mobile data exists in a variety of locations (including paper!) and, whilst many IT managers are concerned about data on smartphones, PDAs and USB devices, paradoxically many notebook PCs have an unencrypted hard disk containing many gigabytes of data. A mobile security policy is different to a laptop security policy – and it’s more than just a set of technology recommendations – it should involve assessing the risk and deciding what data can safely be lost and what can’t. Ultimately, there is a fundamental trade-off between security, usability and cost.

Potential mobile device security threats can come from a number of sources, including malware from applications of unknown origin, viruses, loss/theft, unauthorised access via a personal area network, wireless LAN, wireless WAN, LAN or through synchronisation with a desktop/notebook PC. Each of these represents a subsequent risk to a corporate network.

The Windows Mobile platform supports secure device configuration through 43 configuration service providers (CSPs). Each CSP is an XML document that can be used to lock down a device – for example, to disable Bluetooth, as sketched below.

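The general shape of such a provisioning document is shown here, wrapped in a few lines of script that simply write it out to a file. The wap-provisioningdoc/characteristic/parm structure is the standard provisioning format, but the characteristic and parameter names used are placeholders rather than the real Bluetooth CSP settings – consult the Windows Mobile SDK documentation for those:

  # The provisioning XML an administrator might push to a device. The
  # characteristic/parm names here are placeholders, not the real Bluetooth
  # CSP settings - check the Windows Mobile SDK for the correct values.
  provisioning_doc = """\
  <wap-provisioningdoc>
    <characteristic type="ExampleRadioPolicy">
      <parm name="BluetoothEnabled" value="0"/>
    </characteristic>
  </wap-provisioningdoc>
  """

  # Save the document ready for delivery to the device (e.g. packaged in a
  # CAB file or pushed over the air)
  with open("disable-bluetooth.xml", "w", encoding="utf-8") as f:
      f.write(provisioning_doc)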

The diagram below illustrates the various methods of provisioning and control for mobile devices, from direct application installation or desktop ActiveSync, through in-ROM configuration, to over-the-air provisioning from Exchange Server, WAP or the Open Mobile Alliance (OMA) industry standard for mobile device management.

Mobile device provisioning and control methods

The most secure method of configuring a mobile device is via a custom in-ROM configuration – i.e. hard-coded XML in ROM, run during every cold boot. This method needs to be configured by the OEM or system integrator who creates the device image.

Secure system updates provide for after-market updates to device configuration, even when mobile. Image updates (a new feature for Windows Mobile 5.0) can update anything from a single file to the full system image, with dependency and conflict resolution handled automatically. Controlled by the OEM or the mobile operator, image update packages are secured using cryptographic signatures.

Probably the simplest way to provide some form of perimeter security is a PIN code or strong password (depending on the device), incorporating an exponential delay after each incorrect attempt. Such arrangements can now be enforced using the tools provided in Exchange Server 2003 SP2 and/or the Systems Management Server device management feature pack. Exchange Server 2003 SP2 not only delivers improved access to Outlook data when mobile – with reduced bandwidth usage and latency, direct push e-mail, additional Outlook properties and global address list lookup – it also provides security policy provisioning for devices, with password restrictions, certificate authentication, S/MIME and the ability to locally or remotely reset a mobile device.

Windows Mobile does not encrypt data on devices, due to the impact on performance; however, it does include a cryptographic API, and SQL CE/SQL Mobile provides 128-bit encryption. If data encryption on the device is required (bearing in mind that the volume of data involved is small and the observation that many notebook PCs representing a far larger security risk are unsecured) then third-party solutions are available.

Mobile applications can be secured for both installation and execution. For installation, the .CAB file containing the application can be signed and is validated against certificates in the device certificate store. Similarly, .EXE/.DLL files (and .CPL files, which are a special .DLL) need to be signed and validated for execution. Users are asked to consent to install or execute signed code, and if consent is given, a hash of each file is added to a prompt exclusion list to avoid repeated prompts. Copying executable files to the device is not the same as installing them and will result in an execution prompt.

Windows Mobile includes a two-tier application execution control model. In 1-tier mode, applications are either blocked from running completely or run as privileged/trusted. In 2-tier mode, an application can be signed for one of two trust levels: either privileged, with access to registries, APIs and hardware interfaces, or unprivileged, with the application restricted from certain operations. Smartphones support 1- or 2-tier operation, whereas Pocket PC devices are limited to a single tier.

Whilst application installation security can provide good protection against viruses and other malware, there are also anti-virus APIs built in to Windows Mobile with solutions available from a variety of vendors.

As new wireless network technologies come on stream, it is important to consider wide area network security too. Windows Mobile supports NTLM v2 as well as SSL, WPA and 802.1x user authentication using passwords or certificates, and VPN support is also provided. From a personal area network (Bluetooth/infrared) perspective, peer-to-peer connections require user interaction in order to accept data, and CSPs are available to block both Bluetooth and IrDA object exchange (OBEX). By default, Bluetooth is turned off on Windows Mobile 5.0 devices, giving out-of-the-box protection against bluesnarfing (gaining access to personal data) and bluejacking (unauthorised sending of messages to a device).

Jason summarised his presentation by pointing out that security is often used as a convenient excuse not to deploy mobile technology when what is really required is to establish a mobile security policy and to educate users.

A risk assessment must be made of each security scenario and risk management should be based on that assessment. Solutions should be automatically enforced but must also be acceptable to users (e.g. complex passwords will not work well on a smartphone!). Security is a combination of both a policy and technology but the policy must come before the technology choice (only when it is known what is to be protected from whom in which situations can it be decided how to secure it).

Suggested further reading:

  • Microsoft mobile security white paper
  • Windows Mobile network security white paper

The week when my digital life was on hold

Last week I wrote about the arrival of my new Mac Mini, along with claims that “[my] digital life starts here”. Thankfully, unlike a chunk of my computing resource, my physical life doesn’t rely on Apple Support.

I was experiencing problems maintaining a steady Ethernet connection, initially whilst downloading OS X updates from Apple and then whilst copying data from a Windows XP PC. After a random time the connection would drop, with receive errors shown in the OS X Network Utility. The only way to break this cycle was to restart the computer after which time the network was once again available.

I spent almost two hours on the phone to Apple support staff who were generally helpful but seemed to be relying on scripted support sequences and an internal knowledge base. It seemed that all Apple really wanted to do was rule out their own hardware and point the blame at something else on the network. Sure enough, I couldn’t replicate the problem on a direct crossover cable (100Mbps full duplex), via a 10Mbps half-duplex hub or via a 100Mbps full-duplex switch – only via a 100Mbps half-duplex hub. Crucially, though, the other devices on the network were all able to communicate with each other via the same hub with no errors at all; only the Mac had a problem.

I finally snapped and said I wanted to return my shiny aluminium paperweight when the support analyst suggested I check the firewall settings on the PC from which I was trying to copy data (I pointed out that if there were a firewall issue then no data at all would have been copied – not several hundred megabytes before crashing – and, in any case, the problem also existed when downloading updates from Apple’s website).

After being advised to take my Mac to a hardware specialist 30 miles away (to see if there were any problems communicating with another Mac), I decided to rebuild it from the operating system install disks. The 14 Mac updates that took so long to install before (now 13 as one was a permanent BIOS update) were applied with just one error. It seemed that the problem was with the operating system as installed in the factory (presumably not a DVD installation, but performed using disk duplication software). Unfortunately, although it seems to take a lot longer before crashing now, the problem is still there when I connect via the hub, so I’ve added a switch just for the Mac (everything else is as it was before).

One thing I should say is that the guys who responded to my call for help on the Apple discussion forums were really helpful (I guess switching from Windows to OS X is something which Mac users would like to encourage).

So, now I’m up and running and my digital life can start. Just as well, because my new Fujitsu-Siemens S20-1W monitor turned up yesterday – 20.1″ of widescreen vision, at a resolution of 1680×1050, in a brushed aluminium case (no plastic here) and almost £200 less expensive than the Apple equivalent (I got it from Dabs.com for £365).

Fujitsu-Siemens S20-1W

Net neutrality is really important

The Internet was brought to us by the United States (when it was probably the best thing to come out of the cold war). Then Sir Tim Berners-Lee invented the web. Now we all rely on it and the telcos want to set up a two-tier Internet (maybe that really should be called Web 2.0!) with additional charges for access to high-speed content provision (we already pay more if we want a faster connection; now we may have to pay "tolls" for the extra lane on the "information superhighway").

The cartoon below is the best illustration I’ve seen so far of what they are trying to do:

Net Neutrality

Various websites feature a recording of Senator Ted Stevens speaking in the US Senate on net neutrality (if you thought George Bush or Tony Blair were bumbling idiots, believe me, they have nothing on this guy). It’s worth listening to the whole clip, but particularly from the 8′ 45″ point ("the Internet is a series of tubes" etc.), to see just what a poor grip the US Senate has on this subject.

For, quite simply, the best-written description of why this is a big problem that affects all Internet users, read Sir Tim Berners-Lee’s blog post on net neutrality. There’s more information at Save the Internet.

Web 2.0 is being mis-sold

I’m not sure if it’s a good thing but, recently, I began to read more computing magazines. Many years back, when I was learning my trade, I used to read PC Plus but since then I’ve avoided the publications found in your average WH Smith because I found them to be far too consumer-focused and I consider that much of the advice given is far too simplistic and only tells half the story.

Recently, I’ve found that, whilst reading trade publications such as IT Week gives me what I need to know from an industry perspective, I’m using more and more of the consumer-focused functionality built into the software on my PCs. I’ve also been relearning my trade as I try to become familiar with two non-Microsoft operating systems (SUSE Linux and Apple OS X) whilst rekindling my lost knowledge of PC hardware, so once again I have been reading the magazines that I shunned for so long. The trouble is, with their £6 cover prices (to justify the accompanying DVD, which rarely has anything I want on it – or that I can’t download for free on the ‘net), I only really want to buy one a month and it seems to be difficult to get all I want from a single publication.

I did read Linux Format for a few months, but soon got tired of the Microsoft-bashing that took place (often from journalists who either run Windows under duress to work around a specific issue or never run Windows and associated applications because they can get all they need from an open source platform). I don’t believe that everything Microsoft does is good (far from it), but I do get annoyed when I see someone slating a perfectly good product because it was written by Microsoft and therefore it must be “evil”. Recently, my purchase of a Mac has led me to buy Mac Format and iCreate but by far and away the best general (cross-platform) PC magazine that I’ve found has been Personal Computer World.

One of the advantages of PCW is that it’s published by VNU business publications, so many of the writers are also contributors to the same trade publications that I respect already. I also like PCW because it has a mixture of hardware and software articles, just the right amount of advertising, and covers developments for all PC operating systems (although there is a slight bias towards Windows, representing its market position and therefore the largest chunk of the magazine’s readership); however, I still get annoyed by half-correct advice (often edited to save space, losing its full meaning in the process) and articles that have jumped on one bandwagon or another and seem to have lost the plot on the way. One of these was an article on Web 2.0 technologies in the July 2006 edition of PCW.

Web 2.0 is the current buzzword used to describe web services (rich websites that provide a service that is consumed either by another application or directly by a user via a web browser). I first came across the concept back in 2001 when Microsoft announced their .NET vision at a TechEd conference, but others have extended the web services vision to include LAMP (Linux-Apache-MySQL-Perl/PHP/Python) applications and other platforms, in the process generating much media hype about the next Internet.

And that’s exactly my point. It’s all hype. There is no new Internet. Sir Tim Berners-Lee invented the world-wide web and browser-based applications have become widespread. Now that same platform is being used to develop new, rich, server-based applications and people are heralding a new dawn (or even worse, building up for a second .com frenzy). What has become known as Web 2.0 is not the next web; it’s simply a development of the vision that Berners-Lee had 15 years back, but now we’re all ready to use it (back in the early 1990s many business applications were character-based and ran on an expensive server, whilst PC users were only just getting used to word processing and a GUI interface – we were only just getting used to the move from standalone PCs to LANs and would have struggled with the concept of massively connected systems providing and consuming services to and from one another).

In fairness, Tim O’Reilly’s description of Web 2.0 is an interesting and thought-provoking article which I agree with in many ways, but for every O’Reilly article there are a bunch of journalists who herald Web 2.0 as the "webtop", replacing the desktop, and even suggest that Windows Vista will be the last edition of Windows. Whilst I have no doubt that Web 2.0 services will win the hearts and minds of consumers (hence why Microsoft is developing the Windows Live platform to compete with the giant of web services – Google), we are not about to see the death of the PC, replaced by a "dumb" browser terminal (anybody remember how thin clients were going to displace rich applications on the desktop?). Why not? Read on and I’ll explain.

Web services are a great idea, joining islands of information to enhance the Internet experience. We’ve already seen the development from static information pages to transactional data, and web services take this to the next level; but despite the altruistic intentions of many Web 2.0 companies, they also have to represent a viable business proposition. That means driving a revenue stream and (eventually) making a profit. If the venture capital that finances the web service (which is haemorrhaging cash until it gains sufficient presence to establish a revenue stream) dries up, where does my data go? I’m pretty sure it won’t find its way back to me but it may well end up at the next clearance sale of used IT equipment. My personal data may not be significant but, if it was to fall into the wrong hands, it may include information that allows someone to steal my digital identity. Potentially worse, if one or more small businesses rely on the failed web service to operate, what happens to their data (and their business)?

I like my data to live on my computers – where I control it. Sure, maybe I’m a control freak, but I’m not alone in this view. I might put a few digital photos up on Flickr but I keep the originals where I know they are safe. I may trust my ISP to host a website for me, but I have an offline copy too. I don’t use my Google Mail account because I don’t want Google using my data to build a profile of my interests. And, if I fail to take backups, then I only have one person to blame when I lose my data.

Those are just a few examples from a tech-savvy consumer point of view but what about the corporate or government environment? You might accept that your bank, local council and major government departments outsource their IT operations to an IT services company (they almost certainly do) but they will also make sure the necessary controls are in place. Their websites might include functionality consumed from a web services provider, but I wouldn’t expect confidential documents to be edited using Writely (the online word processor, now owned by Google), or financial data to be controlled using Google Spreadsheets, any more than I would expect a business associate to contact me using a Hotmail e-mail address. Corporate and government organisations may consume some web services and will almost certainly provide more, but they will not turn their internal operations over to the webtop.

Web 2.0 supporters claim that, because all applications run in a browser, there will be fewer application support issues. Hmm. What does the browser run on? Yes, a lightweight operating system could well be developed to support just a browser, but haven’t we all experienced buggy websites using dodgy scripting?

As long as corporates still use PCs as we know them today, there will be a market for Microsoft to sell Windows and Office for the desktop, along with supporting server infrastructure and application development platforms (including .NET – which is, after all, Microsoft’s vision for web services). Web 2.0’s webtop may well be on its way up, but it’s certainly not a replacement for the desktop.