Looking forward to Windows Server 2008: Part 2 (Setup and Configuration)

Back in October, I started to look at the next version of Microsoft’s server operating system – Windows Server 2008. In that post I concentrated on two of the new technologies – Server Core and Windows Server Virtualization (since renamed as Hyper-V).

For those who have installed previous versions of Windows Server, Windows Server 2008 setup will be totally new. Windows Vista users will be familiar with some of the concepts, but Windows Server takes things a step further with simplified configuration and role-based administration.

The new setup model boots into the Windows Preinstallation Environment (Windows PE) and installs the operating system from an image file in the .WIM format, which allows multiple builds to be stored in a single image. Because many of these builds share the same files, single-instance storage is used to reduce the volume of disk space required, allowing six operating system versions to fit into one DVD image (with plenty of free space).
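
To illustrate the principle (and this is only a toy sketch of single-instance storage, not the real .WIM on-disk format), imagine hashing each file’s contents and keeping one copy per unique hash, so that each build is reduced to a manifest of paths and hashes:

```python
import hashlib
from pathlib import Path


class SingleInstanceStore:
    """Toy illustration of single-instance storage: identical file contents
    are kept once and shared by any number of image 'builds'. This sketches
    the principle only -- it is not the real .WIM format."""

    def __init__(self):
        self.blobs = {}   # content hash -> file data, stored exactly once
        self.builds = {}  # build name -> {relative path: content hash}

    def add_build(self, name, root):
        manifest = {}
        for path in Path(root).rglob("*"):
            if path.is_file():
                data = path.read_bytes()
                digest = hashlib.sha256(data).hexdigest()
                self.blobs.setdefault(digest, data)  # duplicate contents cost nothing extra
                manifest[str(path.relative_to(root))] = digest
        self.builds[name] = manifest

    def total_size(self):
        # Space consumed is the sum of the unique file contents, however
        # many builds reference them.
        return sum(len(data) for data in self.blobs.values())
```

Six versions that share most of their files therefore consume little more space than one – which is essentially why they all fit on a single DVD.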

The first stage of the setup process is about collecting information. Windows Setup now asks fewer questions and, instead of these being spread throughout the process (has anybody ever left a server installation running, only to return and find it had stopped halfway through, waiting for some networking details?), the information is all gathered up front. After collecting the language, time and currency format, keyboard layout, product key (which can be left blank and entered later), the version of Windows to install, acceptance of the license agreement and the choice of a disk on which to install the operating system (including options for disk management), Windows Setup is ready to begin the installation. Incidentally, it’s probably worth noting that SATA disk controllers have been problematic when setting up previous versions of Windows; Windows Server 2008 had no issues with the motherboard SATA controller on the Dell server that I used for my research.

After collecting information, Windows Setup moves on to the actual installation. This consists of copying files, expanding files (which took about 10 minutes on my system), installing features, installing updates and completing the installation, with two reboots along the way. One final reboot brings the system up to the logon screen, after which Windows is installed. On my server (with a fast processor, but only 512MB of RAM) the whole process took around 20 minutes.

At this point you may be wondering where the computer name, domain name, etc. are entered. Windows Setup initially installs the server into a workgroup (called WORKGROUP) and uses an automatically generated computer name. The Administrator password must be changed at first logon, after which the desktop is prepared and loaded.

Windows Server 2003 included an HTML application called the Configure Your Server Wizard, and Service Pack 1 added the Post-Setup Security Updates (PSSU) functionality, which blocks inbound connections until updates have been applied. In Windows Server 2008 this is enhanced with a feature called Initial Configuration Tasks, which takes an administrator through the final steps of setup (or, rather, the initial tasks of configuration) – the same steps can also be scripted, as sketched after the list below:

  1. Provide computer information – configure networking, change the computer name and join a domain.
  2. Update this server – enable Automatic Updates and Windows Error Reporting, download the latest updates.
  3. Customise this server – add roles or features, enable Remote Desktop, configure Windows Firewall (now enabled by default).
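
As promised above, the same information can also be supplied from the command line – handy for scripted builds, and the only option on a Server Core installation, where the wizard is not available. The sketch below is wrapped in a little Python purely so the commands run in sequence and stop on failure; the interface name, addresses, domain and account are placeholders, so substitute your own:

```python
import subprocess

# Placeholder values throughout -- substitute your own interface name,
# addresses, domain and credentials.
commands = [
    # Provide computer information: assign a static IP address
    'netsh interface ipv4 set address name="Local Area Connection" '
    'source=static address=192.168.1.10 mask=255.255.255.0 gateway=192.168.1.1',
    # Join the server to a domain (/passwordd:* prompts for the password).
    # netdom.exe is included with Windows Server 2008 and can also rename
    # the computer (netdom renamecomputer) before a restart and domain join.
    'netdom join %computername% /domain:example.local '
    '/userd:EXAMPLE\\Administrator /passwordd:*',
]

for command in commands:
    # check=True stops at the first failure rather than carrying on with a
    # half-configured server
    subprocess.run(command, shell=True, check=True)
```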

Roles and Features are an important change in Windows Server 2008. The enhanced role-based administration model provides a simple approach for an administrator to install Windows components and configure the firewall to allow access in a secure manner. At release candidate 1 (RC1), Windows Server 2008 includes 17 roles (e.g. Active Directory Domain Services, DHCP Server, DNS Server, Web Server, etc.) and 35 features (e.g. failover clustering, .NET Framework 3.0, Telnet Server, Windows PowerShell).
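
Roles and features can also be added without the GUI: the full installation includes servermanagercmd.exe, the command-line counterpart to Server Manager. A minimal sketch (again driven from Python for convenience; the Telnet-Client identifier is just an assumed example – use whatever identifier the query reports for the role or feature you actually need):

```python
import subprocess

# List the available roles and features and whether they are installed;
# the identifiers in this output are what -install expects.
subprocess.run("servermanagercmd -query", shell=True, check=True)

# Install a feature by its identifier (Telnet-Client is assumed here as a
# harmless example).
subprocess.run("servermanagercmd -install Telnet-Client", shell=True, check=True)
```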

Finally, all of the initial configuration tasks can be saved as HTML for printing, storage, or e-mailing (e.g. to a configuration management system).

Although Windows Server 2008 includes many familiar Microsoft Management Console snap-ins, it adds a new console which is intended to act as a central point of administration – Server Manager. Broken out into Roles, Features, Diagnostics (Event Viewer, Reliability and Performance, and Device Manager), Configuration (Task Scheduler, Windows Firewall with Advanced Security, Services, WMI Control and Local Users and Groups) and Storage (Windows Server Backup and Disk Management), Server Manager provides most of the information that an administrator needs – all in one place.

It’s worth noting that Initial Configuration Tasks and Server Manager are not available on Server Core installations. Server Manager can be used to remotely administer a computer running Server Core, or hardcore administrators can configure the server from the command line.

So that’s Windows Server 2008 setup and configuration in a nutshell. Greatly simplified. More secure. Much faster.

Of course, there are options for customising Windows images and pre-defining setup options but these are beyond the scope of this article. Further information can be found elsewhere on the ‘net – I recommend starting with the Microsoft Deployment Getting Started Guide.

Windows Server 2008 will be launched on 27 February 2008. It seems unlikely that it will be available for purchase in stores at that time; however corporate users with volume license agreements should have access to the final code by then. In the meantime, it’s worth checking out Microsoft’s Windows Server 2008 website and the Windows Server UK User Group.

[This post originally appeared on the Seriosoft blog, under the pseudonym Mark James.]

Looking forward to Windows Server 2008: Part 1 (Server Core and Windows Server Virtualization)

Whilst the first two posts that I wrote for this blog were quite generic, discussing such items as web site security for banks and digital rights management, this time I’m going to take a look at the technology itself – including some of the stuff that excites me right now with Microsoft’s Windows Server System.

Many readers will be familiar with Windows XP or Windows Vista on their desktop but may not be aware that Windows Server operating systems also hold a sizeable chunk of the small and medium-sized server market. This market is set to expand as more enterprises implement virtualisation technologies (running many small servers on one larger system, which may run Windows Server, Linux, or something more specialist such as VMware ESX Server).

Like XP and Vista, Windows 2000 Server and Advanced Server (both now defunct), Windows Server 2003 (and R2) and soon Windows Server 2008 have their roots in Windows NT (which itself has a lot in common with LAN Manager). This is both a blessing and a curse: the technology has been around for a few years now and is (by and large) rock solid, but the need to retain backwards compatibility means that new products can struggle to balance security and reliability with legacy code.

Microsoft is often criticised for a perceived lack of system stability in Windows but it’s my experience that a well-managed Windows Server is a solid and reliable platform for business applications. The key is to treat a Windows Server computer as if it were the corporate mainframe rather than adopting a personal computer mentality for administration. This means strict policies controlling the application of software updates and the installation of applications, as well as careful consideration of which services are really required.

It’s this last point that is most crucial. By not installing all of the available Windows components and by turning off non-essential services, it’s possible to reduce the attack surface presented to any would-be hacker. A reduced attack surface not only means less chance of falling foul of an exploit but also fewer patches to deploy. It’s with this in mind that Microsoft produced Windows Server Core – an installation option for the forthcoming Windows Server 2008 product (formerly codenamed Longhorn Server).

As the name suggests, Windows Server Core is a version of Windows with just the core operating system components and a selection of server roles available for installation (e.g. Active Directory domain controller, DHCP server, DNS server, web server, etc.). Server Core doesn’t have a GUI as such and is managed entirely from a command prompt (or remotely using standard Windows management tools). Even though some graphical utilities can be launched (like Notepad), there is no Start Menu, no Windows Explorer, no web browser and, crucially, a much smaller system footprint. The idea is that core infrastructure and application servers can be run on a Server Core computer, either in branch office locations or within the corporate data centre, and managed remotely. And, because of the reduced footprint, software updates should be less frequent, resulting in improved server uptime (as well as a lower risk of attack by a would-be hacker).
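
To give a flavour of that command-line management, roles on Server Core are listed and installed with the optional component tools rather than Server Manager. The sketch below (again wrapped in Python so the commands run in order) uses the documented package name for the DNS Server role; treat it as an illustration rather than a build document:

```python
import subprocess

# Show which roles and optional components are available and installed
subprocess.run("oclist", shell=True, check=True)

# Install the DNS Server role; start /w waits for ocsetup to finish and
# the package name is case-sensitive
subprocess.run("start /w ocsetup DNS-Server-Core-Role", shell=True, check=True)

# Allow Remote Desktop connections so that the box can also be administered
# from a remote session
subprocess.run(r"cscript %windir%\system32\scregedit.wsf /ar 0",
               shell=True, check=True)
```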

If Server Core is not exciting enough, then Windows Server Virtualization should be. I mentioned virtualisation earlier and it has certainly become a hot topic this year. For a while now, the market leader (at least in the enterprise space) has been VMware (and, as Tracey Caldwell noted a few weeks ago, VMware shares have been hot property), with their Player, Workstation, Server and ESX Server products. Microsoft, Citrix (XenSource) and a number of smaller companies have provided some competition but Microsoft will up the ante with Windows Server Virtualization, which is expected to ship within 180 days of Windows Server 2008.

No longer running as a guest on a host operating system (as the current Microsoft Virtual Server 2005 R2 and VMware Server products do), Windows Server Virtualization will compete directly with VMware ESX Server in the enterprise space, with a totally new architecture in which a thin “hypervisor” layer facilitates direct access to virtualisation technology-enabled hardware, allowing near-native performance for many virtual machines on a single physical server. Whilst Microsoft is targeting the server market with this product (they do not plan to include the features that would be required for a virtual desktop infrastructure, such as USB device support and sound capabilities), it will finally establish Microsoft as a serious player in the virtualisation space (perhaps even the market leader within a couple of years).

Furthermore, Windows Server Virtualization will be available as a supported role on Windows Server Core, allowing virtual machines to be run on an extremely reliable and secure platform. From a management perspective there will be a new System Center product – Virtual Machine Manager – allowing management of virtual machines across a number of Windows servers, including quick migration, templated VM deployment and conversion from physical and other virtual machine formats.
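
For the programmatically minded, the pre-release bits expose virtual machine information through WMI. Assuming the root\virtualization namespace and Msvm_ComputerSystem class that the current builds provide (and the third-party wmi package for Python), listing the virtual machines on a host might look something like this – a sketch of what hypervisor-based management could look like, not a supported interface:

```python
import wmi  # third-party Python WMI wrapper (it needs pywin32 underneath)

# Connect to the virtualisation provider on the local host. The namespace
# and class names are those exposed by the pre-release Windows Server
# Virtualization bits and may well change before release.
conn = wmi.WMI(namespace=r"root\virtualization")

# The host itself also appears as an Msvm_ComputerSystem, so filter on the
# caption used for guest virtual machines.
for vm in conn.Msvm_ComputerSystem(Caption="Virtual Machine"):
    state = "running" if vm.EnabledState == 2 else "not running"
    print(vm.ElementName, state)
```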

Windows Server Core and Windows Server Virtualization are just two of the major improvements in Windows Server 2008. Over the coming weeks, I’ll be writing about some of the other new features that can be expected with this major new release.

Windows Server 2008 will be launched on 27 February 2008. It seems unlikely that it will be available for purchase in stores at that time; however corporate users with volume license agreements should have access to the final code by then. In the meantime, it’s worth checking out Microsoft’s Windows Server 2008 website and the Windows Server UK User Group.

[This post originally appeared on the Seriosoft blog, under the pseudonym Mark James.]

A call for open standards in digital rights management

Digital rights management (DRM) is a big issue right now. Content creators have a natural desire to protect their intellectual property and consumers want easy access to music, video, and other online content.

The most popular portable media player is the Apple iPod, by far the most successful digital music device to date. Although an iPod can play ordinary MP3 files, its success is closely linked to iTunes’ ease of use. iTunes is a closed system built around an online store with (mostly) DRM-protected tracks using a system called FairPlay that is only compatible with the iTunes player or with an iPod.

Another option is to use a device that carries the PlaysForSure logo. These devices use a different DRM scheme – Windows Media – this time backed by Microsoft and its partners. Somewhat bizarrely, Microsoft has also launched its own Zune player using another version of Windows Media DRM – one that’s incompatible with PlaysForSure.

There is a third way to access digital media – users can download or otherwise obtain DRM-free tracks and play them on any player that supports their chosen file format. To many, that sounds chaotic. Letting people download content without the protection of DRM! Surely piracy will rule and the copyright holders will lose revenue.

But will they? Home taping has been commonplace for years but there was always a quality issue. Once the development of digital music technologies allowed perfect copies to be made at home, the record companies hid behind non-standard copy prevention schemes (culminating in the Sony rootkit fiasco) and DRM-protected online music. Now video content creators are following suit, with the BBC and Channel 4 both releasing DRM-protected content that will only play on some Windows PCs. At least the BBC does eventually plan to release a system that is compatible with Windows Vista and Macintosh computers but, for now, the iPlayer and 4 on Demand are for Windows XP users only.

It needn’t be this way: incompatible DRM schemes restrict consumer choice and are totally unnecessary. Independent artists have already proved the model can work by releasing tracks without DRM. And after the Apple CEO, Steve Jobs, published his Thoughts on Music article in February 2007, EMI made its catalogue available, DRM-free, via iTunes, for a 25% premium.

I suspect that the rest of the major record companies are waiting to see what happens to EMI’s sales and whether there is a rise in piracy of EMI tracks, which in my opinion is unlikely. The record companies want to see a return to the 1990s boom in CD sales but that was an artificial phenomenon, as music lovers re-purchased their favourite analogue (LP) records in a digital (Compact Disc) format. The way to increase music sales now is to remove the barriers to online content purchase.

  • The first of these is cost. Most people seem happy to pay under a pound for a track but expect album prices to be lower (matching the CDs that can be bought in supermarkets and elsewhere for around £9). Interestingly though, there is anecdotal evidence that if the price of a download were reduced to around $0.25 (instead of the current $0.99), then people would actually download more songs and the record companies would make more money.
  • Another barrier to sales is ease of use and portability. If I buy a CD (still the benchmark for music sales today), then I only buy it once, regardless of the brand of player that I use. Similarly, if I buy digital music or video from one store, why should I have to buy it again if I change to another system?

One of the reasons that iTunes is so popular is that it’s very easy to use – the purchase process is streamlined and the synchronisation is seamless. It also locks consumers into one platform and restricts choice. Microsoft’s DRM schemes do the same. And obtaining pirated content on the Internet requires a level of technical knowledge not possessed by many.

If an open standard for DRM could be created, compatible with both FairPlay and Windows Media (PlaysForSure and Zune), it would allow content owners to retain control over their intellectual property without restricting consumer choice.

[This post originally appeared on the Seriosoft blog, under the pseudonym Mark James.]

Security – Why the banks just don’t get IT

A few weeks back, I read a column in the IT trade press about my bank’s botched attempt to upgrade their website security and I realised that it’s not just me who thinks banks have got it all wrong…

You see, the banks are caught in a dilemma between providing convenient access for their customers and keeping it secure. That sounds reasonable enough until you consider that most casual Internet users are not too hot on security and so the banks have to dumb it down a bit.

Frankly, it amazes me that information like my mother’s maiden name, my date of birth, and the town where I was born are used for “security” – they are all publicly available details and if someone wanted to spoof my identity it would be pretty easy to get hold of them all!

But my bank is not alone in overdressing their (rather basic) security – one of their competitors recently “made some enhancements to [their] login process, ensuring [my] money is even safer”, resulting in what I can only describe as an unmitigated user experience nightmare.

First I have to remember a customer number (which can at least be stored in a cookie – not advisable on a shared-user PC) and, bizarrely, my last name (in case the customer number doesn’t uniquely identify me?). After supplying those details correctly, I’m presented with a screen similar to the one shown below:

[Screenshot of the ING Direct login screen]

So what’s wrong with that? Well, for starters, I haven’t a clue what the last three digits of my oldest open account are, so that anti-phishing question doesn’t work. Then, to avoid keystroke loggers, I have to click on the keypad buttons to enter the PIN and memorable date. That would be fair enough, except that they are not in a logical order and they move around at every attempt to log in. This is more like an IQ test than a security screen (although the bank describes it as “simple”)!

I could continue with the anecdotal user experience disasters but I think I’ve probably got my point across by now. Paradoxically, the answer is quite simple and in daily use by many commercial organisations. Whilst banks are sticking with single-factor (something you know) login credentials for their customers, companies often use multi-factor authentication for secure remote access by employees. I have a login ID and a token which generates a seemingly random (actually highly mathematical) six-digit number that I combine with a PIN to access my company network. It’s easy: all it needs is knowledge of the website URL, my login ID and PIN (things that I know), together with physical access to my security token (something I have). For me, those things are easy to remember but, for someone else, practically impossible to guess.
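
Incidentally, the number on the token isn’t really random at all: it is derived from a secret shared between the token and the authentication server, usually combined with the current time or a counter, so both sides can compute the same six digits independently. Here is a minimal sketch of the general idea, loosely following the HMAC-based one-time password approach (real tokens use their own vendor-specific algorithms and secrets):

```python
import hashlib
import hmac
import struct
import time


def one_time_code(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    # Derive a counter from the current time step (here, 30-second windows)
    counter = int(time.time() // interval)
    message = struct.pack(">Q", counter)
    digest = hmac.new(secret, message, hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset given by the low nibble
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


# The same shared secret on the token and the server yields the same code
# for the same time window.
print(one_time_code(b"secret-provisioned-into-the-token"))
```

Because the shared secret never travels over the wire and each code expires within seconds, a captured code is of little use to an attacker – which is exactly what makes the second factor so much stronger than a mother’s maiden name.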

I suspect the reason that the banks have stuck with their security theatre is down to cost. So, would someone please remind me, how many billions did the UK high-street banks make in profit last year? And how much money is lost in identity theft every day? A few pounds for a token doesn’t seem too expensive to me. Failing that, why not make card readers a condition of access to online banking and use the Chip and PIN system with our bank cards?

[This post originally appeared on the Seriosoft blog, under the pseudonym Mark James.]