Windows service pack roadmap

Those of us whose history goes back to Windows NT remember when a service pack was exactly what its name suggests – no new features, just bug fixes, thoroughly tested (usually) – and when application of the latest service pack was no big deal (application of any other updates was not normally required, unless addressing a specific issue). Today the landscape is different, with irregular service packs often bringing major operating system changes and requiring extensive testing, and frequent updates issued on the second Tuesday of almost every month.

A couple of weeks ago I was at a Microsoft event where one of the presenters (Microsoft UK’s James O’Neill) suggested that service packs are irrelevant and that they actually serve to put some people off deploying new operating system releases. To be fair to James, he was specifically talking about the “don’t deploy until the first service pack has shipped” doubters and to some extent he is right – the many updates that are applied to a Windows Vista installation today have provided numerous incremental improvements to the operating system since the original RTM last year. Even so, I can’t help thinking that Microsoft has muddied the water to some extent – I always understood that service packs had a higher level of integration testing than individual updates but it seems the current Microsoft advice is to apply all applicable “patch Tuesday” updates but only to apply other hotfixes (those updates produced to patch a specific customer scenario) where they are absolutely necessary.

Regardless of this confusion around the different forms of update, service packs are not dead – far from it – with both Windows Vista SP1 and Windows XP SP3 in beta at the time of writing. Although largely update rollups, these service packs do introduce some new features (new networking features for XP, and a kernel change for Vista to bring it in line with Windows Server 2008) – and I’ve been of the opinion for some time now that XP SP3 is long overdue.

Going forward, it’s interesting to note that Windows Server 2008 is expected to launch with SP1 included. If that sounds odd, remember that both Windows Vista and Windows Server 2008 were originally both codenamed Longhorn and that they are very closely related – it’s anticipated that the next Windows service pack (let’s call it SP2 for the sake of argument) will be equally applicable to both the client and server operating system releases.

ROI, TCO and other TLAs that the bean counters use

A few days ago I posted a blog entry about Microsoft infrastructure optimisation and I’ve since received confirmation of having been awarded ValueExpert certification as a result of the training I received on the Alinean toolset from Alinean’s CEO and recognised ROI expert Tom Pisello.

One of the things that I’ve always found challenging for IT infrastructure projects has been the ability to demonstrate a positive return on investment (ROI) – most notably when I was employed specifically to design and manage the implementation of a European infrastructure refresh for a major fashion design, marketing and retail organisation and then spent several months waiting for the go-ahead to actually do something as the CIO (based in the States) was unconvinced of the need to spend that sort of money in Europe. I have, on occasion, used a house-building analogy whereby a solid IT infrastructure can be compared to the plumbing, or the electrical system – no-one would consider building a house without these elements (at least not in the developed world) and similarly it is necessary to spend money on IT infrastructure from time to time. Sadly that analogy isn’t as convincing as I’d like, and for a long while the buzzword has been “total cost of ownership” (TCO). Unfortunately this has been misused (or oversold) by many vendors and those who control the finances are wary of TCO arguments. With even ROI sometimes regarded as a faulty measurement instrument, it’s interesting to look at some of the alternatives (although it should be noted that I’m not an economist).

Looking first at ROI:

ROI = (benefits – investments)/investments x 100%

Basically, ROI checks whether a project will return a greater financial benefit than the amount invested to carry it out. On the face of it, ROI clearly describes the return that can be expected from a project relative to the cost of investment. That’s all well and good but for IT projects the benefits may vary from year to year, may be intangible, or (more likely) will be realised by the individual business units using the IT rather than by the IT department itself – they may not even be attributed to the IT project. Furthermore, because all the benefits and costs are simply accumulated over the period of analysis, ROI fails to take into account the time value of money (what would happen if the same amount were invested elsewhere) or to reflect the size of the required investment.
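
To put some (entirely hypothetical) numbers on that, a project costing £100,000 that returns £150,000 of benefits over the period of analysis gives an ROI of 50% – something like this quick PowerShell calculation:

$investments = 100000   # hypothetical up-front and ongoing costs
$benefits    = 150000   # hypothetical benefits over the period of analysis

# ROI = (benefits - investments) / investments x 100%
$roi = ($benefits - $investments) / $investments * 100
Write-Host "ROI = $roi%"   # 50%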

To try and address the issue of the value of money, net present value (NPV) can be used – a figure that expresses future cash flows in terms of what they are worth today, given a defined discount (interest) rate. The NPV formula looks complex because it takes into account the cost of money now, next year, the year after and so on for the entire period of analysis:

NPV = I0 + I1/(1+r) + I2/(1+r)^2 + … + In/(1+r)^n

(where I0…In are the net benefits (cash flows) for each year and r is the cost of capital – known as the discount rate).

The theory is that future benefits are worth less than they would be today (as the value of money will have fallen). Furthermore, the discount rate may be adjusted to reflect risk (risk-adjusted discount rate). NPV will give a true representation of savings over the period of analysis and will recognise (and penalise) projects with large up-front investments but it doesn’t highlight how long it will take to achieve a positive cash flow and is only concerned with the eventual savings, rather than the ratio of investment to savings.
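
As a concrete (and entirely made-up) example, here is the same calculation in PowerShell for a project with a £100,000 up-front investment, four years of net benefits and a 10% discount rate – the figures are purely illustrative:

# Hypothetical net cash flows for years 0 to 4 (year 0 is the up-front investment)
$cashflows = @(-100000, 30000, 40000, 40000, 30000)
$r = 0.10   # discount rate (cost of capital)

# NPV = I0 + I1/(1+r) + I2/(1+r)^2 + ... + In/(1+r)^n
$npv = 0
for ($n = 0; $n -lt $cashflows.Count; $n++) {
    $npv += $cashflows[$n] / [Math]::Pow(1 + $r, $n)
}
Write-Host ("NPV = {0:N0}" -f $npv)   # approximately 10,874

Note how the £40,000 of undiscounted net benefit shrinks to around £11,000 once the cost of money is taken into account.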

That’s where risk-adjusted ROI comes in:

Risk-adjusted ROI = NPV (benefits – investments) / NPV (investments).

By adjusting the ROI to reflect the value of money, a much more realistic (if less impressive) figure is provided for the real return that a project can be expected to realise; however, just like ROI, it fails to highlight the size of the required investment. Risk-adjusted ROI does demonstrate that some thought has been put into the financial viability of a project and as a consequence it can aid with establishing credibility.
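
Continuing the illustrative figures above (and they are only illustrative), the risk-adjusted calculation discounts the benefit and investment streams separately before taking the ratio – the Get-Npv function here is just a local helper for the example, not a built-in cmdlet:

# Hypothetical benefit and investment streams for years 0 to 4, discounted at 10%
$benefits    = @(0, 50000, 60000, 60000, 50000)
$investments = @(100000, 20000, 20000, 20000, 20000)
$r = 0.10

# Discount a stream of annual figures back to today's money
function Get-Npv ($stream, $rate) {
    $npv = 0
    for ($n = 0; $n -lt $stream.Count; $n++) {
        $npv += $stream[$n] / [Math]::Pow(1 + $rate, $n)
    }
    return $npv
}

# Risk-adjusted ROI = NPV(benefits - investments) / NPV(investments)
$net = @()
for ($n = 0; $n -lt $benefits.Count; $n++) { $net += $benefits[$n] - $investments[$n] }
$riskadjustedroi = (Get-Npv $net $r) / (Get-Npv $investments $r) * 100
Write-Host ("Risk-adjusted ROI = {0:N0}%" -f $riskadjustedroi)   # around 7%, against a simple ROI of 22%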

Internal rate of return (IRR) is the discount rate used in NPV calculations in order to drive the NPV formula to zero. If that sounds like financial gobbledegook to you (as it does to me), then think of it as the rate of return that an investment would need to generate in order to match an alternative investment (e.g. keeping the money in the bank). Effectively, IRR is about opportunity cost and, whilst it is not widely used, a CFO may use it as a means of comparing project returns, although it neither indicates the level of up-front investment required nor the financial value of any return. Most critically (for me), it’s a fairly crude measure that fails to take into account strategic vision or any dependencies between projects (e.g. if the green light is given to a project that relies on the existence of some infrastructure with a less impressive IRR).

Finally, the payback period. This one’s simple – it’s a measure of the amount of time that is taken for the cumulative benefits in a project to outweigh the cumulative investments (i.e. to become cash flow positive) – and because it’s so simple, it’s often one of the first measures to be considered. Sadly it is flawed as it fails to recognise the situation where a project initially appears to be cash flow positive but then falls into the red due to later investments (such as an equipment refresh). It also focuses on fast payback rather than strategic investments.
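
A quick sketch (again with made-up numbers, including an equipment refresh in year four) shows both how the payback point is found and how a later investment can push the project back into the red:

# Hypothetical annual net cash flows; the year 4 figure includes an equipment refresh
$netcashflows = @(-100000, 40000, 40000, 40000, -60000, 40000)

$cumulative = 0
for ($year = 0; $year -lt $netcashflows.Count; $year++) {
    $cumulative += $netcashflows[$year]
    Write-Host ("Year {0}: cumulative cash flow = {1:N0}" -f $year, $cumulative)
}
# Payback appears to arrive in year 3, but the refresh in year 4 takes the project negative again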

As can be seen, none of these measures is perfect but each organisation will have its preferred method of measuring the success (or failure) of an IT project in financial terms. Indeed, each measure may be useful at a different level in an organisation – an IT manager focused on annual budgets may not be concerned with the cost of money but will want a fast payback and an impressive ROI, whereas the CIO may be more concerned with the risk-adjusted ROI and the CFO may only consider IRR. It’s worth doing some homework on the hurdle rates that exist within an organisation and, whilst I may not be an expert in financial management, sometimes it doesn’t hurt to understand the language that the bean counters use.

Secure online backup from Mozy

A few weeks back I was discussing backups with a couple of my colleagues. I’ve commented before that, despite nearly losing all of my digital photos and my entire iTunes library, I’m really bad at backing up my data (it’s spread across a load of PCs and I almost never have it in a consistent enough state to back up). I had thought that Windows Home Server would be my saviour, but Microsoft rejected my feedback requests for Mac and Linux client support so that won’t work for me. Besides which, I should really keep an offsite copy of the really important stuff. One of my colleagues suggested that I join a scheme he was setting up with remote transfers between friends (effectively a peer-to-peer network backup) but then another recommended Mozy.

Mozy? What’s that?

For those who haven’t heard of it (I hadn’t, but it does seem to be pretty well known), Mozy is an online backup service from Berkeley Data Systems (who were purchased by EMC last week). Available as a free service with 2GB of storage (and an extra 256MB per referral – for both the referrer and the new customer – my referral code is L8FPFL if anyone would like to help out…), as a paid service with unlimited storage for $4.95 a month (per computer), or as a professional service for $3.95 a month (per computer) plus $0.50 per GB of storage, it seems there’s an option for everyone – although it is worth understanding the differences between Mozy Home and Mozy Pro.

With client support for most Windows versions (including Vista) and Mac OS X (still in beta), data is protected by the Mozy client application using 448-bit Blowfish encryption (with a user-supplied key or one from Mozy) and then transferred to the Mozy servers over an HTTPS connection using 128-bit SSL. Upload speeds are not fast on my ADSL connection and there is some impact on performance but I can still use the web whilst I’m uploading in the background (in fact I have a backup taking place as I’m writing this post). Also, once the first backup has taken place, Mozy only copies changed blocks so subsequent backups should be faster. The only problem that I found (with the Mac client – I haven’t tried on Windows yet) was that it uses Spotlight searches when presenting backup sets, so if you have recently had a big clearout (as I did before backing up), the size of each backup set may be out of date (Apple support document 301562 offers some advice to force Spotlight to re-index a folder).

I should highlight that backup is only half the story – the Mozy client has a simple interface for browsing files and selecting those that need to be restored. There’s also a web interface with browsing based either on files or on backup sets and the Mozy FAQ suggests that Mozy can ship data for restoration using DVDs if required (for a fee).

Whilst Mozy has received almost universal acclaim, not everyone likes it. For me it’s perfect – an offsite copy of my data – but it doesn’t do versioning and it will assume that if I delete a file then after 30 days I won’t want it back. I think that’s fair enough – if I have a catastrophic failure I generally know about it and can restore the files within that month. As for versioning, why not have a local backup with whatever controls are considered necessary and use Mozy as the next tier in the backup model? The final criticism is about Mozy’s potential to access files – that’s purely down to the choice of key. Personally, I’m happy with the idea that they can (in theory) see the pictures of my kids and browse the music/videos in my library – and if I wasn’t, then I could always use my own private key to encrypt the data.

I’m pretty sure that I’ll be moving to the paid MozyHome product soon but I wanted to try things out using MozyFree. Based on what I’ve seen so far, using Mozy will be money well spent.

Controlling Virtual Server 2005 R2 using Windows PowerShell

One of my jobs involves looking after a number of demonstration and line of business servers running (mostly) on Virtual Server 2005 R2. Because I’m physically located around 90 miles away from the servers and I have no time allocated to managing the infrastructure, I need to automate as much as possible – which means scripting. The problem is that my scripting abilities are best described as basic. I can write batch files and I can hack around with other people’s scripts – that’s about it – but I did attend a Windows PowerShell Fundamentals course a few weeks back and really enjoyed it, so I decided to write some PowerShell scripts to help out.

Virtual Server 2005 R2 has a Component Object Model (COM) API for programmatic control and monitoring of the environment (this is what the Virtual Server Administration web interface is built upon). For a quick introduction to this API, Microsoft has an on-demand webcast (recorded in December 2004) in which Robert Larson explains how to use the Virtual Server COM API to automate tasks like virtual machine (VM) creation, configuration, enumeration and provisioning.

The Virtual Server COM API has 42 interfaces and hundreds of calls; however, the two key interfaces are for virtual machines (IVMVirtualMachine) and the Virtual Server service (IVMVirtualServer). Further details can be found in the Programmer’s Guide (which is supplied with Virtual Server) and there is a script repository for Virtual Server available on the Microsoft TechNet website.

Because the scripting model is based on COM, developers are not tied to a specific scripting language. This means that, theoretically, Windows PowerShell can be used to access the COM API (although in practice, my PowerShell scripts for Virtual Server are very similar to their VBScript equivalents).

Every script using the Virtual Server COM API needs to instantiate the VirtualServer.Application object. In VBScript, this would mean calling:

Set objVS = CreateObject("VirtualServer.Application")

Because I want to use PowerShell, I have to do something similar; however, there is a complication – as Ben Armstrong explains in his post on controlling Virtual Server through PowerShell, PowerShell is a Microsoft .NET application and as such does not have sufficient privileges to communicate with the Virtual Server COM interfaces. There is a workaround though:

  1. Compile the C# code that Ben supplies on his blog to produce a dynamic link library (.DLL) that can be used to impersonate the COM security on the required object (I initially had some trouble with this but everything was fine once I located the compiler). I placed the resulting VSWrapperForPSH.dll file in %userprofile%\Documents\WindowsPowerShell\
  2. Load the DLL into PowerShell using [System.Reflection.Assembly]::LoadFrom("$env:userprofile\Documents\WindowsPowerShell\VSWrapperForPSH.dll") > $null (I do this in my %userprofile%\Documents\WindowsPowerShell\profile.ps1 file as Ben suggests in his follow-up post on PowerShell tweaks for controlling Virtual Server).
  3. After creating each object using the Virtual Server COM API (e.g. $vs=New-Object -com VirtualServer.Application -Strict), set the security on the object with [Microsoft.VirtualServer.Interop.PowerShell]::SetSecurity($vs). Again, following Ben Armstrong’s advice, I do this with a PowerShell script called Set-Security.ps1 which contains the following code:


    Param($object)
    [Microsoft.VirtualServer.Interop.PowerShell]::SetSecurity($object)

    Then, each time I create a new object I call Set-Security($objectname), as in the sketch below.
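
Putting those three steps together, a minimal sketch (using the same DLL location as above and the class name from Ben’s wrapper code – adjust to suit) to list the registered virtual machines on a host looks something like this:

# Load the wrapper DLL (normally done once, in profile.ps1)
[System.Reflection.Assembly]::LoadFrom("$env:userprofile\Documents\WindowsPowerShell\VSWrapperForPSH.dll") > $null

# Create the Virtual Server COM object and set its security
$vs = New-Object -ComObject VirtualServer.Application -Strict
[Microsoft.VirtualServer.Interop.PowerShell]::SetSecurity($vs)

# Enumerate the registered virtual machines and display their names
foreach ($vm in $vs.VirtualMachines) {
    [Microsoft.VirtualServer.Interop.PowerShell]::SetSecurity($vm)
    Write-Host $vm.Name
}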

Having got the basics in place, it’s fairly straightforward to manipulate the COM objects in PowerShell and I followed Ben’s examples for listing registered VMs on a given host, querying guest operating system information and examining .VHD files. I then spent quite a lot of time writing a script which will output all the information on a given virtual machine but although it was an interesting exercise, I’m not convinced it has much value. What I did learn was that:

  • Piping objects through Get-Member can be useful for understanding the available methods and properties (there’s a quick example after the code below).
  • Where a collection is returned (e.g. the NetworkAdapters property on a virtual machine object), individual items within the collection can be accessed with .item($item) and a count of the number of items within a collection can be obtained with .count, for example:

    Param([String]$vmname)

    $vs=New-Object -com VirtualServer.Application -strict
    $result=Set-Security($vs)

    $vm=$vs.FindVirtualMachine($vmname)
    $result=Set-Security($vm)

    $dvdromdrives=$vm.DVDROMDrives
    $result=Set-Security($dvdromdrives)
    Write-Host $vm.Name "has" $dvdromdrives.count "CD/DVD-ROM drives"
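
As for the first point, any of these objects can be piped through Get-Member to see what it exposes – for example, for the Virtual Server service object itself:

$vs = New-Object -ComObject VirtualServer.Application -Strict
$result = Set-Security($vs)

# List the methods and properties exposed by the Virtual Server service object
$vs | Get-Member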

Of course, System Center Virtual Machine Manager (SCVMM) includes its own PowerShell extensions and therefore makes all of this work totally unnecessary – but at least it’s an option for those who are unwilling or unable to spend extra money on SCVMM.

Compiling C# code without access to Visual Studio

I’m not a developer and as such I don’t have a copy of Visual Studio but this evening I needed to compile somebody else’s C# code to produce a dynamic link library (DLL) and call it from a Windows PowerShell script. Somewhere back in my distant past I recall using Turbo Pascal, Borland C++, early versions of Visual Basic and even Modula-2 to make/link/compile executables but I’ve never used a modern compiled language (even on Linux I avoid rolling my own code and opt for RPM-based installations). So I downloaded and installed Visual C# 2005 Express Edition (plus service pack 1, plus hotfix to make it run on Windows Vista).

Sadly that didn’t get me anywhere – I was totally lost in the Visual Studio IDE and, anyway, the instructions I had told me to access the Visual Studio command prompt and run csc /t:library filename.cs.

It turns out that the Visual Studio Express Editions don’t include the Visual Studio command prompt but that doesn’t really matter: the C# compiler (csc.exe) is not part of Visual Studio at all – it comes with the Microsoft .NET Framework (on my system it is available at %systemroot%\Microsoft.NET\Framework\v2.0.50727\). Once I discovered the whereabouts of the compiler, compiling the code was a straightforward operation.
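
For reference, calling the compiler directly from PowerShell (or a command prompt) looks something like this – the framework version folder and the source filename will obviously vary from system to system:

# Compile a C# source file into a DLL using the compiler that ships with the .NET Framework 2.0
# (adjust the version folder to match the framework installed on your system)
& "$env:systemroot\Microsoft.NET\Framework\v2.0.50727\csc.exe" /t:library filename.cs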

As for what I did with the DLL and PowerShell, I’ll save that for another post.

Reflecting on Byte Night 2007

A few weeks ago I posted a blog entry about my involvement in Byte Night 2007 – the UK IT industry’s annual sleep out in support of young people who are coping with life after care or facing homelessness. I’ve just got home from my night sleeping rough in London and as I’ve been really amazed by the generosity of some people, including those of you who don’t know me personally but who added a donation as a way of supporting this site, I thought I’d post an update.

So was it all worth it? Yes.

On a financial level, I’ve raised £1062.26 for NCH, the children’s charity – thank you to everyone who contributed to this fantastic total. Combined with the rest of Team Fujitsu that’s £7266.19 and I understand the total raised by all participants was around the £280,000 mark [update: £304,389 as at 12 October 2007].

On a personal level, I found it all rather humbling. I want to stress that I can never truly understand what it’s like to be homeless. I’m back home now with my family and tonight I will sleep in my own bed but it’s good to remember those who are less fortunate and to do something to help. Some people have suggested that Byte Night is just a “jolly”, a chance to network, and I will admit that it was great fun to attend a charity reception last night hosted by Jenny Agutter (who joined in the sleepover, as she has on several previous occasions); to have a weather forecast from Siân Lloyd (luckily, it was dry, with some cloud cover, and hence well above freezing); and to watch Trinny Woodall judge the Byte Night nightcaps that had been customised by celebrities including Dame Ellen MacArthur and Sandi Toksvig. As I walked along the South Bank and through Westminster in the early hours of this morning, I saw those who were genuinely homeless and realised how vulnerable I felt. I cannot imagine what it is like to be in their situation, every day and night, let alone as a child. Even though I was taking part in an organised sleepover with over 250 like-minded people, the point is not really about being truly “on the streets” – it’s about raising awareness of this important issue and raising funds for NCH to help vulnerable children and young people.

So what’s this got to do with a technology blog? Not a lot, except that the Byte Night participants all work in the UK IT sector and several IT companies (sadly not the one that I work for) added their support to the event, whether that was by providing a fleece to keep me warm (thank you Dell), groundsheets/survival bags and umbrellas (thank you Harvey Nash), or by hosting the reception (thank you Ernst and Young).

That’s (almost) all I have to say on this – technology-focussed blogging will resume shortly – but before I sign off, it’s not too late to help me reach the elusive £2000 personal target – my fundraising site will remain in place at http://www.justgiving.com/markwilson-bytenight07/ until 6 December 2007 – and, if you want to know a little more of what it’s all about, watch the video below.

Some more about virtualisation with Xen (including installation on RHEL5)

I got a little confused by what at first appeared to be conflicting information in the XenSource demonstration that I saw earlier this week and the Xen module in my Red Hat Enterprise Linux (RHEL) training. It seems that I hadn’t fully grasped the distinction between Xen as commercialised by Red Hat and Xen as commercialised by XenSource and in this post I shall attempt to clarify the situation.

Somewhat confusingly, the version 4 XenSource products include version 3.1 of the Xen hypervisor. I’d assumed that this was pretty much identical to the Xen 3.0.3 kernel that I can install from the RHEL DVD but it seems not. Roger Baskerville, XenSource’s Channel Director for EMEA, explained to me that it’s important to differentiate between the OSS Xen and the XenSource commercial products and that, whilst both Red Hat and XenSource use a snapshot of OSS Xen 3.x.x, the XenSource snapshot is more recent than the one in RHEL due to the time that it takes to incorporate various open source components into an operating system. Furthermore, XenExpress, XenServer and XenEnterprise are designed for bare-metal deployment with little more than the hypervisor and a minimal domain-0 (a privileged virtual machine used to control the hypervisor) whereas RHEL’s domain-0 is a full operating system.

The XenSource microkernel is based on CentOS (itself a derivative of Red Hat Enterprise Linux) with only those services that are needed for virtualisation, along with a proprietary management interface and Windows drivers. Ultimately, both the XenSource and RHEL models include a Xen hypervisor interacting directly with the processor, virtual machines (domain-U) and domain-0 for disk and network traffic. Both use native device drivers from the guest operating system, except in the case of fully virtualised VMs (i.e. Windows VMs), in which case the XenSource products use signed proprietary paravirtualised Windows drivers for disk access and network traffic (XenSource Tools).

So when it comes to installation, we have two very different methods – whereas XenSource is a bare-metal installation, installing Xen on RHEL involves a number of RPMs to create the domain 0 environment. This is how it’s done:

Method 1 (the simple way) is to select all of the virtualisation tools during operating system installation. Alternatively, method 2 involves installing individual RPMs. At first I just installed the packages with xen in their name from the /Server directory on the RHEL installation DVD (kernel-xen-2.6.18-8.el5.i686.rpm, kernel-xen-devel-2.6.18-8.el5.i686.rpm and xen-libs-3.0.3-25.el5.i386.rpm) but even after rebooting into the Xen kernel I found that there were no management tools available (e.g. xm). Fortunately, I found a forum post that explained my mistake – I had installed the kernel and userspace libraries but not any of the tools/commands – and another post that explains how to install Xen on RHEL:

cd /Server
rpm -Uvh kernel-xen-2.6.18-8.el5.i686.rpm
rpm -Uvh kernel-xen-devel-2.6.18-8.el5.i686.rpm
rpm -Uvh xen-libs-3.0.3-25.el5.i386.rpm
rpm -Uvh bridge-utils-1.1-2.i386.rpm
rpm -Uvh SDL-1.2.10-8.el5.i386.rpm
cd /VT
rpm -Uvh --nodeps libvirt-0.1.8-15.el5.i386.rpm

(--nodeps resolves a cyclic dependency between xen, libvirt, libvirt-python and python-virtinst.)

rpm -Uvh libvirt-python-0.1.8-15.el5.i386.rpm
rpm -Uvh python-virtinst-0.99.0-2.el5.noarch.rpm
rpm -Uvh xen-3.0.3-25.el5.i386.rpm

At this point, it should be possible to start the Xen daemon (as long as a reboot onto the Xen kernel has been performed – either from manual selection or by changing the defaults in /boot/grub/menu.lst) using xend start. If the reboot took place after kernel installation but prior to installing all of the tools (as mine did) then chkconfig --list should confirm that xend is set to start automatically and in future it will not be necessary to start the Xen daemon manually. xm list should show that Domain-0 is up and running.

Finally, the Xen Virtual Machine Manager can be installed:

cd /Server
rpm -Uvh gnome-python2-gnomekeyring
cd /VT
rpm -Uvh virt-manager

Having installed Xen on RHEL, I was unable to install any Windows guests because the CPU on my machine doesn’t have Intel-VT or AMD-V extensions. It’s also worth noting that my attempts to install Xen on my notebook PC a few months ago were thwarted as, every time I booted into the Xen kernel, I was greeted with the following error:

KERNEL PANIC: Cannot execute a PAE-enabled kernel on a PAE-less CPU!

It turns out that Pentium M processors with a 400MHz front side bus do not support physical address extension (PAE) – including the Pentium M 745 (“Dothan”) CPU that my notebook PC uses – and PAE is one of the prerequisites for Xen.

Finally, it’s worth noting that my RHEL installation of Xen is running on a 32-bit 1.5GHz Pentium 4 (“Willamette”) CPU whereas the XenSource products require that the CPU supports a 64-bit instruction set. The flags shown with cat /proc/cpuinfo can be a bit cryptic but Todd Allen’s CPUID makes things a little clearer (if not quite as clear as CPU-Z is for Windows users).

Microsoft infrastructure optimisation

I don’t normally write about my work on this blog (at least not directly) but this post probably needs a little disclaimer as, a few months ago, I started a new assignment working in my employer’s Microsoft Practice and, whilst I’m getting involved in all sorts of exciting stuff, it’s my intention that a large part of this work will involve consultancy engagements to help customers understand the opportunities for optimising their infrastructure. Regardless of my own involvement in this field, I’ve intended to write a little about Microsoft’s infrastructure optimisation (IO) model since I saw Garry Corcoran of Microsoft UK present at the Microsoft Management Summit highlights event back in May… this is a little taster of what IO (specifically Core IO) is about.

Based on the Gartner infrastructure maturity model, the Microsoft infrastructure optimisation model is broken into three areas around which IT and security process is wrapped:

  • Core infrastructure optimisation.
  • Business productivity infrastructure optimisation.
  • Application platform infrastructure optimisation.

Organisations are assessed on a number of capabilities and judged to be at one of four levels (compared with seven in the Gartner model):

  • Basic (“we fight fires” – IT is a cost centre) – an uncoordinated, manual infrastructure, knowledge not captured.
  • Standardised (“we’re gaining control” – IT becomes a more efficient cost centre) – a managed IT infrastructure with limited automation and knowledge capture.
  • Rationalised (IT is a business enabler) – managed and consolidated IT infrastructure with extensive automation, knowledge captured for re-use.
  • Dynamic (IT is a strategic asset) – fully automated management, dynamic resource usage, business-linked SLAs, knowledge capture and use automated.

It’s important to note that an organisation can be at different levels for each capability and that the capability levels should not be viewed as a scorecard – after all, for many organisations, IT supports the business (not the other way around) and basic or standardised may well be perfectly adequate – but the overall intention is to move from IT as a cost centre to a point where the business value exceeds the cost of investment. For example, Microsoft’s research (carried out by IDC) indicated that by moving from basic to standardised the annual IT labour cost per PC could be reduced from $1320 to $580, and rationalisation could yield further savings, down to $230 per PC per annum. Of course, this needs to be balanced against the investment cost (however that is measured). Indeed, many organisations may not want a dynamic IT infrastructure as this will actually increase their IT spending; however, the intention is that the business value returned will far exceed the additional IT costs – the real aim is to improve IT efficiency, increase agility and shift the investment mix.

Microsoft and its partners make use of modelling tools from Alinean to deliver infrastructure optimisation services (and new models are being released all the time). Even though this is clearly a Microsoft initiative, Alinean was formed by ex-Gartner staff and the research behind Core IO was conducted by IDC and Wipro. Each partner has its own service methodology wrapped around the toolset but the basic principles are similar: an assessment is made of where an organisation currently is and where it wants to be, capability gaps are identified, and further modelling can help to pinpoint the areas where investment has the potential to yield the greatest business benefit and what will be required in order to deliver such results.

It’s important to note that this is not just a technology exercise – there is a balance to be struck between people, processes and technology. Microsoft has published a series of implementer resource guides to help organisations to make the move from basic to standardised, standardised to rationalised and from rationalised to dynamic.

Links

Core infrastructure self-assessment.
Microsoft infrastructure optimisation journey.

Windows Server Virtualization unwrapped

Last week, Microsoft released Windows Server 2008 Release Candidate 0 (RC0) to a limited audience and, hidden away in RC0 is an alpha release of Windows Server Virtualization (the two updates to apply from the %systemroot%\wsv folder are numbered 939853 and 929854).

I’ve been limited in what I can write about WSV up to now (although I did write a brief WSV post a few months back). However, at yesterday’s event about creating and managing a virtual environment on the Microsoft platform (more on that soon), I heard most of what I’ve been keeping under wraps presented by Microsoft UK’s James O’Neill and Steve Lamb (with a few more snippets on Tuesday from XenSource), meaning that it’s now in the public domain and I can post it here (although I have removed a few of the finer points that are still under NDA):

  • Windows Server Virtualization uses a totally new architecture – it is not just an update to Virtual Server 2005. WSV is Microsoft’s first hypervisor-based virtualisation product: the hypervisor is approximately 1MB in size and is 100% Microsoft code (for reliability and security), with no third-party extensions. The hypervisor does little more than partition resources to provide access to hardware, and keeping it closed to third parties provides protection against theoretical hyperjacking attacks such as the blue pill (where a rootkit is installed in the hypervisor and is practically impossible to detect).
  • WSV requires a 64-bit CPU and hardware assisted virtualisation (Intel VT or AMD-V) enabled in the BIOS (often disabled by default).
  • There will also be two methods of installation for WSV:
    • Full installation as a role on Windows Server 2008 (once enabled, a reboot “slides” the hypervisor under the operating system and it becomes virtualised).
    • Server core role for the smallest and most secure footprint (with the advantage of fewer patches to apply).
  • Initial builds require a full installation but the final release of WSV will run on Server Core.
  • The first installation becomes the parent, with subsequent VMs acting as children. The parent has elevated permissions. The host/guest relationship no longer applies with the hypervisor model; however if the parent fails, the children will also fail. This may be mitigated by clustering parents and using quick migration to fail children over to another node.
  • Emulated drivers are still available with wide support (440BX chipset, Adaptec SCSI, DEC Ethernet, etc.) but they have a costly performance overhead with multiple calls back and forth between parent and child and context switches from user to kernel mode. WSV also includes a synthetic device driver model with virtual service providers (VSPs) for parents and virtual service clients (VSCs) for children. Synthetic drivers require no emulation and interact directly with hardware assisted virtualisation, providing near-native performance. XenSource drivers for Linux will be compatible with WSV.
  • There will be no USB support – Microsoft sees most USB demand coming from client virtualisation and, although USB support may be required for some server functions (e.g. smartcard authentication), it will not be provided in the initial WSV release.
  • Microsoft considers memory paging to be of limited use and states that over-committing RAM (memory ballooning) is only of practical use in a test and development environment. Furthermore, it can actually reduce performance where applications/operating systems attempt to make full use of all available memory and therefore cause excessive paging between physical and virtual RAM. Virtual servers require the same volumes of memory and disk as their physical counterparts.
  • In terms of operating system support, Windows Vista and Windows Server 2008 already support synthetic device drivers (with support being added to Windows Server 2003). In response to customer demand, Microsoft has worked with XenSource to provide a platform that will allow both Linux and Windows workloads to run with near-native performance through XenSource’s synthetic device drivers for Linux. Emulation is still available for other operating systems.
  • Virtual Server VMs will run in WSV as the VHD format is unchanged; however, the virtual machine additions will need to be removed and replaced with integration components (ICs) for the synthetic drivers, installed using the integration services setup disk (similar to the virtual machine additions, but without emulation), to provide enlightenment for access to the VMbus.
  • Hot addition of resources is not included in the initial WSV release.
  • Live migration will not be included within the first WSV release but quick migration will be. The two technologies are similar but quick migration involves pausing a VM, writing RAM to a shared disk (saving state) and then loading the saved state into RAM on another server and restarting the VM – typically in around 10 seconds – whereas live migration copies the RAM contents between two servers using an iterative process until there are just a few dirty pages left, then briefly pausing the VM, copying the final pages, and restarting on the new host with sub-second downtime.
  • WSV will be released within 180 days of Windows Server 2008.

Looking forward to Windows Server 2008: Part 1 (Server Core and Windows Server Virtualization)

Whilst the first two posts that I wrote for this blog were quite generic, discussing such items as web site security for banks and digital rights management, this time I’m going to take a look at the technology itself – including some of the stuff that excites me right now with Microsoft’s Windows Server System.

Many readers will be familiar with Windows XP or Windows Vista on their desktop but may not be aware that Windows Server operating systems also have a sizable chunk of the small and medium-sized server market.  This market is set to expand as more enterprises implement virtualisation technologies (running many small servers on one larger system, which may run Windows Server, Linux, or something more specialist like VMware ESX Server).

Like XP and Vista, Windows 2000 Server and Advanced Server (both now defunct), Windows Server 2003 (and R2) and soon Windows Server 2008 have their roots in Windows NT (which itself has a lot in common with LAN Manager).  This is both a blessing and a curse: while the technology has been around for a few years now and is (by and large) rock solid, the need to retain backwards compatibility can mean that new products struggle to balance security and reliability with legacy code.

Microsoft is often criticised for a perceived lack of system stability in Windows but it’s my experience that a well-managed Windows Server is a solid and reliable platform for business applications.  The key is to treat a Windows Server computer as if it were the corporate mainframe rather than adopting a personal computer mentality for administration.  This means strict policies controlling the application of software updates and application installation as well as consideration as to which services are really required.

It’s this last point that is most crucial.  By not installing all of the available Windows components and by turning off non-essential services, it’s possible to reduce the attack surface for any would-be hacker.  A reduced attack surface not only means less chance of falling foul of an exploit but it also means fewer patches to deploy.  It’s with this in mind that Microsoft produced Windows Server Core – an installation option for the forthcoming Windows Server 2008 product (formerly codenamed Longhorn Server).

As the name suggests, Windows Server Core is a version of Windows with just the core operating system components and a selection of server roles available for installation (e.g. Active Directory domain controller, DHCP server, DNS server, web server, etc.).  Server Core doesn’t have a GUI as such and is entirely managed from a command prompt (or remotely using standard Windows management tools).  Even though some graphical utilities can be launched (like Notepad), there is no Start Menu, no Windows Explorer, no web browser and, crucially, a much smaller system footprint.  The idea is that core infrastructure and application servers can be run on a server core computer, either in branch office locations or within the corporate data centre and managed remotely.  And, because of the reduced footprint, system software updates should be less frequent, resulting in improved server uptime (as well as a lower risk of attack by a would-be hacker).

If Server Core is not exciting enough, then Windows Server Virtualization should be.  I mentioned virtualisation earlier and it has certainly become a hot topic this year.  For a while now, the market leader (at least in the enterprise space) has been VMware (and, as Tracey Caldwell noted a few weeks ago, VMware shares have been hot property), with their Player, Workstation, Server and ESX Server products.  Microsoft, Citrix (XenSource) and a number of smaller companies have provided some competition but Microsoft will up the ante with Windows Server Virtualization, which is expected to ship within 180 days of Windows Server 2008.  No longer running as a guest on a host operating system (as the current Microsoft Virtual Server 2005 R2 and VMware Server products do), Windows Server Virtualization will directly compete with VMware ESX Server in the enterprise space, with a totally new architecture including a thin “hypervisor” layer facilitating direct access to virtualisation technology-enabled hardware and allowing near-native performance for many virtual machines on a single physical server.  Whilst Microsoft is targeting the server market with this product (they do not plan to include the features that would be required for a virtual desktop infrastructure, such as USB device support and sound capabilities) it will finally establish Microsoft as a serious player in the virtualisation space (even as the market leader within a couple of years).  Furthermore, Windows Server Virtualization will be available as a supported role on Windows Server Core; allowing for virtual machines to be run on an extremely reliable and secure platform.  From a management perspective there will be a new System Center product – Virtual Machine Manager, allowing for management of virtual machines across a number of Windows servers, including quick migration, templated VM deployment and conversion from physical and other virtual machine formats.

Windows Server Core and Windows Server Virtualization are just two of the major improvements in Windows Server 2008.  Over the coming weeks, I’ll be writing about some of the other new features that can be expected with this major new release.

Windows Server 2008 will be launched on 27 February 2008.  It seems unlikely that it will be available for purchase in stores at that time; however corporate users with volume license agreements should have access to the final code by then.  In the meantime, it’s worth checking out Microsoft’s Windows Server 2008 website and the Windows Server UK User Group.

[This post originally appeared on the Seriosoft blog, under the pseudonym Mark James.]