Using an iPhone for e-mail with Exchange Server

Whilst I’m not trying to suggest that the Apple iPhone is intended for business users (I’d suggest that it’s more of a consumer device and that businesses are wedded to their BlackBerrys or, more sensibly in my opinion, Windows Mobile devices), there does seem to have been a lot of talk about how it can’t work with Microsoft Exchange Server – either blaming Apple for not supporting the de facto standard server for corporate e-mail, or Microsoft for not being open enough. Well, I’d like to set the record straight: the iPhone does work with Exchange Server (and doesn’t even need the latest version).

My mail server is running Microsoft Exchange Server 2003 SP2 and has nothing unusual about its configuration. I have a relatively small number of users on the server, so a single server provides secure Outlook Web Access (OWA, via HTTPS), Outlook Mobile Access (OMA, via HTTP) and mailbox access (MAPI-RPC for Outlook, IMAP for Apple Mail, WebDAV via OWA for Entourage). I have also enabled HTTP-RPC access (as described by Daniel Petri and Justin Fielding) so that I can use a full Outlook client from outside the firewall.

It’s the IMAP access that’s the critical component of the connection as, whichever configuration is employed, the iPhone uses IMAP for communication with Exchange Server and so two configuration items must be in place:

  • The server must have the IMAP service started.
  • The user’s mailbox must be enabled for IMAP access.

Many organisations will not allow IMAP access to servers, either due to the load that POP/IMAP access places on the server or for reasons of security (IMAP can be secured using SSL, as I have done – Eriq Neale has written a step by step guide on how to do this for Windows Small Business Server 2003 and the process is identical for Exchange Server 2003).

In addition, firewalls must allow access to the Exchange server on the appropriate TCP ports – IMAP defaults to TCP port 143; however, secure IMAP (IMAPS) uses TCP port 993. SMTP access will also be required (typically on TCP port 25 or 587). You can confirm that the ports are open using telnet servername portnumber.
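
The same telnet-style test can be scripted. This is a minimal sketch using Python’s standard socket module to check whether the IMAP, IMAPS and SMTP ports are reachable (the server name is a placeholder – substitute your own):

```python
import socket

def check_port(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts and DNS failures
        return False

# Placeholder server name – substitute your own Exchange server
for port in (143, 993, 25, 587):
    state = "open" if check_port("servername.domainname.tld", port) else "closed"
    print(port, state)
```

This only proves that the TCP connection can be established – it says nothing about authentication or SSL negotiation, but it is usually enough to distinguish a firewall problem from a server configuration problem.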

Note that even if the connection between the iPhone and Exchange Server is secure, there are no real device access controls (or remote wipe capabilities) for an iPhone. Eriq Neale also makes the point that e-mail is generally transmitted across the Internet in the clear and so is not a secure method of communication; however it is worth protecting login credentials (if nothing else) by securing the IMAP connection with SSL.

Interestingly, the iPhone has two mail account setup options that could work with Exchange Server and experiences on the ‘net seem to be varied. IMAP should work for any IMAP server; however there is also an Exchange option, which didn’t seem to work for me until I had HTTP-RPC access properly configured on the server. That fits with the iPhone Topic article on connecting the iPhone to Exchange, which indicates that both OWA (WebDAV) and HTTP-RPC are required (these would not be necessary for pure IMAP access).

The final settings on my iPhone are:

Settings – Mail – Accounts – accountname

Exchange Account Information:
  • Name: displayname
  • Address: username@domainname.tld
  • Description: e.g. Work e-mail

Incoming Mail Server:
  • Host Name: servername.domainname.tld
  • User Name: username
  • Password: password

Outgoing Mail Server:
  • Host Name: servername.domainname.tld
  • User Name: username
  • Password: password

Advanced – Mailbox Behaviors:
  • Drafts Mailbox: Drafts
  • Sent Mailbox: Sent Items
  • Deleted Mailbox: Deleted Items

Advanced – Deleted Messages:
  • Remove: Never

Advanced – Incoming Settings:
  • Use SSL: On
  • Authentication: NTLM
  • IMAP Path Prefix:
  • Server Port: 993

Advanced – Outgoing Settings:
  • Use SSL: On
  • Authentication: NTLM
  • Server Port: 25

(Advanced settings were auto-configured.)

A few more points worth noting:

Hyper-V is the new name for Windows Server Virtualization

Last week I was in Redmond, at a Windows Server 2008 technical conference. Not a word was said about Windows Server 2008 product packaging (except that I think one speaker may have said that the details for the various SKUs were still being worked on). Well, it’s amazing how things can change in a few days, as one of the big announcements at this week’s TechEd IT Forum 2007 in Barcelona is the Windows Server 2008 product pricing, packaging and licensing. I don’t normally cover “news” here – there are others who do a much better job of that than I would – but I am interested in the new Hyper-V announcement.

Hyper-V is the new name for the product codenamed Viridian, also known as Windows Server Virtualization, and expected to ship within 180 days of Windows Server 2008. Interestingly, as well as the SKUs that were expected for web, standard, enterprise, datacenter and Itanium editions of Windows Server 2008, there will be versions of Windows Server 2008 standard, enterprise and datacenter editions without the Hyper-V technology (Hyper-V will only be available for x64 versions of Windows Server 2008) as well as a separate SKU for Hyper-V priced at just $28.

$28 sounds remarkably low – why not just make it free (and greatly simplify the product model)? In any case, this places Hyper-V in a great position to compete on price with Citrix XenServer or VMware ESX Server 3i (it should be noted that I have yet to see pricing announced for ESX Server 3i) – I’ve already written that I think Hyper-V has the potential to compete on technical merit (something that its predecessor, Virtual Server 2005 R2, couldn’t).

At the same time, Microsoft announced a Windows Server virtualisation validation programme – designed to validate Windows Server with virtualisation software and enable Microsoft to offer co-operative technical support to customers running Windows Server on validated, non-Windows server virtualisation platforms (such as Xen) – as well as virtualisation solution accelerators and general availability of System Center Virtual Machine Manager 2007.

Whilst VNU are reporting that VMware are “unfazed” by the Microsoft Hyper-V announcement, I have absolutely no doubt that Microsoft is serious about making a name for itself in the x86/x64 server virtualisation market.

The great iPhone insurance swindle

A few days ago, I wrote about my purchase of the latest consumer gadget – the Apple iPhone.

Unlike many others, I didn’t queue and the transaction was smooth but I was concerned that I was mis-sold insurance for the device. The conversation went something like this (I didn’t record the exact words at the time – but I wish I had):

O2 sales representative – let’s call her Emma (because, according to my sales receipt, that was her name): “Would you like any insurance for your iPhone? It’s only £7.50 a month and covers you for theft, accidental loss or damage not covered by the warranty but it’s only available at the time of purchase – not afterwards – so you would need to take it out now.”

Me: “No thanks – I know I’d be committed to the contract but even so that that’s a lot of money over 18 months. I’ll take the risk of another £269 to replace the iPhone.”

Emma: “Are you sure, because it wouldn’t be £269 – it’s more like £600 for a new iPhone from O2?”

Me: “How can that be – the handset isn’t subsidised, so I should only need to pay for a new handset at the normal retail price?”

Emma: “We don’t make the rules… that’s Apple.”

I subsequently agreed to buy the insurance, after checking that I could cancel at any time.

Yesterday, I asked about insurance in a Carphone Warehouse store and was given a similar response. I also asked in an Apple Store and was told that they thought it was just the cost of a new iPhone but that I’d need to check with O2.

Hmm… I smell a rat here. Especially when the O2 website says that:

“[…]insurance must be purchased within 28 days of activating your iPhone account with O2.”

So, not at the time of purchase then.

It got even worse when I read a PC Pro article about iPhone first impressions, from which I quote:

[in respect of] “O2 pushing £7.50/month insurance, to cover the situation that in the case of a lost iPhone, O2 will require the unlucky punter to buy a new phone and undertake a second contract”


We checked with O2 this morning and, unbelievably, this is true. If you lose your iPhone without insurance, then you will have to splash out on a new handset, and take out a new contract, paying two monthly tariffs at once. Now that is a costly mistake.


O2 has changed tack this morning, and is now claiming that customers won’t have to pay for two contracts at once, but they will have to source an iPhone on their own.

Now, the exact wording in the terms of service (under “Ending the agreement”) is:

“8.3 If this Agreement is ended during the Minimum Period, you may be required to pay us the monthly subscription charges up to the end of that Minimum Period. This does not apply if you end the Agreement for the reasons in paragraph 8.4 or if you purchase a new iPhone from us, but in this case you agree that a new Minimum Period will apply.

8.4 You may end this Agreement by giving us written notice if:

(a) we break this Agreement in any material way and we do not correct the situation within 7 days of receipt of your written request;

(b) we go into liquidation or a Receiver is appointed over our assets; or

(c) we increase charges for calls, messages or data that form part of your inclusive allowance or your Line Rental Charges, or change this Agreement to your significant disadvantage, in accordance with paragraph 9.2 of the General Terms, provided you give us a minimum of 30 days’ written notice (and provided you notify us within one month of our telling you about the changes). This does not apply where the increase or change relates solely to Additional Services in which case you may cancel, or stop using, that Additional Service.”

[Emphasis added by the author for clarity]

I’m no lawyer (so please don’t interpret anything written here as legal advice) but that sounds as though I can just buy a new iPhone (from O2) and connect it to the account, whereupon a new 18-month contract will start but, crucially, there is no mention of the price of the replacement.

After spending much of the day responding on the Apple discussion forums (and not having received a response to my online query via the O2 website), I called O2’s customer service department on 08705860860. After a 20 minute discussion, I got confirmation that:

  1. A replacement handset would be available at the current recommended retail price of the iPhone.
  2. The original contract would be ended if a new iPhone was purchased; however a new 18 month contract period would commence.

The exact text of the response I received from O2 was:

“Hello Mark,

As per our discussion today. If you were to purchase a replacement handset you would pay the Recommended Retail Price for the replacement (as of the 13/11/07 it is £269, this price is subject to change). However, please be assure that you will only have to pay the same price as any new customer and would not be required to pay a premium due to the lose [sic].

The only concession you would need to make is that your contract would have to start again from the time of purchasing the replacement please see terms and conditions (relevant section follows).

“8.3 If this Agreement is ended during the Minimum Period, you may be required to pay us the monthly subscription charges up to the end of that Minimum Period. This does not apply if you end the Agreement for the reasons in paragraph 8.4 or if you purchase a new iPhone from us, but in this case you agree that a new Minimum Period will apply.”


Kind regards and enjoy your I-phone [sic],

[Name removed to protect the O2 employee’s privacy]
O2 Customer Service.”

That sounds perfectly fair to me, so why are the iPhone retailers pushing insurance on people who probably don’t need it? Sure, £269 is a lot to stump up if you lose your phone but it’s a big difference from the £600 that I was quoted for a new handset and £135 is not a small amount for insurance that I probably don’t need (chances are my household contents insurance covers me – albeit with a large excess). It seems to me that O2 are preying on consumers’ insecurities (and Carphone Warehouse seem to be even worse, based on the contents of an Apple forum thread).

I’m surprised that Apple would risk their strong brand dealing with companies that operate in this manner (I guess that’s what happens when you deal with the Devil – i.e. pretty much any telecommunications company) but I’m now seeking confirmation that my insurance has been cancelled without charge (I believe that UK law gives me 14 days to cool off from any insurance policy and I’ve yet to receive any written details of the cover) as well as a goodwill credit on my O2 account to cover me for the worry and inconvenience that this has caused. I’ll post an update if there’s any significant news on this…

Managing simultaneous access to resources from both internal and external DNS namespaces

When I originally set up my Active Directory, I used an internal DNS namespace, with a .local TLD (as was the advice at the time – no longer recommended). Essentially, my external domains are managed by my hosting providers and I manage the internal namespace. Simple.

Then I set up a few Internet-facing resources at home. I decided to create a secondary forest using a subdomain of my main external DNS namespace so that:

  • domain.local was the AD-integrated DNS for internal (private) resources.
  • domain.tld was managed by my hosting provider for external resources.
  • subdomain.domain.tld was the AD-integrated DNS for Internet-facing resources under my control.

I also added a forwarding rule on the DNS server to send requests for subdomain.domain.tld to the authoritative DNS server for the domain (under my control) but to send requests for domain.tld and all other domains to the ISP’s DNS servers.

That worked well but, because my mail server is known by two different names internally and externally (mailserver.domain.tld for external access and mailserver.subdomain.domain.tld for internal access) and these actually resolve to the same physical server, I get certificate errors when using the internal name. Furthermore, I’m unable to access the server from inside my firewall using the external name, because the mailserver.domain.tld name actually resolves to the IP address of my router, from where IP filtering and NAT forwarding rules allow the packets to be forwarded to the mail server.

I needed mail clients to work with the same server name (mailserver.domain.tld) whether they were accessing the server on the internal or external networks, so I made some changes:

  1. My hosting provider sent me a copy of the DNS zone file for domain.tld and I imported this to my internal DNS server.
  2. Next, I deleted the forwarding rule for domain.tld (leaving the one for subdomain.domain.tld in place).
  3. Then, I edited the entries for the servers that needed to be accessed with the same name internally and externally so that instead of resolving to the external IP address of my router, they resolved to the actual IP address of the server (which uses an RFC 1918 internal IP address range).
  4. Finally, nslookup helped me to confirm that the addresses were resolving correctly on the internal and external networks – effectively getting one set of results on the Internet from my hosting provider and another set on the internal network from my DNS server.
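
This arrangement is commonly known as “split-horizon” DNS: the same name resolves differently depending on where the query comes from. As a rough illustration of the behaviour (the names and addresses are placeholders, and this models the decision rather than implementing a real resolver):

```python
# Zone data imported from the hosting provider, with the records for dual-named
# servers edited to point at internal (RFC 1918) addresses
INTERNAL_ZONE = {"mailserver.domain.tld": "192.168.1.10"}

# What the hosting provider's DNS returns to the outside world
EXTERNAL_ZONE = {"mailserver.domain.tld": "203.0.113.1"}  # the router's public IP

def resolve(name, internal_client):
    """Resolve a name as the internal or external DNS would answer it."""
    zone = INTERNAL_ZONE if internal_client else EXTERNAL_ZONE
    return zone.get(name)

print(resolve("mailserver.domain.tld", internal_client=True))   # internal address
print(resolve("mailserver.domain.tld", internal_client=False))  # router's public address
```

Internal clients get the server’s real address directly, so certificates for mailserver.domain.tld validate and traffic never has to bounce off the router.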

The new setup looks like this (note that the IP addresses have been changed to protect the innocent):

Managing internal and external DNS lookups to the same resource

Now I can seamlessly access my mail server using the same DNS name (mailserver.domain.tld) from wherever I roam to.

New Apple keyboard

Earlier today, I dropped into the Apple Store in Solihull to pick up a case to protect my iPhone and, as is so often the case with me and computer hardware, I ended up buying something else. Not the 17″ MacBook Pro that I’m still seriously tempted by but the new Apple chiclet keyboard.

Apple Keyboard (top view)

It’s very “Logan’s Run” (well, 1970s sci-fi anyway) but also incredibly comfortable to use, with none of the “stickiness” of the keys on my previous Apple keyboard. If I had one criticism, it would be that the return key on the UK keyboard is a little skinny (the graphic here shows the US version) but nevertheless this was definitely a worthwhile upgrade. Now if only they’d do something about that damned “mighty” mouse…

Say hello to iPhone

Apple iPhone (UK model)

So I did it – I bought an iPhone – two, in fact: one whilst I was in the States earlier this week, which I then returned because, once the sales tax was added, it was nearly as expensive as buying one in the UK and the risk of having a brick on my hands was too great if the AT&T unlock went wrong; the second a few minutes ago from an O2 store close to where I live.

As for all the people who queued up to get one – why? I just waited until the kids were in bed, drove into town and strolled into the store with no queues whatsoever. One question I do have though… if, as reported, this handset is not subsidised, then why did O2 advise me to buy additional insurance on the basis that a replacement unit, should this one get lost, stolen or damaged (outside the warranty conditions), would be £600 rather than £269?

Windows Server 2008 Worldwide Technical Workshop

There haven’t been many blog posts on the site this week but it’s been a full-on week in Redmond at the Windows Server 2008 Worldwide Technical Workshop.

Not many people will write about their experience of attending a conference for the IT industry press (I’m not a journalist – just a blogger) but it’s been great to be labelled as a member of the “worldwide press” (I kid you not) for a few days and I wanted to write something about the experience.

Sign for Building 33

It was a long trip out here and I was pretty tired but also very excited about the event – my first visit to the Microsoft campus. As I waited for the coach that took us from the hotel to the Microsoft Conference Center, I got chatting to Paul Hearns, the editor at ComputerScope (one of Ireland’s leading IT trade publications), and realised that I was probably one of only a small number of bloggers at the event – the distinction being that journalists write objective opinion pieces (at least, that’s the idea – PR is a strong influencing factor) whilst IT bloggers are often enthusiastic techies, with less focus (but an increasingly wide audience).

The calibre of the other attendees was soon apparent as the first person I saw after registration was Paul Thurrott (best known for his SuperSite for Windows, WinInfo updates and the Windows Weekly podcast).

Mark Wilson and Paul Thurrott in 2007

I hold Paul’s work in high regard as he is one of the few tech writers I know of who manages to write objectively about both Windows and Macintosh topics (provoking criticism from both sides – often unfounded). I introduced myself (not expecting Paul to know my work, even though we have exchanged e-mails on occasion) but I’m afraid it’s difficult not to appear a little geeky when you ask someone if they would mind posing for a photo with you.

As the day moved on, I met journalists whose work I was familiar with but whom I only knew from their bios – people like Karen Forster and David Chernicoff – and later I introduced myself to Steven Bink, who constantly amazes me for being able to pump out so many Microsoft news stories from his site.

I also met John Savill, who I always thought was a) American and b) a professional technical writer – it turns out that he’s actually English and, just like me, he has a day job working for a large IT company and writes in his spare time. Also, just like this blog, John’s Windows FAQs started out as being for his own benefit and has become a useful resource for other people. And it turns out that John and I are not alone in this world of part-time IT writers as I hooked up with James Bannan (best known for his work at APC Mag) and Andrew “Dugie” Dugdell.

One thing I was totally unprepared for was the size of the Microsoft campus. I don’t know the exact size but it must cover at least a square mile, on both sides of SR 520. I’m not sure how this compares to the Googleplex or Apple’s base in Cupertino but it certainly puts 5 buildings in Thames Valley Park into context.

Another thing that I found interesting is that there is no building 7… and being sent to a meeting in building 7 is a common prank to play on new employees (thanks to John Howard for providing that little piece of trivia – Scott Guthrie has more trivia about the various buildings on the Microsoft campus).

I’m on my way home now, worn out after 24 sessions in 3 days and yes, I drank the Kool-aid (actually it was Mountain Dew…), picking up a stack of information about Windows Server 2008, as well as meeting some great people. Expect to see plenty of Windows Server 2008 information posted here over the coming weeks.

Off to Redmond

In a few hours’ time, I’ll be catching a flight to Seattle and then spending the next three days as a guest of Microsoft at the Windows Server 2008 Worldwide Technical Workshop in Redmond. Without wanting to sound like a fanboy (I believe that one of the reasons I was invited is that, in spite of generally being an advocate of Microsoft technologies, I’m also critical when they get something wrong – I’d like to hope that the same goes for Apple and the various Linux vendors too), I’m really excited. Not because a suburb of Seattle is top of my list of places to visit (it isn’t) but because I have built my career on implementing Microsoft products and technologies and, even though I work in the Microsoft Practice at a leading IT services company, this invitation has come about in recognition of the work that I put into this blog – I am truly honoured to have been invited.

I haven’t dared mention this trip to anyone other than family, close friends and colleagues (just in case something happened that meant I couldn’t go) but I do know that there are several readers of this blog at Microsoft (both in “corp” and in the UK subsidiary) and I’d like to say a big “thank you” to whoever put my name forward.

Creating and managing a virtual environment on the Microsoft platform

Several months back, I blogged about a Microsoft event with a difference – one which, by and large, dropped the PowerPoint deck and scripted demos in favour of a more hands-on approach. That was the Windows Vista after hours event (which I know has been popular and re-run several times) but then, a couple of weeks back, I attended another one at Microsoft’s new offices in London, this time about creating and managing a virtual environment on the Microsoft platform.

Now, before I go any further I should point out that, as I write this in late 2007, I would not normally recommend Microsoft Virtual Server for an enterprise virtualisation deployment and tend to favour VMware Virtual Infrastructure (although the XenSource products are starting to look good too). My reasons for this are all about scalability – Virtual Server is limited in a number of ways, most notably in that it doesn’t support multiple-processor virtual machines – although it is perfectly suitable for a workgroup/departmental deployment. Having said that, things are changing: next year we will see Windows Server Virtualisation, and the management situation is already improving with System Center Virtual Machine Manager (VMM).

…expect Microsoft to make a serious dent in VMware’s x86 virtualisation market dominance over the next couple of years

Throughout the day, Microsoft UK’s James O’Neill and Steve Lamb demonstrated a number of technologies for virtualisation on a Microsoft platform and the first scenario involved setting up a Virtual Server cluster, building the second node from a Windows Deployment Services (WDS) image (more on WDS later…) and using the Microsoft iSCSI target for shared storage (currently only available as part of Windows Storage Server although there is a free alternative called Nimbus MySAN iSCSI Server) together with the Microsoft iSCSI initiator – included within Windows Vista and Server 2008 (and available for download on Windows 2000/XP/Server 2003).

When clustering Virtual Server, it’s important to understand that Microsoft’s step by step guide for Virtual Server 2005 R2 host clustering includes an appendix containing a script (havm.vbs) to add as a cluster resource in order to allow servers to behave well in a virtual cluster. Taking the script offline effectively saves the virtual machine (VM), allowing the cluster group to be moved to a new node and then bringing the script back online will restore the state of the VM.

After demonstrating building Windows Server 2008 Server Core (using WDS) and full Windows Server 2008 (from an .ISO image), James and Steve demonstrated VMM, the System Center component for server consolidation through virtual migration and virtual machine provisioning and configuration. Whilst the current version of VMM only supports Virtual Server 2005 and Windows Server Virtualisation, a future version will also support the management of XenSource and VMware virtual machines, providing a single point of management for all virtual machines, regardless of the platform.

At this point, it’s probably worth looking at the components of a VMM enterprise deployment:

  • The VMM engine server is typically deployed on a dedicated server, and managed from the VMM system console.
  • Each virtual server host has a VMM agent installed for communication with the VMM engine.
  • Library servers can be used to store templates, .ISO images, etc. for building the virtual infrastructure, with optional content replication using distributed file system replication (DFS-R).
  • SQL Server is used for storage of configuration and discovery information.
  • VMM uses a job metaphor for management, supporting administration from graphical (administration), web (delegated provisioning) or command line interfaces. The command line interface is provided through VMM extensions for Windows PowerShell (for which a cmdlet reference is available for download) and the GUI allows identification of the equivalent PowerShell command.

Furthermore, Windows Remote Management (WinRM/WS-Management) can be used to tunnel virtual machine management through HTTPS, allowing a virtual host to be remotely added to VMM.

VMM is currently available as part of an enterprise server management license; however it will soon be available in workstation edition, priced per physical machine.

The next scenario was based around workload management, migrating virtual machines between hosts (in a controlled manner). One thing that VMM cannot do is dynamically redistribute the workload between virtual server hosts – in fact Microsoft were keen to point out that they do not consider virtualisation technology to be mature enough to make the necessary technical decisions for automatic resource allocation. This is one area where my opinion differs – the Microsoft technology may not yet be mature enough (and many organisations’ IT operations processes may not be mature enough) but ruling out dynamic workload management altogether runs against the idea of creating a dynamic data centre.
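
To illustrate the kind of decision such a feature would have to automate, here’s a deliberately naive sketch of a workload rebalancer (the 20% imbalance threshold is entirely made up, and real products would weigh memory, disk and network too):

```python
def suggest_move(host_loads):
    """host_loads: dict of host name -> CPU utilisation (%).

    Return (from_host, to_host) if the imbalance justifies migrating a VM
    from the busiest host to the least busy one, otherwise None.
    """
    busiest = max(host_loads, key=host_loads.get)
    idlest = min(host_loads, key=host_loads.get)
    if host_loads[busiest] - host_loads[idlest] < 20:  # hypothetical threshold
        return None  # churn for little benefit - leave things alone
    return (busiest, idlest)

print(suggest_move({"host1": 85, "host2": 30, "host3": 55}))  # ('host1', 'host2')
print(suggest_move({"host1": 50, "host2": 45}))               # None
```

Even this toy shows why caution is warranted: pick the threshold badly and VMs ping-pong between hosts, which is exactly the kind of immature automatic decision-making Microsoft was wary of.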

It’s worth noting that there are two main methodologies for virtual machine migration:

  1. Quick migration requires shared storage (e.g. in a cluster scenario) with the saving of the VM state, transfer of control to another cluster node, and restoration of the VM on the new node. This necessarily involves some downtime but is fault tolerant with the main considerations being the amount of RAM in the VM and the speed at which this can be written to or read from the disk.
  2. Live migration is more complex (and will not be implemented in the forthcoming release of Windows Server Virtualization), involving copying the contents of the virtual machine’s RAM between two hosts whilst it is running. Downtime should be sub-second; however there is a requirement to schedule such a migration and it does involve copying the contents of the virtual machine’s memory across the network.
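
For quick migration, the downtime can be estimated from the VM’s memory size and the shared-storage throughput. A back-of-the-envelope sketch (the throughput figures are illustrative, not measured):

```python
def quick_migration_downtime(vm_ram_mb, save_mb_per_s, restore_mb_per_s):
    """Approximate downtime in seconds for a quick migration: save the VM's
    memory state to shared storage, then read it back on the new cluster node."""
    return vm_ram_mb / save_mb_per_s + vm_ram_mb / restore_mb_per_s

# e.g. a 2 GB VM with 100 MB/s write and 120 MB/s read to the shared storage
print(round(quick_migration_downtime(2048, 100, 120), 1))  # roughly 37.5 seconds
```

So a quick migration of a large-memory VM over slow storage can mean a noticeable outage, which is why the amount of RAM and the disk speed are called out above as the main considerations.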

Some time ago, I wrote about using the Virtual Server Migration Toolkit (VSMT) to perform a physical to virtual (P2V) conversion. At that time, the deployment technology in use was Automated Deployment Services (ADS) but ADS has now been replaced with Windows Deployment Services (WDS), part of the Windows Automated Installation Kit (AIK). WDS supports imaged deployment using Windows imaging format (.WIM) files for installation and boot images, or legacy images (not really images at all, but RIS-style file shares), including support for pending devices (prestaged computer accounts based on the machine’s GUID).

P2V capabilities are now included within VMM, with a wizard for gathering information about the physical host server and then converting it to a virtual format, including analysis of the most suitable host using a star system for host ratings based on CPU, memory, disk and network availability. At the time of writing, VMM supports P2V conversion, virtual to virtual (V2V) conversion from a running VM (strangely, Microsoft still refer to this as P2V) and V2V file format conversion and optimisation (from competing virtualisation products), but not virtual to physical (V2P) conversion (this may be possible using a Windows Vista System Restore but there would be issues around hardware detection – success is more likely by capturing a virtual machine image in WDS and then deploying that to physical hardware).

In addition, VMM supports creating template VMs by cloning a VM that is not currently running. It was also highlighted that removing a VM from VMM will actually delete the virtual machine files – not simply remove them from the VMM console.
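
The star-rating idea is easy to picture. Here’s an illustrative sketch (this is my own simple average, not VMM’s actual, unpublished weighting) that turns a host’s free CPU, memory, disk and network headroom into a 0–5 star score:

```python
def host_rating(cpu_free, mem_free, disk_free, net_free):
    """Each argument is a free-capacity fraction between 0.0 and 1.0.

    Returns a 0-5 star rating by averaging the four headroom figures.
    (Illustrative only - VMM's real algorithm and weightings may differ.)
    """
    score = (cpu_free + mem_free + disk_free + net_free) / 4
    return round(score * 5)

print(host_rating(0.8, 0.6, 0.9, 0.7))  # 4 stars - plenty of headroom
print(host_rating(0.1, 0.2, 0.1, 0.2))  # 1 star - already heavily loaded
```

A placement wizard then simply proposes the host with the most stars as the migration target.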

The other components in the virtual machine management puzzle are System Center Operations Manager (a management pack is available for server health monitoring and management, performance reporting and analysis, including the ability to monitor both the host server workload and the VMs running on the server), System Center Configuration Manager (for patch management and software upgrades) and System Center Data Protection Manager (DPM), which allows for virtual machine backup and restoration as well as disaster recovery. DPM builds on Windows’ Volume Shadow Copy Service (VSS) technology to take snapshots of running applications, with agents available for Exchange Server, SharePoint, SQL Server and Virtual Server. Just like traditional backup agents, the DPM agents can be used within the VMs for granular backups, or each VM can be treated as a “black box”, by running just the Virtual Server agent on the hosts and backing up entire VMs.

The final scenarios were all based around Windows Server Virtualization, including running Virtual Server VMs in a WSV environment. WSV is an extensive topic with a completely new architecture and I’ve wanted to write about it for a while but was prevented from doing so by an NDA. Now that James has taken the wraps off much of what I was keeping quiet about, I’ve written a separate post about WSV.

Finally, a couple of points worth noting:

  • When using WDS to capture an image for deployment to a VM, it’s still necessary to sysprep that machine.
  • Virtualisation is not a “silver bullet” – even though Windows Server Virtualisation on hardware that provides virtualisation assistance will run at near native speeds, Virtual Server 2005 is limited by factors of CPU speed, network and disk access and available memory that can compromise performance. In general, if a server is regularly running at ~60-75% CPU utilisation then it’s probably not a good virtualisation candidate but many servers are running at less than 15% of their potential capacity.
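
That rule of thumb is simple enough to capture in a few lines (the thresholds are the ~60% and 15% figures quoted above; a real assessment would also look at disk, network and memory):

```python
def p2v_candidate(avg_cpu_percent):
    """Classify a physical server as a virtualisation candidate from its
    average CPU utilisation, using the rough thresholds discussed above."""
    if avg_cpu_percent >= 60:
        return "poor candidate"   # already working hard; leave it physical
    if avg_cpu_percent < 15:
        return "good candidate"   # mostly idle capacity going to waste
    return "assess further"       # consider disk, network and memory too

print(p2v_candidate(8))    # good candidate
print(p2v_candidate(70))   # poor candidate
print(p2v_candidate(40))   # assess further
```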

Microsoft’s virtualisation technology has come a long way and I expect Microsoft to make a serious dent in VMware’s x86 virtualisation market dominance over the next couple of years. Watch this space!

A light-hearted look at infrastructure optimisation

I’ve written before about Microsoft infrastructure optimisation (IO), including a link to the online self-assessment tool but I had to laugh when I saw James O’Neill’s post on the subject. I’m sure that James won’t mind me repeating his IO quiz here – basically the more answers from the right side of the table, the more basic the IT operations are and the more from the left side, the more dynamic they are. Not very scientific (far less so than the real analysis tools) and aimed at a lower level than a real IO study but amusing anyway:

The rest of my company… | …involves the IT department in their projects. | …accepts IT guys have a job to do. | …tries to avoid anyone from IT.
My team… | …all hold some kind of product certification. | …read books on the subject. | …struggle to stay informed.
What worries me most in the job is… | …fire, flood or other natural disaster. | …what an audit might uncover. | …being found out.
My department reminds me of… | …’Q branch’ from a James Bond movie. | …Dilbert’s office. | …trench warfare.
Frequent tasks here rely on… | …automated processes. | …a checklist. | …me.
What I like about this job is… | …delivering on the promise of technology. | …it’s indoors and the hours are OK. | …I can retire in 30 years.
If asked about Windows Vista I… | …can give a run-down of how its features would play here. | …repeat what the guy in PC World told me. | …change the subject.
New software generally is… | …an opportunity. | …a challenge. | …something we ban.
My organisation sees “software as a service” as a way to… | …do more things. | …do the same things, more cheaply. | …do the same things without me.
Next year this job will be… | …different. | …the same. | …outsourced.