Microsoft Outlook Web Access (OWA) is great for occasional access to e-mail but if you’re using a non-Microsoft browser (as I often do) then it degrades to a rather sorry state. Consequently, for a couple of years now, I’ve been meaning to get RPC over HTTP (aka Outlook Anywhere) working so that I can use a full Outlook client to access my Exchange Server mailbox when I’m on the road (iPhone access to Exchange Server via IMAP, or Outlook Mobile Access from my Nokia 6021, is useful for checking for messages throughout the day, but I need to run the full Outlook client to filter out the junk e-mail). After doing most of the preparation work some time ago, I didn’t get around to testing it fully – mostly because a lot of my access is from behind an authenticated proxy (and I’m told that Outlook doesn’t like anything getting in the way).
Tonight, I’m in a hotel, and the iBahn connection has no such restrictions, so I finally got around to testing the connection, using Outlook 2007 to communicate with an Exchange Server 2003 (SP2) server.
For me, the process was simplified as I already had OWA working over HTTPS. Firstly, as Daniel highlights, Harry Bates’ RPCNoFrontEnd utility can save a lot of time in checking that the registry keys are correctly set for the RPC proxy server ports, and the Windows Server 2003 resource kit rpccfg /hd command is useful to confirm their operation:
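As a sketch of the sort of check involved (the registry location is the standard one for the RPC proxy; the exact ValidPorts value depends on the server names your clients use):

```
rem Confirm the RPC proxy server port assignments (rpccfg.exe is from the
rem Windows Server 2003 resource kit)
rpccfg /hd

rem The RPC proxy's permitted server:port mappings live in the ValidPorts
rem value - the names listed here must match those the Outlook client uses
reg query "HKLM\SOFTWARE\Microsoft\Rpc\RpcProxy" /v ValidPorts
```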
Secondly, running outlook /rpcdiag gave some useful diagnostic information for confirming that the connection was indeed using HTTPS:
Ironically, I’ve finally got this working with Exchange Server 2003 just before I’m about to move my mail over to a new server running Exchange Server 2007!
As a supplement to my previous post on a BDD 2007 overview and Office 2007 customisation and deployment using BDD 2007, this is a rollup of just about everything I could lay my hands on about Vista and Office deployment. It’s not particularly well structured – let’s just call it a “brain dump”. If anyone has anything extra to add, please leave a comment at the end of this post:
Windows imaging technologies
ImageX (imagex.exe) is a command line tool for manipulating Windows Imaging Format (.WIM) files. It is built using the Windows imaging API (WIMGAPI) together with a .WIM file system filter.
Windows Vista images are HAL-independent and make use of single instance storage. To minimise the amount of space used by Windows Vista installation images, use imagex.exe to apply images to separate folders on a computer and then append these images to the final image.
To modify an image, use imagex.exe to mount it and then apply an unattended setup answer file (unattend.xml).
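The imagex.exe operations described above might look something like this (all file and folder names are illustrative):

```
rem Apply an existing image to a working folder on the technician computer...
imagex /apply install.wim 1 C:\images\build1

rem ...then append that folder's contents to the final image - identical
rem files are stored only once (single instance storage)
imagex /append C:\images\build1 final.wim "Customised build 1"

rem To modify an image offline, mount it read/write first, then service it
rem (e.g. with an unattend.xml answer file) before unmounting and committing
imagex /mountrw final.wim 1 C:\mount
```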
Package Manager (pkgmgr.exe) can be used to update both image files and computers that have already had an image applied:
When used to update computers that have already had an image applied, pkgmgr.exe can install, configure and update features in Windows Vista (e.g. installed components, device drivers, language packs, updates). It can also be used with an unattended installation answer file for new installations.
When adding additional drivers to an existing Windows Vista image, use pkgmgr.exe to add the drivers from a folder.
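A hedged example of the pkgmgr.exe syntax for servicing a mounted image (the answer file would reference the driver folder in its offlineServicing pass; paths here are illustrative):

```
rem Install the packages/drivers described in unattend.xml into the Windows
rem installation mounted at C:\mount, logging the operation
pkgmgr /o:"C:\mount;C:\mount\Windows" /n:"C:\drivers\unattend.xml" /l:pkgmgr.log
```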
Windows Vista deployment
Windows Setup (setup.exe) for Vista is now GUI-only – winnt.exe and winnt32.exe are no more.
Windows installation is structured around a number of configuration passes:
unattend.xml is a single unattended installation answer file, replacing multiple files in previous versions of Windows – including unattend.txt, cmdlines.txt, winbom.ini, oobeinfo.ini and sysprep.inf.
To avoid prompting users for input during the installation of Windows Vista, create an unattended setup installation file and copy this to a USB flash drive, then ensure that the flash drive is present during Windows Vista installation.
unattend.xml must be renamed to autounattend.xml when used on removable media during installation and replaces winnt.sif.
The Out-of-Box Experience (OOBE) is now known as Windows Welcome and is controlled with oobe.xml, which includes options for Windows Welcome, ISP sign-up and the Windows Vista Welcome Center.
Disk repartitioning can be configured in the first pass of the Windows PE section of unattend.xml.
When using multiple hardware configurations, create a distribution point that includes an Out-of-Box Drivers folder.
When using WDS with computers that do not have PXE capabilities, create a WDS discovery image and use this to create a bootable CD for Windows Vista installation.
When using WDS on a server that also provides DHCP services, enable DHCP Option 60 and configure WDS not to listen on port 67 (DHCP needs that port).
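With the wdsutil command line tool, those two settings can be made in one go (a sketch, run on the WDS/DHCP server itself):

```
rem Stop WDS claiming UDP port 67 (DHCP owns it) and advertise the WDS
rem server to PXE clients via DHCP option 60 instead
wdsutil /Set-Server /UseDhcpPorts:No /DhcpOption60:Yes
```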
If the WDS Image Capture Wizard is unable to capture a reference computer image, restart the reference computer and run sysprep /generalize /oobe.
The Windows Automated Installation Kit (WAIK) replaces deploy.cab and contains updated versions of tools previously provided to OEMs (e.g. Windows PE) for use in corporate deployments.
The OEM Preinstallation Toolkit (OPK) is for system builders, containing the WAIK and additional OEM-specific information (e.g. OEM licensing).
bootsect.exe is used to enable deployment alongside earlier versions of Windows with the Windows Vista boot manager (bootmgr.exe) – it replaces fixfat.exe and fixntfs.exe (both included with Windows Vista). Microsoft knowledge base article 919529 has more details.
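For example, to switch the boot code on the system partition (SYS targets the partition used to boot Windows; ALL or a drive letter may be used instead):

```
rem Write the Windows Vista (BOOTMGR-compatible) master boot code
bootsect /nt60 SYS

rem ...or revert to NTLDR-compatible boot code for earlier Windows versions
bootsect /nt52 SYS
```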
The OMPM file scanner (offscan.exe) can be used to identify documents that are not compatible with Office 2007 – a light scan is used to locate Office documents whilst a deep scan can examine documents for potential conversion issues.
Earn enough money to replace my Mac Mini so that it could move to the living room.
Negotiate the wife approval factor for IT in a common area of the house (must look good – hence Mac Mini).
Find a software setup that works for me, but is also consumer-friendly for the rest of the family (i.e. no hint of a Windows, OS X or Linux interface).
Fortunately, the first two items came together for me quite easily – after I decided to raid my savings and buy a MacBook a couple of months back, my wife asked me which PC it was replacing and I said “that one” (pointing at the Mac Mini), never expecting the response that she gave – “Oh. I like that one. It’s cute.”! This was a revelation to me – my wife has never before referred to any of my IT as “cute” – so I grabbed the moment to say something like “yeah, I thought it could go in the living room for when we watch films and stuff” (and the lack of any objection was interpreted as implicit approval).
The hardest part was the software setup. I still feel that Windows Vista’s Media Center capabilities are vastly superior to Apple’s Front Row – there’s TV support in there for starters. There are other options too: EyeTV would add TV support to the Mac (the problem is that it only has a 1 year TV guide subscription in Europe); Center Stage looks promising, but is still an alpha release product (and has been for a while now); Myth TV could work too (but my research suggested that USB TV tuner support could be a bit of a ‘mare). Then I thought about it a bit harder – we have lousy terrestrial TV support in my house (digital or analogue), so the clearest TV signal I have is on satellite (Freesat from Sky). Unless I can find a way to interface the Mac with my digibox, TV on the living room PC is not going to happen (this solution looks interesting but is for Windows/Linux only). Which meant that the criteria for a living room PC were:
Access to my iTunes library (which lives on my MacBook) to watch podcasts, listen to music, etc.
Ability to play content from CD/DVD.
Ability to play content ripped from DVD or obtained by other means (e.g. home movies or legally obtained digital downloads).
Simple (no technical skills required) user interface.
So, ruling out the need for TV integration meant that Apple Front Row was suddenly a contender as the OS X 10.5 (Leopard) version of Front Row can access shared libraries from other PCs (so I don’t have to copy/convert the media) and the remote control supplied with the Mac Mini does not rely on line of sight to control the PC. Vista Media Center could do this but I’d need to have that huge IR receiver and ugly remote control. Of course, I could just buy an Apple TV, but the Mac Mini gives me so much more (and it can output to my aging, but still rather good Sony Trinitron 32″ widescreen TV). I will stress though, that if I ever manage to get a decent TV signal from our aerial, Windows Vista Media Center would beat Front Row hands down.
It’s ironic that I’m writing this in a hotel room in Bolton (nowhere near my living room) but so far, the software stack on the Mini is:
Mac OS X 10.5 (including Front Row, accessible via the supplied remote control).
I may have to add a few more codecs over time (but Perian seems to include most of what I need) and, furthermore, this headless PC (sorry, Mac – for the purists out there) is suiting my requirements pretty well. I’ve watched films from the hard disk with no issues at all, and streamed video podcast (and audio) content from my MacBook across an 802.11g Wi-Fi network (with no apparent playback issues – despite the signal having to travel through several walls and to the furthest corner of my living room – although I wish iTunes would mark podcasts as played when viewed remotely). There is one caveat though (and that’s a hardware issue) – even though the Leopard version of Front Row supports DVD playback, I still watch DVD content on my home theatre setup as that gives me true 5.1 surround sound (it’s only stereo on the Mac Mini).
Other applications that will probably find their way onto the mini over time include:
Those are for the future though – at this point in time, I’m ripping my content (and performing iPod conversion as required) on other machines and transferring it to the Mini across the network (see below). I also tried running a Windows XP virtual machine for downloading from BBC iPlayer and Channel 4 On Demand but didn’t manage to set up the tools that are necessary to remove the Windows Media DRM in order to play the content on the Mac (I do at least have a PC running older software releases that I can use for that).
As for getting media onto the living room PC (the stuff that I don’t want to stream across the network), I can always plug in a USB drive and control it remotely using the screen sharing capabilities in OS X.
OS X screen sharing is only VNC but it works well across my network (and scales the display accordingly – it even gave a decent representation of my 1680×1050 resolution display downscaled to fit on a 1280×800 MacBook display, although the picture here is with the Mac Mini hooked up to a standard definition TV). One point to note – it’s necessary to disconnect the shared screen session before trying to control the living room PC with the remote control or else it won’t work.
It’s been a long time coming – almost 4 years – but Microsoft has just announced that Windows XP service pack 3 has been released to manufacturing.
In the announcement, Release Manager, Chris Keroack, said:
“Today we are happy to announce that Windows XP Service Pack 3 (SP3) has released to manufacturing (RTM). Windows XP SP3 bits are now working their way through our manufacturing channels to be available to OEM and Enterprise customers.
We are also in the final stages of preparing for release to the web […] on April 29th, via Windows Update and the Microsoft Download Center. Online documentation for Windows XP SP3, such as Microsoft Knowledge Base articles and the Microsoft TechNet Windows XP TechCenter, will be updated then. For customers who use Windows XP at home, Windows XP SP3 Automatic Update distribution for users at home will begin in early summer.”
The comments make interesting reading – I recommend a read but will warn you that there are 111 of them, so you’d better be good at skim reading!
There are lots of useful analogies there (and the general consensus seems to be that, if a Wi-Fi access point is open, then you are inviting people to come in – especially with most wireless cards configured to connect to the strongest available signal – and that, if it’s secured, then it is clearly a private computer system) but I found a few of them particularly interesting after reading Section 1 of the Computer Misuse Act, 1990 (I’m sure other laws can equally be applied):
Unauthorised access to computer material
(1) A person is guilty of an offence if—
(a) he causes a computer to perform any function with intent to secure access to any program or data held in any computer;
(b) the access he intends to secure is unauthorised; and
(c) he knows at the time when he causes the computer to perform the function that that is the case.
(2) The intent a person has to have to commit an offence under this section need not be directed at—
(a) any particular program or data;
(b) a program or data of any particular kind; or
(c) a program or data held in any particular computer.
(3) A person guilty of an offence under this section shall be liable on summary conviction to imprisonment for a term not exceeding six months or to a fine not exceeding level 5 on the standard scale or to both.
Based on this it could be argued that, if an access point is broadcasting SSIDs and is unencrypted, then a person cannot know that the access that they intend to secure is unauthorised. It could also be argued that, by broadcasting its presence, the access point accessed any computers with wireless cards in the area without their respective owners’ permissions. Or consider, as another commenter highlighted, what happens when pinging a computer’s IP address – is that not requiring the other computer to perform an action (even if that action is to reject the ping, it still has to read the packet)? What about accessing a web server – did I explicitly give you permission to come here and read this article? No, but by publishing this website, I gave implicit permission, which is expanded further in my legal notice. Ergo, by leaving a wireless access point open and broadcasting its SSID, I would be giving implicit permission to access it.
I know there’s at least one Copper who reads this blog and I’m sure he has an opinion. As of course, do I. And that’s why I locked down my Wi-Fi.
Usual caveats apply: I am not a lawyer; don’t interpret anything you read here as legal advice; etc., etc.
Last year, I wrote a post about free Wi-Fi provision in central Milton Keynes. I wasn’t very impressed (although I’d like to see the service prosper) but have to admit that I haven’t tried it since. In the same post, I also mentioned that there was a WiMax trial planned for Milton Keynes and a few weeks back, after hearing nothing for over a year, I received an e-mail to tell me that it is now available in my area.
This sounded good – I have “up to 8Mbps” ADSL at home and my router tells me that I get about 7.2Mbps downstream with about 448Kbps upstream, but if I could get good upstream bandwidth too then that would be an advantage. Then I noticed two things that put me off.
Firstly, the service is provided by Connect MK – who claim to be:
“A Council company created to provide better broadband services for Milton Keynes”
WTF! Milton Keynes Council appears to me to be incapable of managing anything of any substance (of course, that is purely a personal opinion, based on my experience as a Council Tax payer). In the small town where I live (under the control of the unitary authority that is Milton Keynes Council) we have: a secondary school that opened 8 months late and £3m over budget [source: political propaganda for the upcoming local elections], with design changes that mean it stands out like a blot on our (pleasant) landscape; a backlog of road repairs; short-sighted planning decisions with councillors supporting further expansion without any of the supporting infrastructure (including the grid road system that has worked so well for the last 30 years in urban Milton Keynes); etc., etc. (my list could go on and on, but let’s stop here – you get the idea). Now the same council wants to provide network infrastructure services. It’s not 1 April is it? Not according to my calendar anyway.
Secondly, the price: a 1Mbps downstream/512Kbps upstream package with a 10GB download limit is advertised for £20 a month; 2Mbps down and 512Kbps up with a 20GB allowance is £25; but 2Mbps down and 1Mbps up with a 40GB allowance is a staggering £50 a month! Are they joking?
As it happens, Connect MK is a reseller for the infrastructure provided by FREEDOM4 (formerly Pipex Communications). Interestingly, despite having supplied my home address and postcode details to Pipex and Connect MK having e-mailed me to say “Great news – You can now receive a WiMAX Broadband Service”, neither the current FREEDOM4 coverage map nor the coverage checker on their website indicates that I can receive the service – at this time it only seems to cover urban areas of Milton Keynes. It doesn’t say much for Connect MK’s ability to provide a reliable service when they haven’t even worked out that I live 10 miles outside their coverage area.
Regardless of the network coverage, I fail to see who would even consider the Connect MK WiMax service as an alternative to ADSL or cable. At the prices quoted, I can’t imagine much of Milton Keynes’ population getting connected with Connect MK.
I used to use WSUS to update the machines on my home network but after a botched server upgrade, it all went screwy and I didn’t really want to have to pull all the updates down over my ADSL connection again (would probably blow away my month’s worth of “fair usage”). In any case all I was doing was blindly approving updates for installation so I might as well use the Microsoft Update servers instead.
The only downside of using the Microsoft Update servers to update several computers is that there is a lot of duplication in the network traffic. That’s why ISA Server 2006 includes a cache rule that enables caching of Microsoft updates using the Background Intelligent Transfer Service (BITS). For those who aren’t aware, BITS allows the transfer of large volumes of data without degrading network performance as it transfers the data in small chunks to utilise unused bandwidth as it becomes available and reassembles the data at the destination. The BITS feature is not available for any other ISA cache rule but the Microsoft Update Cache Rule is installed by default and all I needed to do was enable caching.
CACHEDIR.exe – Unable to Locate Component
This application has failed to start because msfpc.DLL was not found. Re-installing the application may fix the problem.
Then I remembered that cachedir.exe needs to be copied to the ISA Server installation folder (on my system that is %programfiles%\Microsoft ISA Server) – after moving the file to the correct folder, it fired up as expected. Just remember that this utility can only display the cache contents that have been written to disk. To flush the memory cache to disk you will need to restart the Microsoft Firewall service and re-run cachedir.exe to view the contents.
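The flush-and-view sequence might look like this (fwsrv is, to the best of my knowledge, the short name of the Microsoft Firewall service – check with sc query if in doubt):

```
rem Restart the Microsoft Firewall service to flush the in-memory cache to disk
net stop fwsrv
net start fwsrv

rem Then view the on-disk cache contents (cachedir.exe must be in the ISA
rem Server installation folder, or msfpc.dll will not be found)
"%programfiles%\Microsoft ISA Server\cachedir.exe"
```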
Basic inspection using Task Manager showed that neither the virtual nor the physical system was stressed from a memory or CPU perspective but the disk access light was on continuously, suggesting that the application was IO-bound (as might be expected with a database-driven application). As I was also running low on physical disk space, I considered whether moving the VM to an external disk would improve performance.
On the face of it, spreading IO across disk spindles should improve performance but, with SATA hard disk interfaces providing a theoretical data transfer rate of 1.5-3.0Gbps and USB 2.0 supporting 480Mbps, my external (USB-attached) drive is, on paper at least, likely to deliver lower throughput than the internal disk. That’s not the whole story though – once you factor in the consideration that standard notebook hard drives are slow (4200 or 5400RPM), this becomes less of a concern as the theoretical throughput of the disk controller suddenly looks far less attainable (my primary hard drive maxes out at 600Mbps). Then consider that actual hard disk performance under Windows is determined not only by the speed of the drive but also by factors such as the motherboard chipset, UDMA/PIO mode, RAID configuration, CPU speed, RAM size and even the quality of the drivers – it’s far from straightforward.
I decided to take a deeper look into this. I should caveat this with a note that performance testing is not my forte but I armed myself with a couple of utilities that are free for non-commercial use – Disk Thruput Tester (DiskTT.exe) and HD Tune.
Both disks were attached to the same PC, a Fujitsu-Siemens S7210 with a 2.2GHz Intel Mobile Core 2 Duo (Merom) CPU, 4GB RAM and two 2.5″ SATA hard disks but the internal disk was a Western Digital Scorpio WD1200BEVS-22USTO whilst the external was a Fujitsu MHY2120BH in a Freecom ToughDrive enclosure.
My (admittedly basic) testing revealed that although the USB device was a little slower on sequential reads, and quite a bit slower on sequential writes, the random access figure was very similar:
Internal (SATA) disk
External (USB) disk
Testing was performed using a 1024MB file, in 1024 chunks and the cache was flushed after writing. No work was performed on the PC during testing (background processes only). Subsequent re-runs produced similar test results.
Something doesn’t quite stack up here though. My drive is supposed to max out at 600Mbps (not MBps) so I put the strange results down to running a 32-bit application on 64-bit Windows and ran a different test using HD Tune. This gave some interesting results too:
Internal (SATA) disk
External (USB) disk
Minimum transfer rate
Maximum transfer rate
Average transfer rate
Based on these figures, the USB-attached disk is slower than the internal disk but what I found interesting was the graph that HD Tune produced – the USB-attached disk was producing more-or-less consistent results across the whole drive whereas the internal disk tailed off considerably through the test.
There’s a huge difference between benchmark testing and practical use though – I needed to know if the USB disk was still slower than the internal one when it ran with a real workload. I don’t have any sophisticated load testing tools (or experience) so I decided to use the reliability and performance (performance monitor) capabilities in Windows Server 2008 to measure the performance of two identical virtual machines, each running on a different disk.
Brent Ozar has written a good article on using perfmon for SQL performance testing and, whilst my application is running on SQL Server (so the article may help me find bottlenecks if I’m still having issues later), by now I was more interested in the effect of moving the virtual machine between disks. It did suggest some useful counters to use though:
Memory – Available MBytes
Paging File – % Usage
Physical Disk – % Disk Time
Physical Disk – Avg. Disk Queue Length
Physical Disk – Avg. Disk sec/Read
Physical Disk – Avg. Disk sec/Write
Physical Disk – Disk Reads/sec
Physical Disk – Disk Writes/sec
Processor – % Processor Time
System – Processor Queue Length
I set this up to monitor both my internal and external disks, and to log to a third external disk so as to minimise the impact of the logging on the test.
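Rather than clicking through the GUI, a data collector along those lines can also be created with logman (a sketch – the counter list is abbreviated and the collector name, sample interval and output path are illustrative; the output deliberately goes to a third disk):

```
rem Create a counter log sampling every 5 seconds, logging to a third disk
logman create counter VMDiskTest -si 5 -o F:\perflogs\vmdisktest ^
  -c "\PhysicalDisk(*)\% Disk Time" "\PhysicalDisk(*)\Avg. Disk Queue Length" ^
     "\PhysicalDisk(*)\Disk Reads/sec" "\PhysicalDisk(*)\Disk Writes/sec"

rem Start the collection before the test run, and stop it afterwards
logman start VMDiskTest
logman stop VMDiskTest
```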
Starting from the same snapshot, I ran the VM on the external disk and monitored the performance as I started the VM, waited for the Windows Vista Welcome screen and then shut it down again. I then repeated the test with another copy of the same VM, from the same snapshot, but running on the internal disk.
Sadly, when I opened the performance monitor file that the data collector had created, the disk counters had not been recorded (which was disappointing) but I did notice that the test had run for 4 minutes and 44 seconds on the internal disk and only taken 3 minutes and 58 seconds on the external one, suggesting that the external disk was actually faster in practice.
I’ll admit that this testing is hardly scientific – I did say that performance testing is not my forte. Ideally I’d research this further and I’ve already spent more time on this than I intended to but, on the face of it, using the slower USB-attached hard disk still seems to improve VM performance because the disk is dedicated to that VM and not being shared with the operating system.
I’d be interested to hear other people’s comments and experience in this area.
I was at an event last week where Gareth Hall, UK Product Manager for Windows Server 2008, commented on the product’s fantastic press reviews, with even Jon Honeyball (who it seems is well known for his less-than-complimentary response to Microsoft’s output of late) commenting that:
“Server 2008 excels in just about every area [… and] is certainly ready for prime time. There’s no need to wait for Service Pack 1”
It seems that, wherever you look, Windows Server 2008 is almost universally acclaimed. And rightly so – I believe that it is a fantastic operating system release (let’s face it, Windows Server 2003 and R2 were very good too) and is packed full of features that have the potential to add significant value to solutions.
So, tell me, why are the same journalists who think Windows Server 2008 is great, still berating Windows Vista – the client version of the same operating system codebase? Sure, Vista is for a different market, Vista has different features, and it’s only fair to say that Vista took some time to bed down, but after more than a year of continuous updates and a major service pack is it really that bad?
The trouble is that Microsoft has muddied the water by dropping hints about what the future may hold. What was once arguably the world’s biggest and best marketing machine seems to have lost its way recently – either maintain the silence and keep us guessing what Windows 7 means, or open up and let us decide whether it’s worth the wait. With the current situation, IT Managers are confused: the press are, by and large, critical of Vista; consumers and early adopters have complained of poor device support (not Microsoft’s fault); and even Microsoft seems ready to forget about pushing their current client operating system and move on to the next big thing.
In all my roles – as a consultant, an infrastructure architect, a Microsoft partner and of course as a blogger, I’d love to know more about Windows 7 – and Microsoft does need to be more transparent if it expects customers to make a decision. Instead, they seem to be hoping that hints of something new that’s not Vista will help to sell Enterprise Agreements (complete with Software Assurance) to corporates.
The trouble with running Microsoft Hyper-V on a notebook PC is that notebook PCs typically don’t have large hard disks. Add a few snapshots and a virtual machine (VM) can quickly run into tens or even hundreds of gigabytes and that meant that I needed to move my VMs onto an external hard disk.
In theory at least, there should also be a performance increase from moving the VMs off the system disk and onto a separate spindle; however, that’s not straightforward on a notebook PC as a second disk will normally be external (and therefore use a slower USB 2.0 interface, rather than the internal SATA controller). Anyway, in my case, disk space was more important than any potential performance hit.
The exported VM is still not ready to run though – it needs to be imported again but the import operation is faster as it doesn’t involve copying the .VHD file (and any associated snapshots) to a new location. After checking that the newly imported VM (with disk and snapshot storage on the external drive) would fire up, I deleted the original version. Or, more accurately, I would have done if I hadn’t run out of disk space in the meantime (Windows Server 2008 doesn’t like it when you leave it with only a few MB of free space).
Deleting VMs is normally straightforward, but my machine got stuck half way through the “destroy” process (due to the lack of hard disk space upsetting my system’s stability) and I failed to recover from this, so I manually deleted the files and restarted. At this point, Hyper-V Manager thought that the original VM was still present but any attempt to modify VM settings resulted in an error (not surprising as I’d deleted the virtual machine’s configuration file and the virtual hard disks). What I hadn’t removed though was the shortcut (symbolic link) pointing to the virtual machine files on my external hard disk. Deleting this file from %systemdrive%\ProgramData\Microsoft\Windows\Hyper-V\Virtual Machines and refreshing Hyper-V Manager left me with a clean management console again.
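For anyone in the same position, the clean-up looks something like this (the GUID filename is whatever Hyper-V generated for the VM – check the folder listing rather than guessing):

```
rem List the per-VM configuration links to identify the orphan...
dir "%systemdrive%\ProgramData\Microsoft\Windows\Hyper-V\Virtual Machines"

rem ...then delete the stale link and refresh Hyper-V Manager
del "%systemdrive%\ProgramData\Microsoft\Windows\Hyper-V\Virtual Machines\<VM GUID>.xml"
```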