Microsoft Hyper-V: A reminder of where we’re at

Earlier this week I saw a tweet from the MIX 2011 conference that highlighted how Microsoft’s Office 365 software-as-a-service platform runs entirely on its Hyper-V hypervisor.

There are those (generally those who have a big investment in VMware technologies) who say Microsoft’s hypervisor lacks the features to make it suitable for use in the enterprise. I don’t know how much bigger you have to get than Office 365, but the choice of hypervisor is becoming less and less relevant as we move away from infrastructure and concentrate more on the platform.

Even so, now that Hyper-V has reached the magical version 3 milestone (at which people generally start to accept Microsoft products) I thought it was worth a post to look back at where Hyper-V has come from, and where it’s at now:

Looking at some of the technical features:

  • Dynamic memory requires Windows Server 2003 SP2 or later in the guest (and is not yet supported for Linux guests). It’s important to understand the difference between oversubscription and overcommitment.
  • Performance is now so close between hypervisors that it is no longer a differentiator.
  • Hyper-V uses Windows clustering for high availability – the same technology as is used for live migration.
  • In terms of storage scalability, it’s up to the customer to choose how to slice and dice storage, with partner support for multipathing, hardware snapshotting, etc. Hyper-V users can have one LUN per VM, or one LUN for 1,000 VMs (although, of course, no-one would actually do the latter).
  • Networking also uses the partner ecosystem – for example, HP provides software to allow NIC teaming on its servers, and a Hyper-V virtual switch can be pointed at the resulting team.
  • In terms of data protection, the volume shadow copy service on the host is used and there are a number of choices to make around agent placement: a single agent can be deployed to the host, with all guests protected (allowing whole-machine recovery), or guests can have their own agents to allow backups at the application level (for Exchange, SQL Server, etc.).

I’m sure that competitor products may have a longer list of features but, in terms of capability, Hyper-V is “good enough” for most scenarios I can think of. I’d be interested to hear what barriers to enterprise adoption people see for Hyper-V.

Installing Ubuntu (10.04) on Windows Virtual PC

I use a Windows 7 notebook at work but, sometimes, it’s just easier to drop back into a Unix or Linux machine – for example when I was checking out command line Twitter clients a few days ago (yes, there is a Windows one, but Twidge is more functional).  After all, as one of my friends at Microsoft reminds me, it is just an operating system…

Anyway, I wanted to install Ubuntu 10.04 in a virtual machine and, as I have Windows Virtual PC installed on my notebook, I didn’t want to use another virtual machine manager (most of the advice on the subject seems to suggest using VirtualBox or VMware Workstation, which is a workaround – not a solution).  My first attempts were unsuccessful but then I stumbled upon a forum thread that helped me considerably – thanks to MrDerekBush and pessimism on the Ubuntu forums – this is what I found I needed to do:

  1. Create a virtual machine in Windows Virtual PC as normal – it’s fine to use a dynamic disk – and boot from an Ubuntu disk image (i.e. an ISO, or physical media).
  2. At the language selection screen, hit Escape, then F6 and bring up the boot options string.  Delete the part that says quiet splash -- and replace it with vga=788 noreplace-paravirt (other vga boot codes may work too).
  3. Select the option to try Ubuntu without installing then, once the desktop environment is fully loaded, select the option to install Ubuntu and follow the prompts.
  4. At the end of the installation, do not restart the virtual machine – there are some changes required to the boot loader (and Ubuntu 10.04 uses GRUB 2, so some of the advice on the ’net does not apply).
  5. From Places, double click the icon that represents the virtual hard disk (probably something like 135GB file system if you have a default sized virtual hard disk). Then, open a Terminal session and type mount, to get the volume identifier.
  6. Enter the following commands:
    sudo mount -o bind /dev /media/volumeidentifier/dev
    sudo chroot /media/volumeidentifier/ /bin/bash
    mount -t proc none /proc
    nano /etc/default/grub
  7. Replace quiet splash with vga=788 and comment out the GRUB_HIDDEN_TIMEOUT line (using #) in /etc/default/grub, then save the file and exit nano (the sketch after this list shows roughly how the edited files end up).
  8. Enter the following command:
    nano /etc/grub.d/10_linux
  9. In the linux_entry section, change args="$4" to args="$4 noreplace-paravirt", then save the file and exit nano.
  10. Enter the update-grub command and ignore any error messages about not being able to find the list of partitions.
  11. Shut down the virtual machine.  At this point I was left with a message about Casper resyncing snapshots and, even after leaving the VM for a considerable period, it did not progress further.  I hibernated the VM and, when I resumed it, it rebooted and Ubuntu loaded as normal.
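
For reference, this is roughly how the edited files ended up on my installation – a sketch based on the stock Ubuntu 10.04 files, so your existing values may differ slightly:

    # /etc/default/grub (quiet splash replaced, hidden timeout commented out)
    GRUB_CMDLINE_LINUX_DEFAULT="vga=788"
    #GRUB_HIDDEN_TIMEOUT=0

    # /etc/grub.d/10_linux (inside the linux_entry section)
    args="$4 noreplace-paravirt"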

There are still a few things I need to sort out: there are no Virtual Machine Additions for Linux on Virtual PC (only for Hyper-V), which means no mouse/keyboard integration; and the Ctrl-Alt-left arrow release key combination clashes with the defaults for Intel graphics card drivers (there are some useful Virtual PC keyboard shortcuts).  Even so, getting the OS up and running is a start!

Windows 7, BitLocker, Ubuntu, and the case of the missing disk partitions

Last Thursday was probably best described as “a bad IT day” – over the course of the day I “lost” the partition structure on my netbook’s hard disk, and then got it back again.  It all started because I wanted to dual-boot Windows 7 and Ubuntu – and, although I’ve still not managed to achieve that goal, I did learn a bit about recovering Windows along the way…

Since last October, my netbook has been running Windows 7 Ultimate Edition with a BitLocker encrypted hard drive.  It’s been working well but I wanted to fire up an installation of Ubuntu from time to time, so I decided to see if I could dual-boot the two systems.  Clearly I wouldn’t be able to run Ubuntu from a partition that Windows had encrypted (I did briefly consider running Linux as a VM) but I was able to shrink the Windows partition in the Disk Management console and free up around 60GB of hard disk space, with a view to following Microsoft’s advice for dual booting Windows and Linux with BitLocker enabled (although my netbook does not have a TPM so I’m not sure if it would work for me).

I tried to run the installer for Ubuntu 10.04 Netbook Edition but it saw my disk as one chunk of unallocated space, with no existing operating systems installed.  As I knew there were two NTFS partitions there and I didn’t want to wipe them, I quit the installer and rebooted into Windows.

It seemed logical that BitLocker was preventing Ubuntu from seeing the true state of the disk, so I first tried disabling BitLocker, and then removing it altogether (the difference between disabling and removing BitLocker is described on the Microsoft website).  Unfortunately that didn’t make any difference: as far as Ubuntu was concerned, the disk was entirely free for it to do with as it liked.

I checked in Windows and, as I thought, it was a basic disk (not dynamic), so I tried rewriting the master boot record (MBR) using the bootrec.exe utility with the /fixmbr switch (as described in Microsoft knowledge base article 927392).  That still didn’t help and, after crowdsourcing for advice I tried a number of utilities to take a look at the disk:

  • Acronis Disk Director Suite agreed with Windows – it saw that I had 64.42GB of unallocated space at the end of the disk, plus two primary NTFS partitions (100MB System Reserved and 84.53GB with Windows 7 on it).  It also confirmed that the disk was an MBR type (signature 0xAA55).
  • Ubuntu’s disk utility also saw the NTFS partitions, but thought the disk was a GUID Partition Table (GPT) and complained when I tried to create an ext4 partition in the free space (the message included reference to “MS-DOS Magic” and said that the disk looked like a GPT disk with the remains of an MBR layout present).
  • Gnome Partition Editor (GPartEd), when run from the Ubuntu installer CD, thought the disk was 149.05GiB of unallocated space, but, when run from the GPartEd Live CD, it saw the NTFS partitions (as unformatted) and even allowed me to create an ext4 partition in the free space at the end of the disk.  Unfortunately that also prevented Windows from booting…

At this point, I had no operating system at all, so I booted from a Windows 7 System Repair Disc I had created earlier (just in case).  I tried to repair my system but the message I got back said:

This version of System Recovery Options is not compatible with the version of Windows you are trying to repair.  Try using a recovery disc that is compatible with this version of Windows.

Not too many clues there then… only when I tried to reinstall Windows (thinking it might put the OS back and move the old installation to Windows.old or something similar) did Windows Setup give me a helpful message to tell me that the disk had a GPT layout and that Windows couldn’t be installed onto such a disk.  It turns out that was a result of my efforts to create an ext4 partition in the free space at the end of the disk: sure enough, when I booted into Windows PE and ran diskpart.exe, disk 0 was showing as type GPT.  Thankfully, diskpart.exe was also happy for me to run the convert mbr command (also described in Microsoft knowledge base article 282793), after which I could run the Startup Repair recovery tool from the System Repair Disc.  At the end of this, I restarted the netbook and Windows 7 came up as if nothing was wrong… phew!
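
For anyone who ends up in the same position, the diskpart session was along these lines (run from the command prompt on the System Repair Disc; note that convert mbr only succeeds if diskpart believes the disk has no partitions, so list partition first and make very sure you have selected the right disk):

    diskpart
    DISKPART> list disk
    DISKPART> select disk 0
    DISKPART> list partition
    DISKPART> convert mbr
    DISKPART> exit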

Incidentally, if Windows thinks the disk is not empty, it will not convert from GPT to MBR, so the trick is to use something like GPartEd to make a change (write a partition in the empty space and then remove it again), after which the convert mbr command will work.  I know this, because I borked the system again as I tried once more to see if either the Ubuntu Disk Utility or GPartEd would create the disk layout that I required.  They didn’t, and the Ubuntu installer still refused to recognise the existing NTFS partitions (Ubuntu’s dual-boot advice doesn’t really seem to explain why either).

I’ve tried the alternative installer for Ubuntu Desktop too but that also sees the disk as one big 160GB lump of free space.  Unfortunately the Ubuntu installer for Windows (WUBI) won’t help as it installs Ubuntu side by side on the same partition as Windows and, critically for a portable device, does not support hibernation.  What I still don’t understand is why Linux utilities think the disk is a GPT disk, and Windows sees it as MBR… as William Hilsum tweeted to me, “[perhaps] Bitlocker does more at the disk level than [he] thought”.

Moving to Linux from Solaris?

Oracle’s acquisition of Sun Microsystems has probably caused a few concerns for Sun customers and, today, in a message to Sun customers, Oracle reaffirmed its commitment to Sun’s Solaris operating system and SPARC-based hardware, with a statement from Oracle CEO Larry Ellison telling customers that Oracle plans to invest in the Solaris and SPARC platforms, including tighter integration with Oracle software.

Nevertheless, as one of my colleagues pointed out to me today, it costs billions to create new microprocessors and more and more customers are starting to consider running Unix workloads on industry standard x64 servers. With significant differences between the x64 and SPARC versions of Solaris, it’s not surprising that, faced with an uncertain future for Sun, organisations may consider Linux as an alternative and, a few weeks back, I attended a webcast given by Red Hat and HP that looked at some of the issues.

I’m no Linux or Unix expert but I made a few notes during that webcast, and thought they might be useful to others so I’m sharing them here:

  • Reasons to migrate to Linux (from Unix):
    • Tactical business drivers: reduce support costs (on end of life server hardware); increase capability with upgraded enterprise applications; improve performance and reduce cost by porting custom applications.
    • Strategic business drivers: simplify and consolidate; accelerate service development; reduce infrastructure costs.
  • HP has some tools to assist with the transition, including:
    • Solaris to Linux software transition kit (STK) – although aimed at migrating to HP-UX, the joint HP/Red Hat presentation suggested using it to plan and estimate the effort involved in migrating to Linux, with C and C++ source code, shell scripts and makefiles for tools that can scan applications for transition impacts.
    • Solaris to Linux porting kit (SLPK) – which includes compiler tools, libraries, header files and source scanners to recompile Solaris applications for either Red Hat or SUSE Linux running on HP ProLiant servers.
  • The top 5 issues likely to affect a transition are:
    1. Compiler issues – differing development environments.
    2. ANSI C/C++ compliance – depending on the conformance to standards of the source code and compiler, there may be interface changes, namespace changes, library differences, and warnings may become errors.
    3. Endianness – SPARC is big-endian whereas Linux on x64 is little-endian.  This is most likely to affect data exchange/communications between platforms, access to shared memory, and binary data structures in files.
    4. Differences in commands, system calls and tools – whether they are user level commands (e.g. a Solaris ping will return a message that a host is alive whereas a Linux ping will continue until interrupted), systems management commands, system API calls, software management (for package management) or operating system layered products (e.g. performance management, high availability or systems management).
    5. ISV product migration with issues around the availability of Linux versions of software; upgrades; and data migration.
  • When planning a migration, the strategic activities are:
    • Solaris vs. Linux ecosystem analysis.
    • Functional application analysis.
    • Organisational readiness and risk analysis.
    • Strategic migration roadmap creation.
    • Migration implementation.
  • Because of the differences between operating systems, it may be that some built-in functions need to be replaced by infrastructure applications (or vice versa). Indeed, there are four main scenarios to consider:
    • Solaris built-in function to Linux built-in function (and although many functions may map directly others, e.g. virtualisation approaches, may differ).
    • Solaris infrastructure application to Linux built-in function.
    • Solaris infrastructure application to Linux infrastructure application.
    • Solaris built-in function to Linux infrastructure application.
  • Finally, when it comes to deployment there are also a number of scenarios to consider:
    • Consolidation: Many Sun servers (e.g. Enterprise 420R) to fewer industry standard servers (e.g. HP ProLiant).
    • Aggregation: Sun Fire V890/V490 servers to Itanium-based servers (e.g. HP Integrity).
    • Dispersion: Sun Fire E25K server(s) to several industry standard servers (e.g. HP ProLiant).
    • Cloud migration: Sun servers to Linux-based cloud offerings, such as those offered by Amazon, RackSpace, etc.

Many of these notes would be equally applicable when migrating between Unix variants – and at least there are tools available to assist. But, now I come to think of it, I guess the same approach can be applied to migrating from Unix/Linux to Windows Server… oh, look out, are those flaming torches and pitchforks I see being brandished in my direction?

Creating an iSCSI target on a Netgear ReadyNAS

A few months ago, I wrote that I was looking for an iSCSI target add-on for my Netgear ReadyNAS Duo. I asked if such an add-on was available on Netgear’s ReadyNAS community forums; however it seems that these are not really a true indication of what is possible, as the moderators are heavily biased towards what Netgear supports, rather than what can be done. Thanks to Garry Martin, who pointed me in the direction of Stefan Rubner’s ReadyNAS port of the iSCSI Enterprise Target Project, I now have a ReadyNAS acting as an iSCSI target.

I have a lot of data on my first ReadyNAS and, even though I backed it all up to a new 1.5TB drive in my server (which will eventually be swapped into the ReadyNAS as part of the next X-RAID upgrade), I wasn’t prepared to risk losing it, so I bought a second ReadyNAS to act as an iSCSI target for serving virtual machine images. In short, don’t run this on your ReadyNAS unless you are reasonably confident at a Linux command prompt and you have a backup of your data. This worked for me but your mileage may vary – and, if it all goes wrong and takes your data with it, please don’t blame me.

First up, I updated my ReadyNAS to the latest software release (at the time of writing, that’s RAIDiator version 4.1.6). Next, I enabled SSH access using the Updates page in FrontView with the EnableRootSSH and ToggleSSH addons (note that these do not actually install any user interface elements: EnableRootSSH does exactly what it says, and when it’s complete the root password will be set to match the admin password; ToggleSSH will enable/disable SSH each time the update is run).

The next step was to install the latest stable version (v0.4.17-1.0.1) of Stefan Rubner’s iSCSI target add-on for ReadyNAS (as for EnableRootSSH and ToggleSSH, it is simply applied as an update in FrontView).

With SSH enabled on the ReadyNAS, I switched to using a Mac (as it has a Unix command prompt which includes an SSH client) but any Unix/Linux PC, or a Windows PC running something like PuTTY will work too:

ssh root@ipaddress

After changing directory to /etc (cd /etc), I checked for an existing ietd.conf file and found that there was an empty one there, as ls -al ie* returned:

-rw-r--r--    1 admin    admin           0 Dec  3  2008 ietd.conf

I renamed this (mv ietd.conf ietd.conf.original) and downloaded a pre-configured version with wget http://readynasfreeware.org/gf/download/docmanfileversion/3/81/ietd.conf before editing the first line (vi ietd.conf) to change the IQN for the iSCSI target (a vi cheat sheet might come in useful here).

As noted in the installation instructions, the most important part of this file is the Lun 0 Path=/c/iscsi_0,Type=fileio entry. I was happy with this filename, but it can be edited if required. Next, I created a 250GB file to act as this iSCSI LUN using dd if=/dev/zero of=/c/iscsi_0 bs=10485760 count=25600. Beware, this takes a long time (I went to the pub, came back, wrote a good chunk of this blog post and it was still chugging away – in all it took just over 4 hours; it’s possible to get some idea of progress by watching the amount of free space reported in FrontView).
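
Putting those pieces together, the edited ietd.conf ends up looking broadly like this (the IQN below is made up for illustration – use whatever naming suits your environment):

    Target iqn.2009-11.com.example:readynas.iscsi0
        Lun 0 Path=/c/iscsi_0,Type=fileio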

At this point, I began to deviate from the installation notes: attempting to run /etc/init.d/rfw-iscsi-target start failed, so I rebooted the ReadyNAS. When I checked the Installed Add-ons page in FrontView I saw that the iSCSI target was already running, although the target was listed as NOT_FOUND and clicking the Configure Targets button seemed to have no effect (I later found that was an IE8 issue – the button produced a pop-up when I ran it from Safari over on my Mac and presumably would have worked in another browser on Windows too).

I changed the target name to /c/iscsi_0, saved the changes, and restarted the ReadyNAS again (just to be sure, although I could have restarted the service from the command line), checking that there was a green light next to the iSCSI target service in FrontView (also running /etc/init.d/rfw-iscsi-target status on the command line).

ReadyNAS iSCSI Target add-on configuration

With the target running, I switched to my client (a Windows Server 2008 computer) and ran the iSCSI initiator, adding a portal on the Discovery tab (using the IP address of the ReadyNAS box and the default port of 3260), then switching to the Targets tab and clicking the Refresh button. I selected my target and clicked Log On… waiting with bated breath.

Windows iSCSI initiator Discovery tab

iSCSI target exposed in Disk Management

There were no error messages to suggest anything was wrong, so I switched to Server Manager and saw a new 250GB unallocated disk in Disk Management, which I then brought online and initialised.

Finally, I updated /etc/rc6.d/S99reboot to include /etc/init.d/rfw-iscsi-target stop just before the line that says # Save the last 5 ecounters by date.
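
For clarity, that means the end of /etc/rc6.d/S99reboot now reads something like this (the comment line is quoted as it appears in the script on my ReadyNAS; the stop command is the line I added):

    /etc/init.d/rfw-iscsi-target stop
    # Save the last 5 ecounters by date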

Red Hat Enterprise Virtualisation (aka “me too!”)

Earlier this month, I managed to attend a Red Hat webcast about their forthcoming virtualisation products. Although Red Hat Enterprise Linux has included the Xen hypervisor used by Citrix for a while now (as do other Linux distros), it seems that Red Hat wants to play in the enterprise virtualisation space with a new platform and management tools, directly competing with Citrix XenServer/Essentials, Microsoft Hyper-V/System Center Virtual Machine Manager and parts of the VMware portfolio.

Red Hat Enterprise Virtualisation (RHEV) is scheduled for release in late 2009 and is currently in private beta. It’s a standalone hypervisor, based on a RHEL kernel with KVM, and is expected to be less than 100MB in size. Bootable from PXE, flash, local disk or SAN, it will support up to 96 processing cores and 1TB of RAM, with VMs of up to 16 vCPUs and 256GB of RAM. Red Hat is claiming that its high-performance virtual input/output drivers and PCI pass-through direct I/O will allow RHEV to offer 98% of the performance of a physical (bare metal) solution. In addition, RHEV includes the dynamic memory page sharing technology that only Microsoft is unable to offer on its hypervisor right now; SELinux for isolation; live migration; snapshots; and thin provisioning.

As RHEV approaches launch, it is expected that there will be announcements regarding support for Windows operating systems under Microsoft’s Strategic Virtualisation Validation Program (SVVP), ensuring that customers with a heterogeneous environment (so, almost everyone then) are supported on their platform.

Red Hat seems keen to point out that it is not dropping support for Xen, with support continuing through to at least 2014 on an x86 platform; however the reality is that Xen is being dropped in favour of KVM, which runs inside the kernel and is a full type 1 hypervisor, supporting guests from RHEL 3 to 5, and from Windows 2000 to Vista and Server 2008 (presumably soon to include Windows 7 and Server 2008 R2). RHEV is an x64-only solution and makes extensive use of hardware-assisted virtualisation, with directed I/O (Intel VT-d/AMD IOMMU) used for secure PCI passthrough, together with PCI single root I/O virtualisation so that multiple virtual operating systems can achieve native I/O performance for network and block devices.

It all sounds great, but we already have at least three capable hypervisors in the x64 space and they are fast becoming commodity technologies. The real story is with management, and Red Hat is also introducing an RHEV Manager product. In many ways it’s no different to other virtualisation management platforms – offering GUI and CLI interfaces for the usual functionality around live migration, high availability, system scheduling, image deployment, power saving and a maintenance mode – but one feature I was impressed with (that I don’t remember seeing in System Center Virtual Machine Manager, although I may be mistaken) is a search-driven user interface. Whilst many virtual machine management products have the ability to tag virtual machines for grouping, etc., RHEV Manager can return results based on queries such as “show me all the virtualisation hosts running above 85% utilisation”. What it doesn’t have, that SCVMM does (when integrated with SCOM) and that VirtualCenter does (when integrated with Zenoss), is the ability to manage the virtual and physical machine workloads as one; nor can RHEV Manager manage virtual machines running on another virtualisation platform.

The third part of Red Hat’s virtualisation portfolio is RHEV Manager for desktops – a virtual desktop infrastructure offering that uses the Simple Protocol for Independent Computing Environments (SPICE) adaptive remote rendering technology to connect to Red Hat’s own connection broker service from within a web browser client using ActiveX or XPI extensions. In addition to brokering, image management, provisioning, high availability management and event management, RHEV for desktops integrates with LDAP directories (including Microsoft Active Directory) and provides a certificate management console.

Red Hat claims that its VDI experience is indistinguishable from a physical desktop, including 32-bit colour, high quality streaming video, multi-monitor support (up to 4 monitors), bi-directional audio and video (for VoIP and video conferencing), USB device redirection and WAN optimisation compression. Microsoft’s RDP client can now offer most of these features, but it’s the Citrix ICA client that Red Hat really needs to beat.

It does seem that Red Hat has some great new virtualisation products coming out and I’m sure there will be more announcements at next month’s Red Hat Summit in Chicago but now I can see how the VMware guys felt when Microsoft came out with Hyper-V and SCVMM. There is more than a little bit of “me too” here from Red Hat, with, on the face of it, very little true innovation. I’m not writing off RHEV just yet but they really are a little late to market here, with VMware way out in front, Citrix and Microsoft catching up fast, and Red Hat only just getting started.

If Microsoft Windows and Office are no longer relevant then why are #wpc09 and Office 2010 two of the top 10 topics on Twitter right now?

Every now and again, I read somebody claiming that Microsoft is no longer relevant in our increasingly online and connected society and how we’re all moving to a world of cloud computing and device independence where Google and other younger and more agile organisations are going to run our lives. Oh yes and it will also be the year of Linux on the desktop!

Then I spend an afternoon listening to a Microsoft conference keynote, like the PDC ones last autumn/fall (announcing Windows Azure and the next generation of client computing), or today’s Worldwide Partner Conference, and I realise that Microsoft does have a vision and that, under Ray Ozzie’s leadership, they do understand the influence of social networks and other web technologies. That’ll be why, as I’m writing this, the number 6 and 7 topics on Twitter are Office 2010 and #wpc09.

Office 2010 and #WPC09 trending on Twitter

Competition is good (I’m looking forward to seeing how the new Ubuntu Google OS works out and will probably run it on at least one of my machines) but I’m really heartened by some of this afternoon’s announcements (which I’ll write up in another blog post).

Meanwhile, for those who say that Windows 7 will be Microsoft’s last desktop operating system, perhaps this excerpt from a BBC interview with Ray Ozzie will be enough to convince them that the concept of an operating system is not dead… it’s just changing shape:

(Credit is due to Michael Pietroforte at 4sysops for highlighting the existence of this video footage.)

Handy Linux distro that can be built on the fly

A couple of days back, my friend Stuart and I were trying to configure a device via a serial port. You’re probably thinking that’s not so hard, just hook up a console cable, fire up a terminal emulator, make sure you have the right settings and you’re good to go, hey?

Except that neither of us had a serial port on our laptops… and the only PC we had available with a serial port wasn’t configured with an operating system (at least not one we had the password for).

Thanks to a great Linux distribution called Slax, we were able to build a boot CD that included minicom in just a few seconds and, after downloading and burning, we could boot the PC from CD. All it took then was to configure minicom to use /dev/ttyS0 (if we had used a USB-to-serial adapter it would have been /dev/ttyUSB0), with /var/lock for the lockfile, 9600 baud, 8N1, hardware flow control on and software flow control off, and we were connected to the console output (David Sudjiman described something similar to configure his Cisco router from Ubuntu).
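
From memory, the equivalent settings in a saved minicom configuration (~/.minirc.dfl, normally written out via minicom -s and “Save setup as dfl”) look something like the sketch below – I haven’t re-checked the exact key names, and the lockfile directory is set as the Lockfile Location under Serial port setup:

    pu port             /dev/ttyS0
    pu baudrate         9600
    pu bits             8
    pu parity           N
    pu stopbits         1
    pu rtscts           Yes
    pu xonxoff          No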

I’m sure I could have used an alternative, like Knoppix, but the real beauty of Slax was the ability to create a custom build on the fly with just the modules that are required. I could even put it on a USB stick…

Reading around on the ’net afterwards I came across Van Emery’s Linux Serial Console HowTo, which turns things around the other way (using a serial port to get into a Linux machine). I thought it might be fun (in a retro sort of way) to hook up some dumb terminals to a Linux PC but I’m not sure what I’d do with them… web browsing via Lynx? A bit of command line e-mail? Definitely a geek project.

Some more useful Hyper-V links

Regular readers will have realised by now that the frequency of posts on this blog is almost inversely proportional to the amount of my spare time that the day job eats up and, after a period of intense blogging when I had a fairly light workload, the last couple of weeks have left little time for writing (although James Bannan and I did finally record the pilot episode of our new podcast last night… watch this space for more information).

In the absence of my planned post continuing the series on Microsoft Virtualization and looking at application virtualisation (which will make an appearance, just maybe not until next week), here are a few Hyper-V links that might come in useful (supplementing the original list of Hyper-V links I published back in July):

Hate Windows UAC? Have you actually tried the alternatives?

The next time somebody complains about Windows User Account Control (UAC), I’d like them to actually try using a Mac as a standard user (i.e. not the default setting, which is an Administrator, albeit not the root user). I’m in the process of applying Apple’s latest 10 updates, which are huge (I didn’t notice the total for all 10, but it was well over half a gigabyte – just one HP Printer Driver Update was 142MB and the Mac OS X 10.5.5 update is 321MB).

In the intervening time, during which I’ve been writing this post on another PC, I’ve had to enter my Administrator credentials four… five… six times to allow Apple Software Update to do its thing. Mac OS X (and Linux) use a time-based system whereby, once I’ve entered my elevated credentials, they are valid for a set period; by contrast, at least once I’ve told Windows Update that I do want to install a bunch of updates, that process (and any child processes) is allowed to continue unhindered. It seems that the answer for me should really be to use setuid and make Apple Software Update run elevated, but that is not necessarily a good idea either.

I guess there are advantages and disadvantages to either approach (actually, the time-based approach has a significant weakness in that any process can run elevated during that window) but the real point is that UAC is there for our protection – and it’s not really that big a problem in my experience.

Meanwhile, for hardcore Windows users who would like to implement an equivalent of the Linux/OS X setuid command in Vista (or Windows Server 2008, I guess), Joel Bennett explains how to do it with PowerShell.
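
I won’t reproduce Joel’s script here but, for a rough idea of the underlying mechanism, a single process can be launched elevated (triggering a UAC prompt) from PowerShell using the shell’s “runas” verb – a minimal sketch, not Joel’s method, and notepad.exe is just an example target:

    $psi = New-Object System.Diagnostics.ProcessStartInfo "notepad.exe"
    $psi.Verb = "runas"                              # ask the shell to elevate (UAC prompt)
    [System.Diagnostics.Process]::Start($psi) | Out-Null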