Useful Links: February 2009

This content is 15 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A list of items I’ve come across recently that I found potentially useful, interesting, or just plain funny:

Understanding how histograms are used in digital photography

The backlog of photography-related posts for this blog is now almost as long as the virtual pile of half-written IT-related ones, but I’ve been meaning to write something about histograms in relation to digital photography ever since Andy Gailer’s presentation on working with raw digital camera images at my local camera club last year.

For the first few years that I owned a digital camera, I ignored the histogram – quite simply because I didn’t understand it. Since Andy explained it to me, I find it an incredibly useful tool for helping me ensure that I’ve captured the maximum amount of data in my images.

The histogram shows the 256 levels of light (tones) that are present in an image. The peaks indicate a large volume of pixels at a particular tone and the troughs appear where there are fewer pixels. The dark areas are on the left side of the histogram and the lighter areas are on the right.
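To make the idea concrete, here is a minimal sketch (in Python, purely illustrative – not what any particular camera or editor actually runs) of what is happening when the graph is drawn: the pixels are simply counted into 256 bins.

```python
def histogram(pixels):
    """Count how many pixels fall at each of the 256 light levels (0-255)."""
    counts = [0] * 256
    for p in pixels:
        counts[p] += 1
    return counts

# A dark (underexposed) image: most pixels cluster at the left of the graph.
pixels = [12, 15, 15, 40, 40, 40, 200]
h = histogram(pixels)
print(h[40])   # 3 pixels at level 40
print(sum(h))  # 7 - the total always equals the number of pixels in the image
```

Hovering over level 247 in Photoshop, as described below, is just reading one of these bins.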

[Image: a Royal Mail postbox in the snow (underexposed)]

Take, for example, the accompanying picture of a post box outside a gift shop. As can be seen, I took this after a snow fall and I didn’t compensate for the light reflected by the snow. Even though the camera’s automatic white balance would have attempted to make some corrections, the snow shows as grey and the histogram tells me that most of the pixels occur in the darker areas to the left of the graph.

[Image: a histogram from an underexposed image]

My camera will show me a histogram but if I view this in Adobe Photoshop, I can see a little more detail. As well as some statistical data about the mean levels and the standard deviation, I can hover my mouse pointer over the graph and see how many pixels appear for a given level. In this case I can see that at level 247, there are only 24 pixels (out of more than six million in the entire image) – effectively there is very little happening in the highlights.

[Image: adjusting levels using the histogram]

By adjusting the levels, I can alter the highlights (white slider), midtones (grey slider) or shadows (black slider) and improve the overall exposure of the image. Effectively, I set the black and white points and adjust the contrast.
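The slider behaviour can be expressed as a simple remapping function. This is an illustrative sketch of a linear levels adjustment (ignoring the midtone gamma slider), not Photoshop’s exact algorithm:

```python
def adjust_levels(level, black=0, white=255):
    """Map input levels so 'black' becomes 0 and 'white' becomes 255.
    Anything outside the black/white points is clipped."""
    if level <= black:
        return 0
    if level >= white:
        return 255
    return round((level - black) * 255 / (white - black))

# Moving the white slider down to 203 brightens the whole image:
print(adjust_levels(100, white=203))  # 126 - a brighter midtone
print(adjust_levels(220, white=203))  # 255 - a clipped highlight
```

Every input level at or above the white point becomes pure white, which is exactly the clipping described below.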

The ideal histogram is evenly distributed with no breaks and a gradual tail off for shadows and highlights. If I hold the Alt key (Option key on a Mac) as I preview the adjustments, I can see the pixels which are being clipped from the image and I should stop just before this becomes noticeable (Adobe Camera Raw will show clipping as blue or red pixels when this option is enabled). A few points to note are:

  • Over-exposed images will have clipped highlights and under-exposed images will exhibit clipping in the shadow detail.
  • The vertical scale is of no real consequence (it’s just an indication of the number of pixels at any given light level).
  • Clipping is effectively throwing away part of the detail in the image and should be avoided where possible although it can be used to effect (e.g. as a deliberately high- or low-key image).
  • By adjusting the midtones, the overall brightness of the image may be altered without clipping.
  • Low contrast images will have a very narrow histogram, whilst high contrast images will cover more of the graph.
  • Techniques such as high dynamic range (HDR) photography (and tone mapping) can be used to increase the light levels captured in the image – effectively increasing the exposure latitude.
  • For finer control, individual histograms may be viewed for red, green and blue colour channels or the luminosity.

[Image: the Royal Mail postbox in the snow, with levels adjusted]

In this example, I have adjusted the highlight details by moving the white slider from 255 to 203 (and Photoshop has automatically adjusted the midtone levels for me).

[Image: the histogram of the adjusted image]

The end result is a picture which appears to be better exposed, although looking at the histogram tells me that there is some detail missing from the shot now (effectively 52 levels have been cut out and the remaining 204 levels of light have been redistributed across the scale – hence the gaps in the graph). These gaps/spikes are an indication of a phenomenon known as posterisation (or banding), caused by a reduction in the effective bit depth of the image; it is not very evident in this image but can generally be spotted in areas of shadow, or in the sky.
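The arithmetic behind those gaps can be checked in a few lines (an illustrative sketch; the real adjustment also involves the midtone gamma, which is ignored here). Stretching 204 input levels across 256 output values means some output values are never used:

```python
def remap(level, white=203):
    """Stretch the surviving levels (0..white) back across 0..255."""
    return min(255, round(level * 255 / white))

used = {remap(v) for v in range(204)}   # every surviving input level
print(len(used))        # 204 distinct output levels remain
print(256 - len(used))  # 52 output values are never used: the histogram gaps
```

Those 52 unused values are the empty columns in the adjusted histogram, and the visible symptom is posterisation.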

There are those who will say that image adjustment using levels is a very crude tool; however it’s useful to demonstrate how to read histograms. Hopefully this post has thrown some light onto what is one of the more technical aspects of digital photography but is also a very useful tool.

A good news story… courtesy of LogMeIn

A few days ago, one of my colleagues was telling me about how his father’s stolen laptop had been recovered. I’m leaving out the names (for privacy reasons) but it’s a good news story for the end of the week.

In common with many IT workers, my colleague provides technical support for family members and, to make this easier, he had installed LogMeIn. There are many remote control products available (I just use standard RDP/VNC connections) but the advantages of some of the specialist products include NAT traversal (and other firewall-friendly functionality) and, it seems, alerting when a PC is available (or not).

After his father’s PC was stolen, my colleague started to build a new one for him, so, imagine his surprise when the original machine appeared online! Using the remote control software, he was able to identify the user as they browsed a social networking site and then provide the police with fairly conclusive evidence as to who was using the machine.

In the words of my anonymous friend:

“Go technology…”

Thanks to his use of the LogMeIn software and the user browsing a social networking site (revealing their identity in the process), the computer was recovered and three people spent some time in the custody of the local police force (although I understand the actual recipient of the computer was later released with a caution).

Maybe I’ve finally found a use for social networking websites!

Free Microsoft Virtualization eBook from Microsoft Press

Every now and again, Microsoft Press makes free e-books available. I just missed out on the PDF version of the Windows Vista Resource Kit as part of the Microsoft Press 25th anniversary (the offer was only valid for a few days and it expired yesterday… that’s what happens when I don’t keep on top of my e-mail newsletters) but Mitch Tulloch’s book on Understanding Microsoft Virtualization Solutions is also available for free download (I don’t know how long for though… based on previous experience, that link won’t be valid for long).

This book covers Windows Server 2008 Hyper-V, System Center Virtual Machine Manager 2008, Microsoft Application Virtualization 4.5 (App-V), Microsoft Enterprise Desktop Virtualization (MED-V), and Microsoft Virtual Desktop Infrastructure. If you’re looking to learn about any of these technologies, it would be a good place to start.

Hyper-V Q&A

[Image: Windows Server 2008 Hyper-V]

Microsoft’s hypervisor-based virtualisation platform (Hyper-V) has been around for a few months now and, even though there is a whole host of information out there on the web, it’s still a source of confusion for many.

This post is a list of questions and answers for those trying to get started with the Microsoft hypervisor. It is based, in part, on information provided during the Hyper-V technology adoption programme and has been used with the kind permission of Microsoft Windows Virtualization product team, supplemented with additional information where appropriate.

Installation

Q. What are my options for installing Hyper-V?
A. Hyper-V is available as a role for the x64 editions of Windows Server 2008 (Standard, Enterprise or Datacenter) – i.e. not for the 32-bit x86 or Itanium architectures, nor for the Web edition. The Hyper-V role is supported on either a server core or a full installation; however, server core is recommended due to its reduced attack surface. In addition, there is a standalone version of Hyper-V – Microsoft Hyper-V Server 2008 – designed for organisations that would like the benefits of Hyper-V but do not run Windows (a comparison of features in the various Hyper-V products is available).

Q. I installed Hyper-V from the Windows Server 2008 media but it seems to be a pre-release version. Is that right?
A. Windows Server 2008 shipped with a beta version of Hyper-V. It is necessary to install an update to bring the Hyper-V components up to their RTM level, as well as to update the integration components in the virtual machines. John Howard has blogged extensively on obtaining Hyper-V, changes at RTM, upgrade considerations and more.

Q. How can I tell which version of Hyper-V I have installed?
A. Sander Berkouwer has provided some excellent advice for determining the installed version of Hyper-V on his blog (I also linked to some useful resources for those looking to upgrade a pre-release copy of Hyper-V).

Q. I’m not entirely comfortable with server core – how can I install the Hyper-V role?
A. John Howard has created step by step guidance on installing Hyper-V on Server Core. The basic steps are:

  1. After installation, logon to Windows and set a password.
  2. Configure the computername, firewall exceptions, remote desktop connection and then restart using the following commands (for more information on administering server core, see a few commands to get started with Server Core and there are also some alternative utilities available – including the HVconfig utility from Hyper-V Server):
    netdom renamecomputer %computername% /NewName:newcomputername
    netsh advfirewall firewall set rule group="Remote Administration" new enable=yes
    cscript \windows\system32\scregedit.wsf /ar 0
    cscript \windows\system32\scregedit.wsf /cs 0
    shutdown /t 0 /r
  3. Update Hyper-V to the RTM version (see Microsoft knowledge base article 950050)
  4. Enable the Hyper-V role using ocsetup Microsoft-Hyper-V
  5. Restart the computer using shutdown /t 0 /r

Q. Hyper-V relies on hardware assisted virtualisation. How can I tell if my hardware supports this?
A. Michael Pietroforte has written about the free tools that AMD and Intel provide to indicate whether a processor has the necessary virtualisation functionality and John Howard has also written about enabling hardware assisted virtualization in the BIOS. If your machine does not have the option (and some do not!), try a BIOS update.

Performance

Q. How does Hyper-V’s disk input/output (IO) compare with a non-virtualised solution?
A. In order to ensure that IO is never reported as complete until it has been written to the physical disk, Hyper-V does not employ any additional disk caching beyond that provided by the guest operating system. In certain circumstances, a Hyper-V VM can appear to provide faster disk access than a physical computer because Hyper-V batches up multiple requests and coalesces interrupts for greater efficiency and performance. In Microsoft’s internal testing they also found that:

  • Pass-through disks can sustain physical device throughput.
  • Fixed VHDs can also sustain physical device throughput at the cost of slightly higher CPU usage.
  • Dynamically expanding and differencing VHDs do not usually hit physical throughput numbers due to the overhead of expansion and greater likelihood of disk fragmentation.

Q. How can I measure performance in Hyper-V?
A. The MSDN website features a section on measuring performance on Hyper-V (specifically relating to running BizTalk Server in a VM but equally applicable to many other workloads).

Q. Sometimes, my virtual machines are paused automatically – why does this happen?
A. Rather than let a virtual machine run out of disk space, Hyper-V will pause the VM if the server is running critically low on space. In addition, an event (ID 16050) is written to the Hyper-V VMMS log.

Synthetic device driver model

Q. I’m confused about the various versions of the integration components (ICs) for virtual machines. Why does each release of Hyper-V have its own ICs?
A. Integration components (ICs) are version-specific – i.e. the versions used within the child partitions must match the version of Hyper-V that is running in the parent partition (Windows Server 2008 RTM shipped with the Hyper-V beta ICs). The Hyper-V RTM upgrade package (see Microsoft knowledge base article 950050) includes the updates for both the parent and child partitions. In addition, there are Hyper-V ICs for Linux, and the Professional, Enterprise and Ultimate editions of the Windows 7 beta (as well as the Windows Server 2008 R2 beta) already come with Hyper-V integration components installed.

Q. Is there a method to incorporate the Hyper-V synthetic devices with Windows Preinstallation Environment (WinPE) for servicing?
A. Performing maintenance on a Hyper-V host from within WinPE represents a challenge for systems administrators in that, without the integration components, virtual hard disks (.VHDs) must be connected to the IDE controller (limiting the number of VHD’s that can be used at any given point in time) and legacy network adapters might be required in order to provide network access. Mike Sterling has a great blog post on using the Hyper-V integration components with WinPE (using the Windows Automated Installation Kit to create a custom WinPE image including the appropriate files extracted from the Hyper-V integration services setup disk). Attaching the resulting .ISO image to a VM and powering it on should provide full access to all synthetic devices.

Management

Q. What tools does Microsoft provide to manage Hyper-V?
A. Out of the box, Microsoft provides a Microsoft Management Console (MMC) snap-in (Hyper-V Manager). This snap-in is also available for x86 (32-bit) versions of Windows Server 2008, as well as for Windows Vista SP1 (x86 or x64 – see also Microsoft knowledge base article 952627). If you have the management tools installed on a Windows Vista machine then you might also find Tore Lervik’s Hyper-V Monitor Gadget for the Windows Sidebar useful.

Whilst Hyper-V Manager is adequate for managing a single host (locally or remotely), System Center Virtual Machine Manager (SCVMM) 2008 provides a centralised management console, designed to manage thousands of VMs across hundreds of physical servers running Virtual Server 2005, Hyper-V or even VMware ESX (via VMware VirtualCenter).

Hyper-V can also be managed using Windows Management Instrumentation (WMI), for example in a Windows PowerShell script and there is an open source PowerShell Management Library for Hyper-V available on CodePlex.

Q. My version of Windows Server 2008 does not seem to have the Hyper-V Management tools available.
A. Windows Server 2008 SKUs without Hyper-V or for other architectures (i.e. 32-bit x86 and Itanium) do not include the Hyper-V management tools.

Q. What else can SCVMM offer that the standard management tools do not?
A. Information on SCVMM may be found on the Microsoft website but the main features include:

  • Physical to virtual (P2V) and limited virtual to virtual (V2V) conversion (V2V is from VMware to Hyper-V – for Virtual Server to Hyper-V there is a free tool available (Matthijs ten Seldam’s VMC to Hyper-V Import Tool) and, for conversions from other products or back to physical hardware, various third party tools are available).
  • Orchestration of migration activities (i.e. quick migration for Hyper-V, VMotion for ESX).
  • Intelligent placement of virtual machines.
  • Management of virtual machine templates, virtual hard disks, CD/DVD (.ISO) images, etc.
  • Full integration with Windows PowerShell (with supported PowerShell cmdlets) as well as other System Center products such as System Center Operations Manager and PRO packs.
  • Virtual machine self-service for users to provision their own VMs, based on a quota system.

Q. I have a fully-patched Hyper-V host and SCVMM 2008 installation but SCVMM says my host needs attention. Have I missed something?
A. VirtualBoy Matt McSpirit blogged about a couple of updates required for Hyper-V and BITS when using SCVMM 2008. After installing these and rebooting, everything should be fine.

Q. How can I patch the virtual machines that are held offline (templates, etc.)?
A. Offline VMs may be patched using the Microsoft Offline Virtual Machine Servicing Tool.

Q. I’m trying to configure remote management for Hyper-V and it seems very difficult.
A. It can be! Luckily, John Howard had a few days vacation to use up and the result was a tool called hvremote. You can read more and download the tool on MSDN – but don’t use it if you’re using SCVMM to manage your Hyper-V hosts.

Q. I’m using the Hyper-V Virtual Machine Connection to access the console of one of my Hyper-V virtual machines but every time I press Ctrl+Alt+Left arrow to release the mouse (I do not have integration components installed) my screen turns 90°. Have I been infected with a virus?
A. Probably not! Some Intel chipsets use that key combination to rotate the display. Either turn off that functionality in the display driver settings or press Alt+Tab to break out of the VM and change the hotkey in the Hyper-V settings.

Environmental benefits

Q. Virtualisation is often cited as an enabler for green IT – how can that be? Surely I’m just moving the same heat and power requirements into one place?
A. An underutilised server still uses a significant proportion of its maximum power and consolidation of many low-utilisation servers onto a shared infrastructure will normally result in power supplies running more efficiently and a net reduction in power consumption.

By consolidating many servers onto a smaller number of physical hosts using virtualisation, many servers may be retired. These older servers are likely to be less efficient than a modern server and all require cooling, so retiring them results in further power and cooling savings.

Whilst disposal of old servers is not very “green”, some servers may be redeployed in scenarios where a physical infrastructure is still required.
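As a back-of-envelope illustration of the consolidation argument (the wattage figures below are assumptions for the sake of the sum, not measurements from any vendor):

```python
# Assumed per-server power draw: an idle x86 server often draws well over
# half of its peak power, which is what makes consolidation worthwhile.
IDLE_W, PEAK_W = 200, 300

before = 10 * IDLE_W   # ten lightly loaded servers
after = 1 * PEAK_W     # one well-utilised consolidated host
print(before, after)   # 2000 vs 300 watts
print(round(100 * (1 - after / before)))  # ~85% power saving, before cooling
```

Even with generous assumptions for the consolidated host, the saving is large, and the cooling load falls in proportion.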

Q. Does Hyper-V work in conjunction with the Processor Power Management (PPM) power savings in Windows Server 2008?
A. When the Hyper-V server role is enabled, system sleep states (standby and hibernate) are disabled. The major savings in power and cooling requirements are gained by switching servers off and, by viewing overall demand for the entire virtualised infrastructure rather than working at an individual server level, it is possible to use management technologies to bring servers online and offline in order to meet demand.

Virtual machine settings

Q. With Microsoft Virtual Server, it’s really difficult to access the virtual machine BIOS. Is there still a virtual machine BIOS?
A. Hyper-V VMs do still have a virtual machine BIOS; however, all of the BIOS features (e.g. numlock setting, boot device order, etc.) may be set in the virtual machine configuration or using a script. As a consequence of this, Microsoft has removed the ability to access the BIOS at boot time.

Q. Hyper-V can import and export its own XML-based VM configurations but not the legacy .VMC format. Is there a way to migrate my Virtual Server and VirtualPC settings to Hyper-V without recreating the configuration manually (I’m not using SCVMM)?
A. As mentioned when discussing V2V migrations above, Matthijs ten Seldam (the author of VMRCplus) has written a VMC to Hyper-V import tool (remove the VM Additions before importing to save effort later).

Storage

Q. Can a virtual machine boot from SAN (FC or iSCSI), NAS, USB disk or Firewire disks (the boot order in the BIOS settings only shows floppy, CD, IDE and network)?
A. Virtual hard disks (VHDs) can be used to boot or run a VM from:

  • Local storage (IDE or SCSI).
  • USB storage (USB key or disk).
  • Firewire storage.
  • Storage area network (SAN) storage (iSCSI or Fibre Channel).
  • Network attached storage (NAS) (e.g. a file share or NAS device).

It’s also possible to assign a non-removable volume (direct attached storage or a SAN LUN) to an IDE channel in the VM settings and to boot from that device.

Q. Is it possible for Hyper-V virtual machines to access USB devices?
A. Not directly and, although many people would like to see this functionality, Microsoft is adamant that this is a client-side virtualisation feature and has no plans to include USB support in the product at this time. There is a workaround using the Remote Desktop Connection client, though (and this approach can also be used for audio).

Q. How can I move VM images to another physical disk?
A. With Hyper-V, the simplest approach I’ve found for moving virtual machines is to export the VM and import it to a new location. Alternatively, you could move the VHD and create a new virtual machine configuration.

Networking

Q. I’m confused by the various network interfaces on my Hyper-V host – what’s going on?
A. It’s not as confusing as it first looks! The parent partition is also virtualised and all communications run via a virtual switch (vswitch). In effect the physical network adapters (pNICs) are unbound from all clients, services and protocols, except the Microsoft Virtual Network Switch Protocol. The virtual network adapters (vNICs) in the parent and child partitions connect to the vswitch. Further vswitches may be created for internal communications, or bound to additional pNICs; however only one vswitch can be bound to a particular pNIC at any one time. Virtual machines can have multiple vNICs connected to multiple vswitches. Ben Armstrong has a good explanation of Hyper-V networking (with pictures) on his blog and I described more in an earlier post on Hyper-V and networking.
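A toy model of the binding rules described above may help (the class and NIC names are hypothetical, purely for illustration – this is not any real Hyper-V API):

```python
class VSwitch:
    """Each physical NIC can back at most one virtual switch; a vswitch
    with no pNIC carries internal-only traffic."""
    bound_pnics = set()  # class-level registry of pNICs already claimed

    def __init__(self, name, pnic=None):
        if pnic is not None:
            if pnic in VSwitch.bound_pnics:
                raise ValueError(f"{pnic} is already bound to a vswitch")
            VSwitch.bound_pnics.add(pnic)
        self.name, self.pnic = name, pnic

external = VSwitch("External", pnic="pNIC0")
internal = VSwitch("Internal")          # no pNIC: internal communications only
try:
    VSwitch("Duplicate", pnic="pNIC0")  # a second bind to pNIC0 must fail
except ValueError as e:
    print(e)
```

Virtual machines, by contrast, are free to attach multiple vNICs to multiple vswitches, which the model places no restriction on.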

Q. Can I use Hyper-V over a wireless connection?
A. Not directly, but there is a workaround, as described in my blog post on Hyper-V and networking.

Q. Is NIC teaming supported?
A. Not by Microsoft, but certain OEMs provide support, and Patrick Lownds has blogged about new Broadcom support for NIC teaming that seems to work with Hyper-V.

Legacy operating system support

Q. The virtual machine settings include a processor option which limits processor functionality to run an older operating system such as Windows NT on the virtual machine. What does this feature actually do?
A. This feature is designed to allow backwards compatibility for older operating systems such as Windows NT 4.0 (which performs a CPUID check and, if CPUID returns more than three leaves, it will fail). By selecting the processor functionality check box Hyper-V will limit CPUID to only return three leaves and therefore allow Windows NT 4.0 to successfully install. It is possible that other legacy operating systems could have a similar issue.
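The effect of the check box can be sketched as a trivial function (purely illustrative; the real mechanism intercepts the CPUID instruction in the hypervisor rather than calling anything like this):

```python
def cpuid_max_leaf(real_leaves, limit_for_legacy=False):
    """Return the highest CPUID leaf count the guest will see.
    With the compatibility option enabled, at most three leaves are reported,
    which is what Windows NT 4.0's installer check expects."""
    return min(real_leaves, 3) if limit_for_legacy else real_leaves

print(cpuid_max_leaf(10))                         # a modern guest sees all 10
print(cpuid_max_leaf(10, limit_for_legacy=True))  # NT 4.0 sees only 3
```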

Q. Does this mean that Windows NT 4.0 is supported on Hyper-V?
A. Absolutely not. Windows NT 4.0 is outside its mainstream and extended support lifecycle, is not supported on Hyper-V, and no integration components will be released for it.

Q. But one of the stated advantages for virtualisation is running legacy operating systems where hardware support is becoming problematic. Does this mean I can’t virtualise my remaining Windows NT computers?
A. The difference here is between “possible” and “supported”. Many legacy (and current) operating systems will run on Hyper-V (with emulated drivers) but are not supported. Windows NT is no longer supported, whether it is running on physical or virtual hardware. Microsoft do highlight that Windows NT 4.0 has been tested and qualified on Virtual Server 2005 and that Virtual Server may be managed (along with Hyper-V and VMware ESX) using System Center Virtual Machine Manager 2008.

Copying files between virtual machines

Q. I want to copy files between Hyper-V virtual machines. Is there a way to do this?
A. Microsoft make a distinction between client-side and server-side virtualisation usage scenarios and note that virtualisation servers are typically managed by a group of administrators who want to deploy a secure, locked down server by default (and do not want additional attack vectors created through virtualisation). This is the reason that Hyper-V does not include shared folder or drag and drop functionality (nor are there any plans to do so at a later date). The options for transferring data from one virtual machine to another are:

  • Set up a virtual network, just as you would for physical systems.
  • Use a virtual CD/DVD creation tool and insert a virtual CD/DVD; this can be done while the virtual machine is running.

Microsoft’s stated position is that, in the case of client-side virtualisation, a single user is running a virtualisation product (e.g. Virtual PC) locally and expects the capability to move files from one virtual machine to another. For this reason, Virtual PC includes shared folder support (although shared folders are not enabled by default).

Q. How does this work if I move a Virtual PC VM with shared folders to a Virtual Server or Hyper-V system?
A. In this case the shared folders guest components won’t load because the required server-side components are not available in Virtual Server or Hyper-V.

Bulk file renames in Windows Explorer

Working in Windows Explorer today, I noticed something that I found pretty useful. At first I wasn’t sure if it was a new feature in Windows 7 (I’ve since tested and found that XP will do something similar, so I guess Vista will too): when bulk renaming files, Explorer will rename several files to the same filename root, suffixed with an identifier to ensure that each file name is unique.

For example, imagine you work for an organisation that produces a lot of customer design documents but which is really keen on re-use. Some customisation is inevitable but you might start out with a template document, or you might work from a documentation set produced for a similar customer environment.

In this example, I have three documents in my set:

Customer X – Infrastructure Design (AD)
Customer X – Infrastructure Design (Exchange)
Customer X – Infrastructure Design (SharePoint)

If I copy and paste these, I now have 6 documents:

Customer X – Infrastructure Design (AD) – Copy
Customer X – Infrastructure Design (AD)
Customer X – Infrastructure Design (Exchange) – Copy
Customer X – Infrastructure Design (Exchange)
Customer X – Infrastructure Design (SharePoint) – Copy
Customer X – Infrastructure Design (SharePoint)

Keeping the copies highlighted, if I right click and select Rename, I can rename them all to a common root of Customer Y - Infrastructure Design. The resulting directory structure is:

Customer X – Infrastructure Design (AD)
Customer X – Infrastructure Design (Exchange)
Customer X – Infrastructure Design (SharePoint)
Customer Y – Infrastructure Design (3)
Customer Y – Infrastructure Design (2)
Customer Y – Infrastructure Design

At this point, it’s now very simple (faster than renaming each file individually and typing the whole filename would have been) to amend the end of each filename to get what I really wanted:

Customer X – Infrastructure Design (AD)
Customer X – Infrastructure Design (Exchange)
Customer X – Infrastructure Design (SharePoint)
Customer Y – Infrastructure Design (AD)
Customer Y – Infrastructure Design (Exchange)
Customer Y – Infrastructure Design (SharePoint)

Some people may be unimpressed. Others may say “script it”. And more people might say use the document templating features in Microsoft Office! Whatever your view, for me, this was a real timesaver.
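For those in the “script it” camp, the Explorer behaviour is easy to mimic. A sketch (the folder and filenames are made up for the example):

```python
import os
import tempfile

def bulk_rename(folder, filenames, root):
    """Rename every file to the same root; the first keeps the plain name
    and the rest gain a numeric suffix, as Explorer does."""
    renamed = []
    for i, old in enumerate(filenames):
        ext = os.path.splitext(old)[1]
        new = f"{root}{ext}" if i == 0 else f"{root} ({i + 1}){ext}"
        os.rename(os.path.join(folder, old), os.path.join(folder, new))
        renamed.append(new)
    return renamed

with tempfile.TemporaryDirectory() as d:
    for name in ["a.doc", "b.doc", "c.doc"]:
        open(os.path.join(d, name), "w").close()
    # Produces "Customer Y - Infrastructure Design.doc", "... (2).doc", "... (3).doc"
    print(bulk_rename(d, ["a.doc", "b.doc", "c.doc"],
                      "Customer Y - Infrastructure Design"))
```

Of course, the whole point of the Explorer trick is that it needs no script at all.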

Installing WordPress on a Mac

The software platform which markwilson.it runs on is in desperate need of an update but there is only me to make it happen (supported by ascomi) and, if I make a mistake, it may take some time for me to get the site back online (time which I don’t have!). As a result, I really needed a development version of the site to work with.

I thought that it would also be handy if that development version of the site would run offline – i.e. if it were served from a web server on one of my computers. I could run Windows, IIS (or Apache), MySQL and PHP but as the live site runs on CentOS, Apache, MySQL and PHP it makes sense to at least use something similar and my Mac fits the bill nicely, as a default installation of OS X already includes Apache and PHP.

I should note that there are alternative stacks available for running a web server on a Mac (MAMP and XAMPP are examples); however my machine is not a full web server serving hundreds of users, it’s a development workstation serving one user, so the built in tools should be fine. The rest of this post explains what I did to get WordPress 2.7 up and running on OS X 10.5.5.

  1. Open the System Preferences and select the Sharing pane, then enable Web Sharing.
  2. Web Sharing in OS X

  3. Test access by browsing to the default Apache website at http://computername/ and a personal site at http://computername/~username/.
  4. Download the latest version of MySQL Community Server (I used mysql-5.1.31-osx10.5-x86_64) and run the corresponding packaged installer (for me that was mysql-5.1.31-osx10.5-x86_64.pkg).
  5. After the MySQL installation is completed, copy MySQL.PreferencePane to /Library/PreferencePanes and verify that it is visible in System Preferences (in the Other group).
  6. MySQL Preferences in OS X

  7. Launch the MySQL preference pane and start MySQL Server (if prompted by the firewall to allow mysqld to allow incoming connections, allow this). Optionally, select automatic startup for MySQL.
  8. MySQL running in OS X

  9. Optionally, add /usr/local/mysql/bin to the path (I didn’t do this, as creating a .profile file containing export PATH="$PATH:/usr/local/mysql/bin" seemed to mess up my path somehow – it just means that I need to specify the full path when running mysql commands) and test access to MySQL by running /usr/local/mysql/bin/mysql.
  10. Enable PHP by editing /etc/apache2/httpd.conf (e.g. by running sudo nano /etc/apache2/httpd.conf) to remove the # in front of LoadModule php5_module libexec/apache2/libphp5.so.
  11. Test the PHP configuration by creating a text file named phpinfo.php containing <?php phpinfo(); ?> and browsing to http://localhost/~username/phpinfo.php.
  12. With Mac OS X, Apache, MySQL and PHP enabled, start to work on the configuration by by running /usr/local/mysql/bin/mysql and entering the following commands to secure MySQL:
    drop database test;
    delete from mysql.user where user = '';
    flush privileges;
    set password for root@localhost = password('{newrootpassword}');
    set password for root@127.0.0.1 = password('{newrootpassword}');
    set password for 'root'@'{hostname}.local' = password('{newrootpassword}');
    quit
  13. Test access to MySQL. using the new password with /usr/local/mysql/bin/mysql -u root -p and entering newrootpassword when prompted.
  14. Whilst still logged in to MySQL, enter the following commands to create a database for WordPress and grant permissions (I’m not convinced that all of these commands are required and I do not know what foo is!):
    create database wpdatabasename;
    grant all privileges on wpdatabasename.* to wpuser@localhost identified by 'foo';
    set password for wpuser@localhost = old_password('wppassword');
    quit
  15. Download the latest version of WordPress and extract it to ~username/Sites/ (i chose to put my copy in a subfolder called blog, as it is on the live site).
  16. Configure WordPress to use the database created earlier by copying wordpressdirectory/wp_config_sample.php to wordpressdirectorywp_config.php and editing the following lines:
    define('DB_NAME', 'wpdatabasename');
    define('DB_USER', 'wpuser');
    define('DB_PASSWORD', 'wppassword');
    define('DB_HOST', 'localhost:/tmp/mysql.sock');
  17. Restart Apache using sudo apachectl restart.
  18. If WordPress is running in it’s own subdirectory, copy wordpressdirectory/index.php and wordpressdirectory/.htaccess to ~/Sites/ and then edit index.php so that WordPress can locate it’s environment and templates (require('./wordpressdirectory/wp-blog-header.php');).
  19. Browse to http://localhost/~username/wordpressdirectory/wp-admin/install.php and follow the “five minute WordPress installation process”.
  20. WordPress installation

  21. After installation, the dashboard for the new WordPress site should be available at http://localhost/~username/wordpressdirectory/wp-admin/.
  22. WordPress fresh out of the box (dashboard)

  23. The site may be accessed at http://localhost/~username/wordpressdirectory/.
  24. WordPress fresh out of the box
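The PATH change and the PHP test page described in the steps above can be sketched as shell commands. This is a minimal sketch, assuming the default MySQL install location (/usr/local/mysql) and the per-user Sites folder used throughout:

```shell
# Append MySQL's client tools to the PATH for the current session,
# keeping the existing PATH intact (omitting "$PATH:" is a common
# way to break the path when editing .profile).
export PATH="$PATH:/usr/local/mysql/bin"

# Create the PHP test page in the per-user Sites folder.
mkdir -p ~/Sites
printf '<?php phpinfo(); ?>\n' > ~/Sites/phpinfo.php
```

Putting the export line in a shell startup file makes it permanent, but as noted above, your mileage may vary.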
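When WordPress lives in a subdirectory, the top-level index.php simply hands off to the subdirectory’s loader. A minimal sketch of creating that file, assuming the subfolder is called blog as in my setup:

```shell
# Write a redirecting index.php at the site root, assuming WordPress
# was extracted to a subfolder named "blog".
mkdir -p ~/Sites
cat > ~/Sites/index.php <<'EOF'
<?php
// Load the active theme rather than just the blog headers.
define('WP_USE_THEMES', true);
// Hand off to WordPress in the blog subdirectory.
require('./blog/wp-blog-header.php');
EOF
```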

Credits

I found the following articles extremely useful whilst I was researching this post:

As XenServer goes free, new management tools are promised, and VMware gets another thorn in its side

This content is 15 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I may be a Microsoft MVP for Virtual Machine technology, so my bias towards Hyper-V is pretty obvious (I’m also a VMware Certified Professional and I do try to remain fairly objective), but I did have to giggle when a colleague tipped me off this morning about some new developments at Citrix, just as VMware gears up for next week’s VMworld Europe conference in Cannes.

Officially under embargo until next week (not an embargo that I’ve signed up to, though), ZDNet is reporting that Citrix is to offer XenServer for free (XenServer is a commercial product based on the open source Xen project). From my standpoint, this is great news: Citrix and Microsoft already work very closely (Citrix developed the Linux integration for Hyper-V) and Citrix will be selling new management tools that improve the management experience for both XenServer and Hyper-V. In addition, Microsoft SCVMM will support XenServer (always expected, but never officially announced), meaning that SCVMM will further strengthen its position as a “manager of managers” and provide a single point of focus for managing all three of the major hypervisors.

VMware, of course, will respond and tell us that this is not simply a question of software cost (to some extent they are correct, although many of the high-end features that they offer over the competition are just the icing on the cake), that they have moved on from the hypervisor, and that their cloud-centric Virtual Datacentre Operating System model will be the way forward. That may be so but, with server virtualisation now moving into mainstream computing environments and with Citrix and Microsoft already working closely together (along with Novell and now Red Hat), this is almost certainly not good news for VMware’s market dominance.

Further reading

Windows 7 feature walkthroughs

This content is 15 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I’ve just come across a link to a section on the Microsoft TechNet website with walkthroughs for new features in Windows 7. If you’re looking to get up to speed on the next version of Microsoft’s client operating system then these videos might be worth a look (there is an RSS feed to keep up-to-date with new releases).

These videos are not the same series as the “how to” videos I have been producing for TechNet (one of which was featured in the TechNet newsletter last week!) but are actually from the Springboard series.

Spending money to increase organisational agility at a time of economic uncertainty

This content is 15 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Last week I attended a Microsoft TechNet event in Birmingham, looking at Microsoft systems management and the role of System Center. The presenter was Gordon McKenna, who has a stack of experience with the System Center products (in particular System Center Operations Manager, for which he is an MVP) but had only a limited time to talk about the products as the first half of his session was bogged down in Microsoft’s Infrastructure Optimisation (IO) marketing.

Infrastructure optimisation
I’ve written about IO before and, since then, I’ve found that the projected savings were often difficult to relate to a customer’s particular business model. Even so, as an approach for producing a plan to increase the agility of an IT infrastructure, the Microsoft IO model can be a useful tool.

For me, the most interesting part of Gordon’s IO presentation was seeing the analysis of over 15,000 customers with more than 500 employees who have taken part in these studies. I’m sure that most IT Managers would consider their IT to be fairly standardised, but the figures seem to suggest otherwise:

Basic Standard Rational Dynamic
Core 91.1% 8.7% 0.2% 0.0%
Identity and Access Management 27.9% 62.8% 5.9% 3.4%
Desktop, Device and Server Management 90.1% 8.9% 0.7% 0.3%
Security and Networking 30.5% 65.0% 3.0% 1.4%
Data Protection and Recovery 28.0% 34.9% 37.1% 0.0%
Security Process 81.1% 11.9% 7.0% 0.0%
ITIL/COBIT-based Management Process 68.9% 20.7% 7.1% 3.3%
Business Productivity
Unified Communications (Conferencing, Instant Messaging/Presence, Messaging, Voice) 96.3% 3.5% 0.2% 0.0%
Collaboration (Collaborative Workspaces and Portals) 76.5% 21.3% 1.4% 0.7%
Enterprise Content Management (Document and Records Management, Forms, Web Content Management) 86.7% 12.9% 0.3% 0.2%
Enterprise Search 95.0% 4.1% 0.4% 0.5%
Business Intelligence (Data Warehousing, Performance Management, Reporting and Analysis) 88.3% 10.5% 0.8% 0.3%
Application Platform
User Experience (Client and Web Development) 75.3% 20.9% 2.5% 1.3%
Data Management (Data Infrastructure and Data Warehouse) 87.6% 11.7% 0.6% 0.0%
SOA and Business Process (Process Workflow and Integration) 80.7% 18.0% 1.2% 0.1%
Development (Application Lifecycle Management, Custom Applications, Development Platform) 72.2% 26.2% 1.4% 0.3%
Business Intelligence (Data Warehousing, Performance Management, Reporting and Analysis) 88.3% 10.5% 0.8% 0.3%

As we enter a world recession/depression and money is tight, it’s difficult to justify increases in IT spending – but it’s potentially a different story if those IT projects can help to increase organisational agility and reduce costs overall.

Alinean founder and CEO, Tom Pisello, has identified what he calls “simple savvy savings” to make anyone a cost-cutting hero. In a short white paper, he outlines nine projects that could each save money (and at the same time move the organisation further across the IO model):

  • Server virtualisation (to reduce infrastructure investments, energy and operations overhead costs, and to help improve server administration).
  • Database consolidation.
  • Improved storage management.
  • Leveraging licensing agreements to save money.
  • Server systems management to reduce administration costs.
  • Virtualised desktop applications to help reduce application management costs.
  • PC standardisation and security management to save on PC engineering costs.
  • Unified communications.
  • Collaboration.

Looking at the figures quoted above, it seems that many IT organisations have some way to go to deliver the flexibility and return on investment that their businesses demand – and a few of these projects could be a step in the right direction.