Server virtualisation using VMware: two real-life stories

Last week, I attended the VMware Beyond Boundaries event in London – the first of a series of events intended to highlight the benefits that server virtualisation technology has to offer. The day included a number of high-quality presentations; however, two had particular meaning because they were given by VMware customers – no marketing fluff, just “this is what we found when we implemented VMware in our organisations”. Those organisations weren’t small either – BT (a leading provider of communications solutions serving customers throughout the world) and Nationwide Building Society (the UK’s largest building society – a member-owned financial services provider with mutual status).

Michael Crader, BT’s Head of Development and Test Infrastructure/Head of Windows Migration, talked about BT’s datacentre server consolidation and virtualisation strategy. This was a particularly interesting presentation because it showed just how dramatic the savings from implementing the technology can be.

BT’s initial virtualisation project was concerned with test and development facilities and used HP ProLiant DL585 servers with 16-48GB RAM, attached to NetApp primary (24TB) and nearline (32TB) storage, with each virtual machine on its own LUN. SnapMirror technology allows a virtual machine to be backed up in 2-3 seconds, which eliminated two roles in which staff were solely responsible for loading tapes (backups of the test infrastructure previously took 96 hours).

The virtualisation of the test and development facilities was so successful that BT moved on to file and print, and then to production sites, where BT are part way through consolidating 1503 WIntel servers in 5 datacentres to three virtual infrastructures, aiming for:

  • Clearance of rack space (15:1 server consolidation ratio).
  • Reduction in power/heat requirements.
  • Virtualised servers and storage.
  • Rapid deployment.
  • True capacity on demand.

The supporting hardware is still from HP, using AMD Opteron CPUs, but this time BT are using (in each datacentre) 36 HP ProLiant BL45 blade servers for hosting virtual machines (each with 32GB RAM), 3 HP ProLiant DL385 servers for management of the infrastructure (VirtualCenter, Microsoft SQL Server and PlateSpin PowerConvert), 4 fibre channel switches and an HP XP12000 SAN – that’s just 10 racks of equipment per datacentre.

This consolidation will eventually allow BT to:

  • Reduce 375 racks of equipment to 30.
  • Reduce power consumption from approximately 700W per server to around 47W, saving approximately £750,000 a year (a rough sanity check of that figure follows this list).
  • Consolidate 4509 network connections (3 per server) to 504.
  • Remove all direct attached storage.
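
The per-server wattages and the server count above are from the presentation; the electricity price below is my own assumption, not a figure quoted by BT, so treat this as a back-of-the-envelope check rather than BT’s actual calculation:

servers=1503          # servers being consolidated (from the presentation)
watts_saved=653       # 700W - 47W per server
price_per_kwh=0.08    # assumed commercial tariff in GBP/kWh - not a BT figure
echo "$servers $watts_saved $price_per_kwh" | awk '{
  kw = $1 * $2 / 1000
  kwh_per_year = kw * 24 * 365
  printf "%.0f kW saved, roughly %.0f GBP per year at %.2f GBP/kWh\n", kw, kwh_per_year * $3, $3
}'

At around 8p/kWh that works out at a little under £700,000 a year – in the same ballpark as the quoted figure, before any additional saving on cooling is considered.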

At the time of writing, the project has recovered 419 servers, 792 network ports and 58 racks, used 12TB of SAN storage, saved 250kW of power and 800,000 BTU/hour of heat, and removed 75 tonnes of redundant equipment – that’s already a massive financial saving and a management efficiency gain, and the reduction in heat and power is good for the environment too!

Michael Crader also outlined what doesn’t work for virtualisation (on ESX Server 2.5.x):

  • Servers which require more than 4 CPUs.
  • Servers with external devices attached.
  • Heavily loaded Citrix servers.

His main points for others considering similar projects were that:

  • Providing the infrastructure is in place, migration is straightforward (BT are currently hitting 50-60 migrations per week) with the main activities involving auditing, workload management, downtime and managing customer expectations.
  • The virtual infrastructure is truly providing capacity on demand with the ability to deploy new virtual machines in 11-15 minutes.

In another presentation, Peter West, one of Nationwide Building Society’s Enterprise Architects, outlined Nationwide’s server virtualisation strategy. Like many organisations, Nationwide is suffering from physical server sprawl and increased heat per unit of rackspace. As a major user of Microsoft software, Nationwide had previously begun to use Microsoft Virtual Server; however they moved to VMware ESX Server in order to benefit from the product’s robustness, scalability and manageability – and reduced total cost of ownership (TCO) by 35% in doing so (Virtual Server was cheaper to buy – it’s now free – but it cost more to implement and manage).

Nationwide’s approach to virtualisation is phased; however by 2010 they plan to have virtualised 85-90% of the Intel server estate (production, quality assurance/test, and disaster recovery). Currently, they have 3 farms of 10 servers, connected to EMC Clariion storage and are achieving 17-18:1 server consolidation ratios on 4-way servers with data replication between sites.

Peter West explained that Nationwide’s server consolidation approach is more than just technology – it involves automation, configuration and asset management, capacity on demand and service level management – and a scheme known internally as Automated Lights-out Virtualisation Environment (ALiVE) is being implemented, structured around a number of layers:

  • Policy-based automation
  • Security services
  • Resource management services
  • Infrastructure management services
  • Virtualisation services
  • Platforms
  • IP networks

With ALiVE, Nationwide plans to move 700 development virtual servers, 70 physical development servers, a number of virtual machines on a VMware GSX Server platform and 500 physical machines to VMware Infrastructure 3. This addresses issues around a lack of standard builds, backup/recovery, limited support and a lack of SLAs, along with growing demand from development projects, and will allow self-service provisioning of virtual machines via a portal.

At the risk of sounding like an extension of the VMware marketing department, hopefully, these two examples of real-life virtualisation projects have helped to illustrate some of the advantages of the technology, as well as some of the issues that need to be overcome in server virtualisation projects.

Getting Real Player to work on Fedora Core 5

I really dislike RealPlayer. This dislike stems from the Windows version of the application, which seems (to me) to install unwanted components and generally act in an intrusive manner; unfortunately the BBC’s streaming audio service uses RealAudio formats (although some content is available for Windows Media) so if I want to listen to BBC radio over the Internet then I need to install this objectionable piece of software – that’s what I’ve spent the last 2 and a bit hours trying to do on my Linux (Fedora Core 5) system here in my hotel room…

Getting hold of the software is easy enough – just download the RealPlayer for Linux from the Real Networks website (there’s even an RPM package). Alternatively there’s a Mozilla-compatible plug-in for access to RealAudio and RealVideo content from within a browser, although I couldn’t seem to get it to work with my Firefox installation (having said that, I have a feeling that some of the later troubleshooting steps I followed to get the RealPlayer working might have worked for the plug-in too).

After downloading RealPlayer 10 for Linux, I opened a terminal session, and entered the following commands:

su -                                  # switch to root
rpm -ivh RealPlayer10GOLD.rpm         # install the downloaded package
cd /usr/local/RealPlayer/postinst/
./postinst.sh                         # run RealPlayer's post-installation script

I had hoped that this would be all I needed to do, but I still couldn’t access audio from the BBC website. Following advice from a tutorial that includes information on Mozilla plugins I ran yum -y install mozplugger; however this didn’t seem to help – each time I accessed RealAudio content from the web, the Helix Player (upon which RealPlayer for Linux is based) launched and displayed the following message:

Component Missing
The player does not have the capabilities to play back this content.

This content is supported by RealPlayer.

Clicking on the details button highlighted that the player was looking for the protocol_rtsp_rdt component but googling didn’t turn up much on this. I also checked out the BBC’s audio help advice for Linux/Unix users without too much luck. One tip that may have helped (from a Linux Questions forum post) was to create a symbolic link to the RealPlayer plugins for Firefox:

ln -s /usr/local/RealPlayer/mozilla/* /usr/lib/firefox-1.5.0.6/plugins
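
Incidentally, that path ties the link to a specific Firefox build (1.5.0.6). If there is only one firefox-* directory under /usr/lib (an assumption – check with ls first), a wildcard avoids having to recreate the link after every Firefox update; I haven’t tested this variation myself:

ln -s /usr/local/RealPlayer/mozilla/* /usr/lib/firefox-*/plugins/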

I finally got a break when I noticed that the Real Player 10 shortcut on the Applications menu didn’t seem to do anything. Looking at the properties for the shortcut (using smeg) highlighted the command as realplay so I issued the same command from a terminal. This gave me a useful message:

/usr/local/RealPlayer/realplay.bin: error while loading shared libraries: libstdc++.so.5: cannot open shared object file: No such file or directory.

Following Stanton Finley’s Fedora Core 5 installation notes, I ran yum -y install compat-libstdc++-33, after which the realplay command launched the RealPlayer Setup Assistant and I successfully played RealAudio and RealVideo test clips directly in RealPlayer; however, accessing RealAudio content from within Firefox still launched the Helix Player, complete with the Component Missing error. Not really knowing how to use MozPlugger (other than to view about:plugins), I checked the version numbers for the two players and found that Helix Player reported its version as v1.0.6.778 (experimental) whereas RealPlayer was v10.0.8.805 (gold). Rather than upgrading Helix Player, I removed it using yum erase HelixPlayer and found that, although this also removed the RealPlayer 10 application shortcut, I could still call realplay from a shell, and RealMedia content from NPR and the BBC played successfully both within RealPlayer and Firefox.

So, that’s RealPlayer working on Fedora Core 5… not exactly painless, and probably not the best way of doing it (some of these steps may well be unnecessary) – hopefully writing these notes up will save someone else a load of time.
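
For anyone who just wants the short version, these are the commands that appear to have mattered on my system (Fedora Core 5 with Firefox 1.5.0.6 – paths and package versions may differ on yours, and the mozplugger step above may not be needed at all):

su -                                      # switch to root
rpm -ivh RealPlayer10GOLD.rpm             # install RealPlayer 10 for Linux
cd /usr/local/RealPlayer/postinst/
./postinst.sh                             # RealPlayer's post-installation script
yum -y install compat-libstdc++-33        # provides libstdc++.so.5, needed by realplay.bin
yum erase HelixPlayer                     # stop Helix Player from grabbing RealAudio content
ln -s /usr/local/RealPlayer/mozilla/* /usr/lib/firefox-1.5.0.6/plugins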

Some more about VMware Infrastructure 3

Last week I wrote an introduction to VMware Infrastructure 3. That was based on my experiences of getting to know the product, so it was interesting to see VMware‘s Jeremy van Doorn and Richard Garsthagen provide a live demonstration at the VMware Beyond Boundaries event in London yesterday. What follows summarises the demo and should probably be read in conjunction with my original article.

VMware Infrastructure 3 is designed for production use, allowing advanced functionality such as high availability to be implemented even more easily than on physical hardware. It’s not just for current versions of Windows either – VMware ESX Server 3.0 can run any x86 operating system, including non-Windows operating systems (e.g. Sun Solaris), future Windows releases (e.g. Windows Vista) and even terminal servers.

Because virtual machines are just files on disk, it is simple to create a new server from a template and, if a particular operator should only be given access to a subset of the servers, it takes just a few clicks in the Virtual Infrastructure Client to delegate access and ensure that only those parts of the infrastructure for which a user has been assigned permissions are visible. There’s also a browser-based administration client (Virtual Infrastructure Web Client) and URLs can be created to direct a user straight to the required virtual machine.

VMware demonstrated live server migration using VMotion with a remote desktop connection to a virtual machine which was running a continuous ping command, as well as a utility to keep the CPU busy and a game of Tetris, with no break in service. They then explained that because multiple servers can have access to the same data storage (i.e. VMFS on a shared LUN), migration is simply a case of one server releasing control of the virtual machine and another taking it on (provided that both machines have CPUs from the same processor family).

They then went on to drag a virtual machine between test and production resource pools, allowing access to more computing resources, and after a couple of minutes the %CPU time allocated to the virtual machine could be seen to increase (recorded by a VMware script – not Windows Task Manager, which showed the machine as running at 100% already). It should be noted that there are limits to the resources that a virtual machine can use – each virtual machine can only exist on a single physical server at any one time and, even with VMware Virtual SMP, is limited to 4 CPUs and 16GB of RAM.

The environment was then extended by adding a new host to the VMware cluster within VirtualCenter and the VMware distributed resource scheduling (DRS) functionality was demonstrated, as virtual machines were automatically migrated between hosts to spread the load between servers. Then, to demonstrate the failure of a single host, one of the servers was simply switched off! Within about two minutes all virtual machines had successfully been restarted elsewhere (using VMware high availability) and, although there was an obvious break in service, it was only for a few minutes.

Richard Garsthagen then made the point that VMware (as a company) is not just about virtualisation – it’s about rethinking common IT tasks – and he demonstrated the VMware consolidated backup (VCB) functionality, whereby a backup proxy was used to take a point-in-time (snapshot) copy of a virtual machine without any break in service (just a message on the screen to say that the machine was being backed up), whilst maintaining consistency of data. VMware did highlight, however, that VCB is not a backup product in itself – it’s an enabling technology that can be integrated with other products.

Turning to virtualisation of the desktop, VMware then demonstrated their Virtual Desktop Infrastructure product, which makes virtual desktops available to users via a web portal with links that will start a VM in a remote desktop session. Provisioning a virtual machine to a user is as simple as assigning access in the Virtual Infrastructure Client.

Finally, a short glimpse was given into the Akimbi Slingshot product, recently purchased by VMware, which allows self provisioning of an isolated laboratory environment from a web client.

I’ve seen a lot of demonstrations over the years and, apart from a slight hiccup with the VCB demo when Richard Garsthagen closed the command window just as the backup started, this was one of the smoothest demos I’ve seen of some advanced operations, which in the physical world would require expensive (and complex) hardware, all executed within VMware Infrastructure 3.

A tale of two CPU architectures


Last week I wrote about the VMware infrastructure that I’m trying to put in place. I mentioned that my testing has been based on HP ProLiant DL585 servers – each of these is equipped with four dual-core AMD Opteron 8xx CPUs and a stack of memory. Half of the initial infrastructure will use new DL585s and the intention is that implementing these servers will release some recently-purchased HP ProLiant DL580G3s for an expansion of the infrastructure. Because the DL580G3 uses an Intel Xeon MP (formerly codenamed Paxville MP) processor, the difference in processor families requires a separation of the servers into two resource pools; however that’s not the real issue. My problem is justifying to an organisation that until now has exclusively used Intel processors that AMD units provide (as my CTO puts it) “more bang for our buck”.

The trouble is that the press is full of reports on how the new Intel Xeon 51xx CPUs (formerly codenamed Woodcrest) out-perform AMD Opterons, where AMD has been in the lead until now; but that’s in the 2-processor server space and I’m not hearing much about 4-way units.

All of this may change tomorrow as, at today’s VMware Beyond Boundaries virtualisation roadshow, Richard Curran, Director of Marketing for the Intel (EMEA) Digital Enterprise Group, hinted at an impending announcement; however an HP representative expressed the view that any new CPU will just plug the gap for a few months – the real performance boost will come in a few months’ time with the next generation of dual-core multiprocessor chips (in the same way that the Xeon 50xx chips, formerly codenamed Dempsey, preceded the 51xx Woodcrest).

Leaving aside any other server vendors, I need some direction as to which 4-way server to buy from HP. HP ProLiant DL580G3s would allow me to standardise but the newer HP ProLiant DL580G4s are more powerful – using the Xeon 71xx chips (formerly codenamed Tulsa) with Intel VT virtualisation support – and, based on list price, are significantly less expensive. Meanwhile, HP’s website claims that ProLiant DL585s are “the best performing x86 4-processor server in the industry” and they cost slightly less than a comparably-specified DL580G4 (again, based on list price), even before taking into account their lower power consumption.

Speaking to Intel, they (somewhat arrogantly) disregarded any reason why I should choose AMD; however AMD were more rational, explaining that regardless of the latest Intel benchmarks, an Opteron is technologically superior for two main reasons: the HyperTransport connection between processor cores; and the integrated memory controller (cf. Intel’s approach of using large volumes of level 3 cache), although the current generation of Opterons only use DDR RAM. Crucially though, AMD’s next-generation dual-core Opterons are socket-compatible with the forthcoming quad-core CPUs (socket F) and are in the same thermal envelope – allowing for processor upgrades – as well as using DDR2 memory and providing AMD-V virtualisation support (but in any case I’ll need to wait a few months for the HP ProLiant DL585G2 before I can buy a socket F-based Opteron 8xxx rack server from HP).

As my virtualisation platform is based on VMware products, I asked VMware which processor architecture they have found to be most performant (especially as the Opteron 8xx does not provide hardware support for virtualisation; although there are doubts about whether ESX Server 3.0 is ready to use such technology – I have read some reports that there will be an upgrade later). Unsurprisingly, VMware are sitting on the fence and will not favour one processor vendor over another (both AMD and Intel are valued business partners for VMware); of course, such comparisons would be subjective anyway but I need to know that I’m making the right purchasing decision. So I asked HP. Again, no-one will give me a written opinion but two HP representatives have expressed similar views verbally – AMD is still in the lead for 4-way servers, at least for the next few months.

There are other considerations too – DL580s feature redundant RAM (after power and disk, memory is the next most likely component to fail and whilst ECC can guard against single-bit failures, double-bit failures are harder to manage); however because the memory controller is integrated in each CPU for an AMD Opteron, there is no redundant RAM for a DL585.

Another consideration is the application load – even virtualised CPUs perform differently under different workloads: for heavily cached applications (e.g. Microsoft SQL Server or SAP), an Intel architecture may provide the best performance; meanwhile CPU- and memory-intensive tasks (e.g. Microsoft Exchange) are more suited to an AMD architecture.

So it seems that it really is “horses for courses” – maybe a split resource pool is the answer, with one pool for heavily cached applications and another for CPU- and memory-intensive applications. What I really hope is that I don’t regret the decision to follow the AMD path in a few months’ time… they used to say that “nobody ever got fired for buying IBM”. These days it seems to be the same story for buying Intel.

UK white pages available online


I don’t know how long this has been available but I just noticed a link on the BT website to the Phonebook – the UK’s residential and business white pages service. Many years back I used to have some sort of dial-up terminal access to a directory lookup system but I didn’t know it was now available on the Internet. Won’t find me though… I’m ex-directory!

VMware Beyond Boundaries virtualisation roadshow


It’s conference season and I’ll be missing the European Microsoft TechEd IT Forum this year for two reasons: firstly, it clashes with my son’s birthday; secondly, it’s in Barcelona, and the last time I attended a TechEd there I found it less well organised than conferences in other European cities (e.g. Amsterdam). There’s also a third reason – it’s unlikely I’ll get budget approval to attend – but the reasons I’ve already given mean I won’t even be trying!

Given my current work commitments, one conference for which I should find it reasonably easy to justify is VMware’s VMworld; however, nice though a trip to Los Angeles might be (actually, there are other American cities that I’d rather visit), I’m expecting to have gained a new son or daughter a few weeks previously and leaving my wife home alone with a toddler and a newborn baby for a week might not be considered a very good idea. With that in mind I was glad to attend the VMware Beyond Boundaries virtualisation roadshow today at London’s Excel conference centre – VMware’s first UK symposium with 500 attendees and 27 sponsors – a sort of mini-VMworld.

Whilst I was a little annoyed that, having arrived in time for the first session at 09:30, VMware were apparently in no hurry to kick off the proceedings, it was a worthwhile day, with presentations on trends in virtualisation and increasing efficiency through virtualisation; live demos of VMware Infrastructure 3; panel discussions on realising business benefits from a virtualisation strategy and recognising when and how to virtualise; real-life virtualisation experiences from BT and Nationwide; and a trade show with opportunities to meet with a number of hardware vendors and ISVs.

I’ll post some more about the most interesting sessions, but what follows is a summary of the key messages from the event.

One feature in the introduction session was a video with a bunch of children describing what they thought virtualisation might mean. Two of the quotes that I found particularly interesting were “virtual kind of means to me that you put on a helmet and then you’re in a different world” and the idea that I might “use it to get money and do [my] homework”. Actually, neither of those quotes are too far from the truth.

Taking the first quote, virtualisation is a different world – it’s a paradigm shift from the distributed server operations model that we’re used to in the “WIntel” space – maybe not so radical for those from a mid-range or mainframe server background, but a new style of operations for many support teams. As for the second quote – it is possible to save money through server consolidation, which leads to savings in hardware expenditure as well as reduced power and heat requirements (one CPU at a higher utilisation uses less power than several lightly-loaded units), and consolidation (through virtualisation) also allows organisations to unlock the potential in underutilised servers and get their “homework” done.

Indeed, according to IBM’s Tikiri Wanduragala, server consolidation is the driver behind most virtualisation projects as organisations try to get more out of their hardware investment, making the most of computing “horsepower” and looking at metrics such as servers per square inch or servers per operator. Realising cost savings is the justification for the consolidation exercise and virtualisation is the enabling technology, but as IDC’s Thomas Meyer commented, he doubted that a conference room would have been filled had the event been billed as a server consolidation event rather than a server virtualisation one. Wanduragala highlighted other benefits too – virtualisation is introducing standardisation by the back door as organisations fight to minimise differences between servers, automate operations and ultimately reduce cost.

Interestingly, for a spokesman from a company whose current marketing message seems to be all about high performance and who is due to launch a (faster) new chip for 4-way servers tomorrow, Intel‘s Richard Curran says that performance per Watt is not the single issue here – organisations also want reliability, and additional features and functionality (e.g. the ability to shut down parts of a server that are not in use), whilst Dell‘s Jeffrey Wartgow points out that virtualisation is more than just a product – it’s a new architecture that impacts on many areas of business. It also brings new problems – like virtual server proliferation – and so new IT policy requirements.

Somewhat predictably for an organisation that has been around since the early days of computing, IBM’s response is that the reactive style of managing management console alerts for PC servers has to be replaced with predictive systems management, more akin to that used in managing mid-range and mainframe servers.

Of course, not every organisation is ready to embrace virtualisation (although IDC claim that 2006 is the year of virtualisation, with 2.1 million virtual servers being deployed, compared with 7 million physical server shipments; and 46% of Global 2000 companies are deploying virtualisation technologies [Forrester Research]). Intel cited the following issues to be resolved in pushing virtualisation projects through:

  • Internal politics, with departments claiming IT real estate (“my application”, “my server”, “my storage”).
  • Skills – getting up to speed with new technologies and new methods (e.g. operations teams that are wedded to spreadsheets of server configuration information find it difficult to cope with dynamically shifting resources as virtual servers are automatically moved to alternative hosts).
  • Justifying indirect cost savings and expressing a total cost of ownership figure.

IDC’s figures back this up with the most significant hurdles in their research being:

  • Institutional resistance (25%).
  • Cost (17%).
  • Lack of technical experience (16%).

The internal politics/institutional resistance issue is one of the most significant barriers to virtualisation deployment. As Tikiri Wanduragala highlighted, the line of business units often hold their own budgets and want to see “their machine” – the answer is to generate new business charging models that reflect the reduced costs of operating a virtual infrastructure. Intel see this reflected in the boardroom, where IT Directors are viewed with suspicion as they ask for infrastructure budgets – the answer is the delivery of IT as a service, and virtualisation is a shared-service infrastructure that can support that model: as Thomas Meyer tagged it, a service-oriented infrastructure to work hand in hand with a service-oriented architecture.

For many organisations, virtualisation is fast becoming the preferred approach for server deployment, with physical servers being reserved for applications and hardware that are not suited to a virtual platform. On the desktop, virtualisation is taking off more slowly as users have an emotional attachment to their device. HP’s Iain Stephen noted that there are two main technologies to assist with regaining control of the desktop – the first is client blades (although he did concede that the technology probably hit the market two years too late) and the second is virtual desktops. Fujitsu-Siemens Computers’ Christophe Lindemann added that simply taking the desktop off the desk with client blades is not enough – the same management issues remain – and that although many organisations have implemented thin client (terminal server) technology, that too has its limitations.

Microsoft’s dynamic systems initiative, HP’s adaptive infrastructure, Dell’s scalable enterprise, IBM’s autonomic computing, Fujitsu-Siemens Computers’ dynamic data centre and IDC’s dynamic IT are all effectively about the same thing – as HP put it, “[to] deliver an integrated architecture that helps you move from high-cost IT islands to lower-cost shared IT assets”. No longer confined to test and development environments, virtualisation is a key enabler for the vision of providing a shared-service infrastructure. According to IDC, 50% of virtual machines are running production-level applications, including business-critical workloads; and 45% of all planned deployments are seen as virtualisation candidates. It’s not just Windows servers that are being virtualised – Linux and Unix servers can be virtualised too – and ISV support is improving – VMware’s Raghu Raghuram claims that 70% of the top ISVs support software deployed in a virtual environment.

Looking at server trends, the majority of servers (62.4%) have a rack-mount form factor, with a significant proportion (26.7%) being shipped as blades and pedestal/tower servers being very much in the minority [source: IDC]. Most servers procured for virtualisation are 2- or 4-way boxes [source: IDC] (although not specifically mentioned, it should also be noted that the VMware licensing model, which works on the basis of pairs of physical processors, lends itself well to dual-core and the forthcoming multi-core processors).

Virtualisation changes business models – quoting Meyer “it is a disruptive technology in a positive sense” – requiring a new approach to capacity planning and a rethink around the allocation of infrastructure and other IT costs; however it is also a great vehicle to increase operational efficiencies, passing innovation back to business units, allowing customers to meet emerging compliance rules and to meet business continuity requirements whilst increasing hardware utilisation.

Summarising the day, VMware’s Regional Director for the UK and Ireland, Chris Hammans, highlighted that virtual infrastructure is rapidly being adopted, the IT industry is supporting virtual infrastructure and the dynamic data centre is here today.

Mobile working… without any devices


Today is not a good day. It’s a fairly normal Tuesday – up at 05:00, leave the house at 05:30 to avoid the traffic and be in the office in London (Docklands) by about 07:00; except that I was hungry, it took 15 minutes to get served at the McDonald’s drive-thru (call that fast food?) and now that I’m at my desk I’ve found that I left my notebook PC at home. Arghhhhh!

I hadn’t realised before that I can’t work without my notebook PC. This office doesn’t have any general use desktop PCs – just hot desks for mobile/notebook users; and my data isn’t on the network either – it’s on my PC and backed up to DVD and external hard disks at home. I feel like I’ve lost a limb (well, if I had really lost a limb I’m sure it would be much, much worse, but I’m sure you get my drift).

I can’t go home to pick it up because south-east England will be snarled up with traffic now, making it a 4-hour round trip (and I have a meeting at 10:30). Luckily, I’ve managed to borrow a notebook from one of the guys in the office for a few hours.

So, for the next 3 hours it’s Microsoft Exchange via Outlook Web Access and picking out tasks that don’t require access to my existing data. Then, after my meeting, I can make the 160-mile round trip to retrieve my lost limb and pick up my work where I left off previously before making my way to the hotel and this evening’s appointments. Now, what was I saying about not trusting Web 2.0 sites to hold my data? Alternatively, maybe I should start to work from home 5 days a week instead of just 2…

An interesting look at the connotations of colour


I’m working on a feature for a local community magazine and one of the items that’s become quite significant is the choice of colours for the artwork. Alex sent me a link to the color in motion site, which describes itself as an animated and interactive experience of colour communication and colour symbolism. The use of colour is equally applicable to print and web work and this site is worth a look if you have five minutes – in my opinion, it’s one of the few sites where the exclusive use of animation is worthwhile.

Changing the iPhoto library location


Apple iPhoto is one of the iLife applications that ships with Mac OS X to facilitate importing, organising, editing and sharing digital photos. I use Adobe Photoshop for my digital photo work but the integration of iPhoto with Apple Front Row was enough to make me want to look at iPhoto a bit more closely.

By default, iPhoto copies digital photos to a new location in order to work on them, leaving the originals intact (sounds like a good idea to me), but because Mac upgrades are horrendously expensive, my Mac Mini only has an 80GB hard disk (and I take a lot of photos), so I keep my data on a 320GB external hard disk. Unfortunately, there’s nowhere in the application preferences to set the library location, but I did find a way around this: after deleting the existing iPhoto library and relaunching the application, I was prompted to create a new library.

Then, selecting a location on my external hard disk allowed me to set up a new library exactly where I wanted it.
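
For what it’s worth, another approach I’ve seen suggested (which I haven’t tried myself) is to move the existing library to the external disk and leave a symbolic link in its place, so that iPhoto carries on using its default path – the volume name below is an example, not my actual setup, and iPhoto should be closed first:

mv ~/Pictures/iPhoto\ Library /Volumes/External/iPhoto\ Library     # move the library to the external disk
ln -s /Volumes/External/iPhoto\ Library ~/Pictures/iPhoto\ Library  # leave a symlink where iPhoto expects the library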

Working with Nikon raw (.NEF) images


Ever since I started taking photos digitally, I’ve been meaning to investigate the use of raw image capture as it offers much more flexibility for off-camera editing. Basically, a raw image is the unprocessed data from the camera sensor which most cameras then process to produce a JPEG image; however because image sensors vary, so do raw image formats. Thankfully, Nikon’s .NEF format is one of the common ones.

I wrote a post last year about the Microsoft raw image thumbnailer and viewer for Windows XP but I’m still shooting JPEGs as most of my photography these days is family snapshots. Meanwhile, I encouraged Stuart to buy a Nikon D50 digital SLR and he recently posted some information about digital camera raw support for .NEF files in Adobe Photoshop CS 2 that may come in handy one day.