I’m not normally that bothered about iTunes updates, but iTunes 7 is a big improvement.
Two of the new features are gapless playback (thank you Apple) – touted as a big boon for classical music lovers but also pretty good for people like me who listen to dance mixes (I may be a 34-year-old family man but there’s an Ibizan clubber trying to escape from inside me) – and automatic download of album art, including a cover browser view to flip through albums jukebox style.
Unfortunately the iTunes album art service obviously has some holes, because most of my collection is still lacking album art. Short of scanning CD inlays and applying the artwork to the tracks manually, there’s not a great deal that can be done, but Mac OS X 10.4 (Tiger) users can make use of an Amazon Album Art Widget.
This handy utility searches Amazon’s Austrian, Canadian, French, German, Japanese, UK or US sites and finds one or more matches for the currently playing song, which can then be applied as artwork for selected tracks, the currently playing track or the currently playing album. As most of my CDs are from UK or Australian sources (and I suspect iTunes is very US focused) this is doing a great job of filling in the gaps, even if the quality of Amazon’s album art sometimes leaves a bit to be desired. Of course, much of my collection will have long since been deleted from catalogues, but I guess I’m now getting up towards the 80% mark on artwork completeness, which vastly improves the view of my cover browser.
I recently switched my primary home computer to a Mac but I also use Windows and Linux. I don’t consider myself to be a member of the Mac community, or the Linux community, or the Windows community – because (based on many forum posts and blog comments that I read) all of these “communities” are full of people with bigoted views that generally boil down to “my OS is better than your OS” or “Duh… but why would you want to use that?”.
Based largely on Apple’s advertising though, one of the things that I did assume with Mac OS X was that I’d be secure by default. Nope. It turns out that’s not true, as there is an obscure flaw in Mac OS X (surely not?!) whereby a malformed installer package can elevate its privileges and become root. After running Windows for 16 years I’m used to these sorts of flaws, but surely His Jobsness’ wonderful creation is above such things!
Frankly I don’t care that Mac OS X is flawed. So is Linux. So is Windows. So is anything with many millions of lines of code – open or closed source – but I thought better of Apple because I believed that they would keep me safe by default. It’s well known that running Windows XP as anything less than a Power User is difficult and that’s one of the many improvements in Windows Vista. All the Linux installers that I’ve used recently suggested that I create a non-root user as well as root, but the OS X installer is happy for me to breeze along and create a single administrator account without a word of further advice. I appreciate that an OS X administrator is not equal to root, but nevertheless it’s a higher level of access than should be used for daily computing, and because I didn’t know any better (I’m just a dumb switcher) I didn’t create a standard user account (until today).
I read a lot of Mac and Linux zealots singing the praises of their operating systems and saying how Windoze is a haven for spyware and viruses. Well, it’s time to wake up and smell the coffee – as Mac OS X gains in popularity (I heard something about the new MacBooks recently taking a 12% share of all new laptop sales) Mac users will have to start thinking about spyware, viruses and the like. Now is the time to practise safe computing – whatever the operating system – and with most users running as administrators, this could quickly become a major issue.
I once worked on a customer project where the outgoing IT Director (for whom this was to be his swansong) wanted the project to be named Cuba (a rival organisation’s project was Havana so this had to be bigger – what’s bigger than the city… the whole island!). This was later given an associated acronym of Common User Based Architecture – it was basically their Year 2000 desktop and server upgrade project – but we knew it as the S**** H****** Information Technology Experience (where SH was the IT Director’s initials). Then a few weeks back, I was working with a guy who, it turned out, had been working on the same customer account at the same time but for a competitor – they apparently referred to Project Cuba as Can’t Upgrade Bugger All!
Another colleague recalled some similarly dubious acronyms – he once had to support a system called Customer Requests And Problems and a common term used in one of his previous organisations was Failed Under Continuous Testing.
I’d love to hear from anybody else with poorly named projects/products – leave a comment on this post if you have some you’d like to share!
A few weeks back, we had new windows and doors fitted to our house. As no-one could ever find the old doorbell and I didn’t want to drill holes in the frame of our pristine new plastic to mount the button in a more obvious location, I picked up a wireless doorchime kit from B&Q yesterday.
Our new doorbell is set to play a simple “ding dong” but sometimes it’s been playing a “Westminster” chime, seemingly randomly. Very strange. I had been concerned that we could experience interference either with or from our wireless network, DECT phones, microwave oven (hopefully not) or baby monitor but everything else seems to be working as expected.
One possible cause for concern was that the instructions said that the device was not suitable for PVCu doors because the metal inside the door would affect the signal range. Well, clearly it works from across the street as it turns out that a neighbour’s button is setting off our chime! Strangely, our button doesn’t seem to set off their chime but thankfully I can change the frequency that we are using – and I thought keeping up a reliable WiFi connection from the office to the living room was hard enough.
Just give me a length of copper cable… at least I know where that starts and ends!
I was flicking through the copy of IT Week that arrived on my doormat a few minutes ago and Martin Courtney’s article on unwanted but serviceable IT equipment (eBay rejects worthless WiFi) struck a chord with me (as someone who has many items of old IT equipment in the garage).
Anyone prepared to make me an offer for a couple of old PCs, a 14″ CRT monitor, some 802.11b WiFi kit, 512MB of nearly new RAM from a Mac Mini, a perfectly good HP iPAQ that never gets used, or an APC UPS in need of a new battery? No? Thought not.
su -
cd RPMs
rpm -ivh *.rpm
cd desktop-integration/
rpm -ivh openoffice.org-redhat-menus-2.0.3-2.noarch.rpm
After logging out and in again (or by starting a new GUI instance from another console session using startx -- :1), the icons should appear on the GNOME Applications menu (in the Office group). Note the use of the Red Hat desktop integration package, which, perhaps unsurprisingly, seems to be fine on Fedora too.
It’s no secret that I’m no fan of Java applications, but it’s also a necessary evil that I generally need to have installed on my PC. I had a few problems getting it working on my Linux (Fedora Core 5) PC though – this is what I had to do.
The Unofficial Fedora FAQ got me started; however, as I didn’t want the whole Java development kit (JDK) installed – just the Java runtime environment (JRE) – I downloaded the RPM installer from the Sun Java download site.
I’m not sure why the link is from the mozilla plugins folder, not from the Firefox plugins folder (/usr/lib/firefox-1.5.0.x/plugins/, as I would have expected from a LinuxQuestions.org forum post on the subject) but after a browser restart, I was able to successfully test the Java installation, which was correctly identified as Sun Microsystems Inc. Java version 1.5.0_08 on Linux OS version 2.6.17-1.2174_FC5.
Over the last few years, I’ve heard a lot about blade servers – mostly anecdotal – and mostly commenting that they produce a lot of heat so for all the rack space saved through their use, a lot of empty space needs to be left in datacentres that weren’t designed for today’s high-density servers. In recent weeks I’ve attended a number of events where HP was showcasing their new c-class blade system and it does look as if HP have addressed some of the issues with earlier systems. I’m actually pretty impressed – to the point where I’d seriously consider their use.
HP’s c-class blade system was introduced in June 2006 and new models are gradually coming on stream as HP replaces the earlier p-class blades, sold since 2002 (and expected to be retired in 2007).
The new c-class enclosure requires 10U of rackspace, which can be configured with up to 16 half-height server blades, 8 full-height server blades or a combination of the two. Next week, HP will launch direct-attached storage blades (as well as new server blades) and next year, they expect to launch shared storage blades (similar to the StorageWorks Modular Storage Array products). With the ability to connect multiple blade enclosures to increase capacity, an extremely flexible (and efficient) computing resource pool can be created.
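To put those numbers in context, here’s some back-of-an-envelope arithmetic for a standard 42U rack (my own sums, not HP’s figures, and ignoring any space needed for other equipment):

```shell
# Rough density sums for a standard 42U rack filled with 10U
# c-class enclosures, each taking up to 16 half-height blades:
rack_u=42; enclosure_u=10; half_height_per_enclosure=16
enclosures=$(( rack_u / enclosure_u ))
echo "Enclosures per rack: $enclosures"
echo "Half-height blades per rack: $(( enclosures * half_height_per_enclosure ))"
```

Four enclosures (40U) fit in a 42U rack, for up to 64 half-height blades – assuming, of course, that the datacentre can actually supply the power and cooling for that sort of density.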
At the back of the enclosure are up to 10 cooling fans, 6 power connections (three-phase connections are also available), 1 or 2 management modules, and up to 8 interconnect modules (e.g. Ethernet or fibre-channel pass-through modules or switches from HP, Cisco, Brocade and others).
There are a number of fundamental changes between the p-class and c-class blade systems. Immediately apparent is that each blade is physically smaller. This has been facilitated by moving all cooling fans off the blade itself and into the enclosure, as well as by the move from Ultra320 SCSI hard disks to small form-factor serial-attached SCSI (SAS) hard disks. Although the SAS disks currently have reduced storage capacity (compared with Ultra320 SCSI), a 146GB 10,000RPM SAS disk will be launched next week and 15,000RPM disks will be released in the coming months. Serial ATA (SATA) disks are also available, but not recommended for 24×7 operation. The new disks use a 2.5″ form factor and weigh significantly less than 3.5″ disks; consequently they require about half as much power to provide equivalent performance.
HP are keen to point out that the new cooling arrangements are highly efficient, with three separate airflows through the enclosure for cooling blades, power supplies and communications devices. Using a parallel, redundant and scalable (PARSEC) architecture, the airflows include back-flow preventers and shut-off doors such that if a component is not installed, then that part of the enclosure is not cooled. If the Thermal Logic control algorithm detects that management information is not available (e.g. if the onboard management module is removed) then the variable speed Active Cool fans will fail open and automatically switch to full power – it really is impressive to see just how much air is pulled through the system by the fans, which are not dissimilar to tiny jet engines!
Power is another area where improvements have been made and instead of using a separate power supply module, hot-swap power supply units are now integrated into the front of the enclosure. The Thermal Logic system will dynamically adjust power and cooling to meet energy budgets such that instead of running multiple supplies at reduced power, some supplies are run close to full capacity (hence more efficiently) whilst others are not used. If one power supply fails, then the others will take up the load, with switching taking around 1ms.
Each blade server is a fully-functional HP ProLiant industry standard server – in fact the BL model numbering system mirrors the ML and DL range, adding 100 to the DL number, so a BL480c blade is equivalent to a DL380 rack-mount server (which itself adds 10 to the ML number – in this case an ML370).
Looking inside a blade, it becomes apparent how much space in a traditional server is taken up by power and cooling requirements – apart from the disks at the front, most of the unit consists of the main board with CPUs and memory. A mezzanine card arrangement is used to provide network or fibre-channel ports, which are connected via HP’s Virtual Connect architecture to the interconnect modules at the rear of the enclosure. This is the main restriction with a blade server – if PCI devices need to be employed, then traditional servers will be required; however each half-height blade can accommodate two mezzanine cards (each up to 2 fibre-channel or 4 Gigabit Ethernet ports) and a full-height blade can accommodate three mezzanine cards. Half-height blades also include 2 network connections as standard and full-height have 4 network connections – more than enough connectivity for most purposes. Each blade has between 2 and 4 hard disks and the direct attached storage blade will provide an additional 6 drives (SAS or SATA) in a half-height blade.
One of the advantages of using servers from tier 1 OEMs has always been the management functionality that’s built in (for years I argued that Compaq ProLiant servers cost more to buy but had a lower overall cost of ownership compared with other manufacturers’ servers) and HP are positioning the new blades in a similar way – that of reducing the total cost of ownership (even if the initial purchase price is slightly higher). Management features within the blade system include the onboard administrator console, with an HP Insight display at the front of the enclosure and up to two management modules at the rear in an active-standby configuration. The Insight display is based on technology from HP printers and includes a chat function, e.g. for a remote administrator to send instructions to an engineer (predefined responses can be set or the engineer can respond in free text, but with just up, down and enter buttons it would take a considerable time to do so – worse than sending a text message on a mobile phone!).
Each server blade has an integrated lights out (iLO2) module, which is channelled via the onboard administrator console to allow remote management of the entire blade enclosure or the components within it – including real-time power and cooling control, device health and configuration (e.g. port mapping from blades to interconnect modules), and access to the iLO2 modules (console access via iLO2 seems much more responsive than previous generations, largely due to the removal of much of the Java technology). As with ML and DL ProLiant servers, each blade server includes the ProLiant Essentials foundation pack – part of which is the HP Systems Insight Manager toolset; with further packs building on this to provide additional functionality, such as rapid deployment, virtual machine management or server migration.
The Virtual Connect architecture between the blades and the interconnect modules removes much of the cabling associated with traditional servers. Offering a massive 5Tbps of bandwidth, the backplane needs to suffer four catastrophic failures before a port will become unavailable. In addition, it allows for hot spare blades to be provisioned, such that if one fails, then the network connections (along with MAC addresses, worldwide port numbers and fibre-channel boot parameters) are automatically re-routed to a spare that can be brought online – a technique known as server personality migration.
In terms of the break-even point for cost comparisons between blades and traditional servers, HP claim that it is between 3 and 8 blades, depending on the connectivity options (i.e. less than half an enclosure). They also point out that because the blade enclosure also includes connectivity, it’s not just server costs that need to be compared – the blade enclosure also replaces other parts of the IT infrastructure.
Of course, all of this relates to HP’s c-class blades and it’s still possible to purchase the old p-class HP blades, which use a totally different architecture. Other OEMs (e.g. Dell and IBM) also produce blade systems and I’d really like to see a universal enclosure that works with any OEM’s blade – in the same way that I can install any standard rack-format equipment in (almost) any rack today. Unfortunately, I can’t see that happening any time soon…
A few weeks back I had the opportunity to attend a presentation on application virtualisation using the Softricity SoftGrid platform. Server virtualisation is something I’m becoming increasingly familiar with, but application virtualisation is something else entirely and I have to say I was very impressed with what I saw. Clearly I’m not alone, as Microsoft has acquired Softricity in order to add application virtualisation and streaming to its IT management portfolio.
The basic principle of application virtualisation is that, instead of installing an application, an application runtime environment is created which can roam with a user. The application is packaged via a sequencer (similar to the process of creating an MSI application installer) and broken into feature blocks which can be delivered on demand. In this way the most-used functionality can be provided quickly, with additional functionality in another feature block only loaded as required. When I first heard of this I was initially concerned about the potential network bandwidth required for streaming applications; however feature blocks are cached locally and it is also possible to pre-load the cache (either from a CD or using tools such as Microsoft Systems Management Server).
From a client perspective, a user’s group membership is checked at login time and appropriate application icons are delivered, either from cache, or by using the real-time streaming protocol (RTSP) to pull feature blocks from the server component. The virtual application server uses an SQL database and Active Directory to control access to applications based on group membership and this group model can be used to stage rollout of an application (again, reducing the impact on the network by avoiding situations where many users download a new version of an application at the same time).
Not all applications are suitable for virtualisation. For example, very large applications used throughout an organisation (e.g. Microsoft Office) may be better left in the base workstation build; however the majority of applications are ideal for virtualisation. The main reason not to virtualise is if an application provides shell integration that might negatively impact another application when it is not present – for example, the ability to send an e-mail from within an application may depend on Microsoft Outlook being present and configured.
One advantage of the virtualised application approach is that the operating system is not “dirtied” – because each package includes a virtual registry and a virtual file system (which run on top of the traditional “physical” registry and file system), resolving application issues is often a case of resetting the cache. This approach also makes it possible to run multiple versions of an application side-by-side – for example testing a new version of an application alongside the existing production version. Application virtualisation also has major advantages around reducing rollout time.
The Microsoft acquisition of Softricity has so far brought a number of benefits, including the incorporation of the ZeroTouch web interface for self-provisioning of applications into the core product, and a reduction in client pricing. There is no server pricing element, making this a very cost-effective solution – especially for Microsoft volume license customers.
Management of the solution is achieved via a web service, allowing the use of the SoftGrid Management Console or third party systems management products. SoftGrid includes features for policy-based management and application deployment as well as usage tracking and compliance.
I’ve really just skimmed the surface here but I find the concept very interesting and can’t help but feel the Microsoft acquisition will either propel this technology into the mainstream or (less likely) kill it off forever. In the meantime, there’s a lot of information available on the Softricity website.
Leo Laporte’s TWiT Podcast Network has some really good podcasts including This Week in Tech (TWiT). More recently, the network has launched MacBreak Weekly and (I understand) will soon launch a Windows podcast hosted by Paul Thurrott. Of course, some of the information is subjective and must be taken with a pinch of salt – it can also be very US-centric (this is helped when there are European guests, e.g. Wil Harris from Bit-Tech); however MacBreak Weekly annoyed me greatly as I caught up on a few podcasts over the last couple of days.
Whilst discussing the Mac Pro (in episode 3), there was a comment about a lot of software not being optimised for multi-processor configurations and the reply came back (and I quote): “you mean that Apple actually built a computer that’s ahead of its time?”. No! I can accept that Apple may well have built a computer that offers more processing power than many users could use; and to Apple’s credit, all Macs now have at least two processor cores (except any Core Solo Mac Minis that are still being sold – I think even they have hyperthreading). But both of the other major PC operating system platforms (Linux and Windows) have supported multi-processor machines for some time now – if Mac OS X is not able to make full use of the machines then that’s a fault of the operating system designers at Apple and they need to get up to speed – quickly. I’m no developer, but the need to rewrite applications to run on an Intel platform instead of the older PowerPC architecture was given as the reason developers have been too distracted to write applications that use the available processing power. That doesn’t stack up for me. I’m casting my mind back 15 years now, but I seem to remember from the operating system internals module that made up a part of my degree that it is the task of the operating system scheduler to assign processor time to execution threads – so that’s Mac OS X’s job, not the applications’. Of course, if the applications aren’t threaded then there’s not much that the operating system can do about it, but even applications for a single processor should be multithreaded. Shouldn’t they?
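The scheduler point is easy to illustrate with ordinary shell background jobs (a rough analogy – these are processes rather than threads, but the principle is the same): the operating system can only run work in parallel if the application has split it into independent units in the first place.

```shell
# Two independent tasks: the kernel's scheduler can run them
# concurrently, so the whole thing takes about 1 second, not 2.
# A single sequential task would give the scheduler nothing to spread
# across cores, no matter how many are available.
(sleep 1; echo "task A done") &
(sleep 1; echo "task B done") &
wait
echo "both finished"
```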
They then went on to talk about striping four 750GB SATA drives to give 3TB of “super fast storage performance”. Hmm… it sounds very risky to me. SATA drives are okay for PC use but not designed for 24×7 operation; however, regardless of the disk hardware in use, RAID 0 (striping) offers no fault tolerance at all. Zero. Nada. RAID 5 would work (but is a bit slow for writing) and would reduce the available disk space to about 2.25TB (RAID 6, with its double parity, would leave 1.5TB). If the Mac Pro’s RAID controller supports it, the safest solution (whilst remaining performant) would be RAID 1+0, giving 1.5TB of usable space, mirrored across two disks and then striped. RAID 0 might be fast but you’d better hope it’s being backed up somewhere else – and 3TB backups are not very easy to manage!
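For the record, here’s the arithmetic behind those capacity figures (simple sums of my own – performance is a separate question entirely):

```shell
# Usable capacity of four 750GB drives under different RAID levels:
disks=4; size_gb=750
echo "RAID 0 (striping):          $(( disks * size_gb )) GB"        # no redundancy at all
echo "RAID 5 (single parity):     $(( (disks - 1) * size_gb )) GB"  # survives one disk failure
echo "RAID 1+0 (mirror + stripe): $(( disks / 2 * size_gb )) GB"    # survives one failure per mirrored pair
```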
In another section, the panel was amused that PC Magazine would cover a story about why 91% of Mac users are satisfied with their product (as Apple tops a user satisfaction survey). Get over it – Mac OS X is just an operating system and Macs are personal computers (they always have been). If that’s a bit literal then Intel Macs are definitely just PCs! Now, if Windows XP Magazine or Linux Format had covered this story then I could understand the amusement, but PC Magazine and Personal Computer World should be covering Linux, Mac OS and Windows stories (in my experience Personal Computer World magazine certainly does) as well as those relating to any other operating systems that run on PC hardware.
In one episode the guys were suggesting that there is no reason to buy a PC as a Mac can do it all, making it a better PC than a PC… but hang on guys – previously you were making a distinction between Macs and PCs – you can’t have it both ways! And as much as I love Apple hardware, a black MacBook sounds pretty expensive to me. Even the MacPress has commented that other PC manufacturers have been making black notebook PCs for many years now (and they don’t charge a £90 premium to have it in black). I’d love a MacBook but my IBM ThinkPad is still my favourite (and best built) notebook PC.
Another item that riled me was a comment that Macs have 5% of the PC market share but not 5% of the viruses – duh! Hackers, virus writers, and other miscreants like kudos. No-one gets kudos for writing a virus on something obscure, but as the Mac gains a greater market share it will be the target for more malware – especially as the MacPress continues to stress that Windows on a Mac is subject to the same security concerns as Windows on any other PC (true) whilst stressing that running OS X on a Mac is safe (misleading… and unlikely to remain true indefinitely). All PC users should practise safe computing, regardless of the operating system.
In all, MacBreak Weekly disappointed me with a general Mac-elitist view. Sure, I recently switched to using a Mac, but I run other OSs too (I’m writing this on my Fedora notebook). Mac OS X is good at some things, Linux is good at others and, believe it or not, Windows is good at some things too; Windows Vista and Windows Longhorn Server may be running late but Windows Server 2003 is still a great server OS. The trouble is that there are still too many “my OS is better than your OS” discussions.