Tag: Useful Websites

  • Running Red Hat Enterprise Linux without a subscription

    I’ve written previously about why open source software is not really free (as in monetary value), just free (as in freedom). Companies such as Red Hat and Novell (SUSE) make their money from support and, during Red Hat Enterprise Linux (RHEL) setup, it is “strongly recommended” that the system is set up for software updates via Red Hat Network (RHN), citing the benefits of an RHEL subscription as:

    • “Security and updates: receive the latest software updates, including security updates, keeping [a] Red Hat Enterprise Linux system updated and secure.
    • Downloads and upgrades: download installation images for Red Hat Enterprise Linux releases, including new releases.
    • Support: Access to the technical support experts at Red Hat or Red Hat’s partners for help with any issues you might encounter with [a] system.
    • Compliance: Stay in compliance with your subscription agreement and manage subscriptions for systems connected to [an] account at http://rhn.redhat.com/

    You will not be able to take advantage of these subscription privileges without connecting [a] system to Red Hat Network.”

    Red Hat Enterprise Linux 5 installer
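
    Incidentally, registration doesn’t have to happen at install time. As a rough sketch (assuming an RHEL 5 system and an active subscription – the exact tooling varies between releases), a system can be connected to RHN from the command line after installation:

    # Register the system with Red Hat Network after installation
    # (prompts for RHN account credentials; requires an active subscription):
    rhn_register

    # Once registered, updates are delivered through yum as usual:
    yum update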

    Take a look at RHEL pricing and you’ll see that it’s actually quite expensive – a standard subscription for a machine with up to 2 processor sockets, including 1 year’s 12×5 telephone support, 1 year of web access and unlimited incidents, is €773.19 [source: Red Hat Online Shop, Europe]. That is not something that I can afford and, even though Red Hat gave me a copy of RHEL 5 as part of my recent training, it only includes a 30-day subscription. Now Red Hat has launched Red Hat Exchange – a new service whereby third-party open source software solutions are purchased, delivered and supported via a single, standardised Red Hat subscription agreement, with consolidated billing covering the complete application stack. It’s a great idea, but the pricing for some of the packages makes using proprietary alternatives seem quite competitive.

    In fairness to Red Hat, they sponsor the Fedora Project for users like me, who could probably make do with a community-supported release (Fedora is free for anyone to use, modify and distribute) but there is another option – CentOS (the community enterprise operating system), which claims to be:

    “An Enterprise-class Linux Distribution derived from sources freely provided to the public by a prominent North American Enterprise Linux vendor. CentOS conforms fully with the upstream vendor’s redistribution policy and aims to be 100% binary compatible. (CentOS mainly changes packages to remove upstream vendor branding and artwork.) CentOS is free.”

    Hmm… so which North American Enterprise Linux vendor might that be then? ;-)

    So what about RHEL systems for which the subscription has expired? I’m not sure what the legal standpoint is, but there is a way to receive updated software using an unregistered copy of RHEL. Firstly, additional repositories such as Dag Wieers’ RPMforge can be configured – there are even RPMs available to set up the correct repository!
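
    As a sketch, setting up the repository looks something like this (the package version and URLs below are illustrative – check the RPMforge site for current details):

    # Import Dag Wieers’ GPG key so that package signatures can be verified:
    rpm --import http://dag.wieers.com/rpm/packages/RPM-GPG-KEY.dag.txt

    # Install the rpmforge-release package, which drops a .repo file into
    # /etc/yum.repos.d/ (the version number here is an example only):
    rpm -Uvh http://packages.sw.be/rpmforge-release/rpmforge-release-0.3.6-1.el5.rf.x86_64.rpm

    # The new repository is then available to yum:
    yum --enablerepo=rpmforge install htop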

    Then, there are the various RPM search sites on the ’net. I’ve found that, using these, even if there is no appropriate RHEL or generic RPM available, there is often a CentOS RPM (which usually still carries the el5 identifier in the filename). These should be safe to install on an RHEL system and, in those rare cases when a bleeding-edge package is required, there may well be a Fedora version that can be used. So it seems that I can continue to run a Linux distribution that is recognised by most software vendors, even when my RHN subscription expires.

  • Adding a meaningful description to web pages

    One of the things that I noticed whilst reviewing the Google results for this site was that the description for every page was taken from the first text available on the page – mostly the alternative text for the masthead photo (“Winter market scene from the small town of Porjus in northern Sweden – photograph by Andreas Viklund, edited by Alex Coles.”):

    Screenshot showing duplicate descriptions

    Clearly, that’s not very descriptive: it won’t help people to find my site or link to me, and it does nothing for the search engine placement of my pages, so I needed to get a decent description listed for each page.

    The WordPress documentation includes a page on meta tags in WordPress, including an explanation as to why they aren’t implemented by default (my template did include a meta description for each page which included the weblog title and tagline though). Even though meta tags are not a magic solution to search engine placement, I wanted to find a way to add a meaningful description for each page using <meta name="description" content="description of content" /> and also <meta name="keywords" content="page context" /> (although it should be noted that much of the available advice indicates that major search engines ignore the keywords tag due to abuse). Fortunately there is a WordPress plugin which is designed to make those changes – George Notaras’ Add-Meta-Tags. There’s plenty of speculation as to whether or not Google actually uses the description meta tag but recent advice seems to indicate that it is one of many factors involved in the description shown in search results (although it will not actually affect positioning).

    I already had meta tags in place for content-type, robots, and geolocation but I added some more that I was previously using HTML comments for:

    <meta http-equiv="content-language" content="en-gb" />
    <meta name="author" content="Mark Wilson" />
    <meta name="generator" content="WordPress" />
    <meta name="publisher" content="markwilson.it" />
    <meta name="contact" content="webmaster@markwilson.co.uk" />
    <meta name="copyright" content="This work is licenced under the Creative Commons Attribution-Non-Commercial-Share Alike 2.0 UK: England & Wales License. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-sa/2.0/uk/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA" />

    Incidentally, a comprehensive list of meta tags and an associated FAQ is available at Andrew Daviel’s Vancouver webpages.

    After checking back a couple of weeks later, the same search returns something far more useful:

    Screenshot showing the improved description

    Unfortunately my PageRank has dropped too, and it’s possible that the duplicate entries for http://www.markwilson.co.uk/ and https://www.markwilson.co.uk/blog/ are causing the site to be penalised – Google’s Webmaster guidelines say “don’t create multiple pages, subdomains, or domains with substantially duplicate content”. The presence of those duplicate entries is actually a little odd, as checking the server headers for http://www.markwilson.co.uk/ reveals an HTTP 301 (moved permanently) response, redirecting to https://www.markwilson.co.uk/blog/. Of course, it could be down to something entirely different, as PageRank is updated infrequently (there’s more information and links to some PageRank analysis tools at RSS Pieces, although I use Page Rank Checker) and there have been a lot of changes to this site of late… only time (and building the volume of backlinks to https://www.markwilson.co.uk/blog/) will tell.
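
    For anyone who wants to repeat the server header check, curl will do the job (any HTTP client that can display response headers would work equally well):

    # Fetch the headers only – the 301 status and Location header confirm
    # the permanent redirect described above:
    curl -I http://www.markwilson.co.uk/

    # HTTP/1.1 301 Moved Permanently
    # Location: https://www.markwilson.co.uk/blog/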

  • Defragmenting a Mac OS X hard disk

    Apple claims that OS X is the world’s most advanced operating system. If that’s the case, then why does it lack basic system utilities? That’s a rhetorical question, but I’ve written before about OS X’s lack of a decent backup utility and today (including most of tonight – hence the time of this post) I fell foul of its inability to defragment hard disks.

    “ah… but you don’t need a defragmentation utility with OS X because it automatically defragments as it goes.”

    [insert name of just about any Macintosh support forum here]

    Wrong.

    OS X defragments files, but not the disk itself (for an explanation of what that really means and whether it’s really necessary, refer to Randy B Singer’s Mac OS X maintenance and troubleshooting guide). This inability to perform what should be a basic operating system function (even Windows has the capability) has cost me a lot of time today. In fairness, there is a third-party utility available (if I were prepared to pay for it) called iDefrag (Paul Stamatiou has a review of iDefrag on his site) but, in the end, I used Mike Bombich’s Carbon Copy Cloner to clone my hard disk to my backup drive, make that bootable, repartition my system disk, and then clone the drive back again – a pretty long-winded approach to defragmentation.
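
    For anyone who would rather script that round trip than drive Carbon Copy Cloner by hand, something similar can be sketched with Apple’s built-in asr (Apple Software Restore). This is an untested outline with assumed volume names, not a recipe – and the asr syntax varies slightly between OS X releases (see man asr):

    # 1. Clone the system volume to the backup drive (erases the backup volume):
    sudo asr restore --source "/Volumes/Macintosh HD" --target /Volumes/Backup --erase

    # 2. Boot from the backup drive (System Preferences > Startup Disk), then
    #    erase the original system volume so the files can be laid down afresh:
    diskutil eraseVolume "Journaled HFS+" "Macintosh HD" "/Volumes/Macintosh HD"

    # 3. Clone everything back to the freshly-formatted volume:
    sudo asr restore --source /Volumes/Backup --target "/Volumes/Macintosh HD" --erase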

    Still, every cloud has a silver lining… at least this process led me to discover the Mac OS X maintenance and troubleshooting guide that I referred to earlier… well worth a read.

  • The elements of meaningful XHTML

    I’m really trying to use good, semantic XHTML and CSS on this website but sometimes it’s hard work. Even so, the validation tools that I’ve used have helped me to increase my XHTML knowledge and most things can be tweaked – I’m really pleased that this page currently validates as both XHTML 1.1 and CSS 2.

    Last night I came across an interesting presentation by Tantek Çelik (of box model hack fame) that dates back to the 2005 South by Southwest (SXSW) interactive festival and discusses the elements of meaningful XHTML. Even though the slide deck is no substitute for hearing the original presentation, I think it’s worth a look for a few reasons:

    • It taught me about some XHTML elements that I wasn’t familiar with (e.g. <address>) and others I’m just getting to grips with (e.g. <cite>).
    • It highlighted some techniques which abuse the intended meaning for XHTML elements and how the same result should be achieved using semantically correct XHTML.
    • It introduced me to extending XHTML with microformats for linked licenses, social relationships, people, events, outlines and even presentations (thanks to the links provided by Creative Commons and the XHTML Friends Network, I already use linked licenses and social relationships on this site but now I understand the code a little better).
    • It reinforced that I’m doing the right thing!
  • Publishing WordPress content on the mobile web

    A few nights back, I was reading a .net magazine article about developing websites enabled for mobile content.

    As my blog is written primarily for technical people, it seems logical to assume that a reasonable proportion of its readers could make use of access from a mobile device, especially as the magazine article’s author, Brian Fling, believes that:

    “[the mobile web] will revolutionize the way we gather and interact with information in the next three years”

    Web 2.0 Expo: From Desktop to Device: Designing the Ubiquitous Mobile Experience

    Basically, the catalyst for this is a combination of increasing network speeds and expanding mobile services, together with the falling cost of data provision.

    It seems that there are basically two schools of thought when it comes to designing mobile content for the web: some (most notably the W3C) believe that content should be device-agnostic. Whilst that approach is perfectly laudable (a mobile browser is, after all, just another form of browser), others believe that the whole point of the mobile web is that device-specific functionality can be used to provide services that wouldn’t otherwise be available (e.g. location-based services).

    Brian’s .net magazine article explains that there are four major methods of mobile web publishing:

    1. Small screen rendering
    2. Programmatically reformatting content
    3. Handheld style-sheets
    4. Mobile-specific site.

    As we work down the list, each of these methods is (potentially) more complex to implement, but should also deliver a faster experience on the device. Luckily for WordPress users like me, Alex King has written a WordPress Mobile Edition plugin, which applies a different stylesheet for mobile browsers, publishing a mobile-friendly site. Using the Opera Mini live demo to simulate a mobile browser, this is what it did for my site:

    This website, viewed in a simulated mobile phone browser
    The mobile-optimised version of this website, viewed in a simulated mobile phone browser

    The first image shows the content as it would be rendered using the default small screen rendering – not bad, but not exactly ideal on a small screen – whilst the second image uses the WordPress Mobile Edition plugin to display something more suitable for the mobile web. Not only is the display much simpler and easier to navigate on a handset, but the page size has dropped from 28KB to 1KB. Given that, I was a bit alarmed when I used the ready.mobi site to generate a report for this site, as it only scored 3 out of 5 and was labelled as one that “will possibly display poorly on a mobile phone”. Even so, the user experience on my relatively basic (by modern standards) Nokia 6021 was actually quite good (especially considering that the device is not a smartphone and it failed the handheld media type test), whereas viewing the normal (non-mobile) version generated a “memory full” error.
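
    Incidentally, the page weight difference is easy to measure from the command line. Assuming (as seems likely) that the plugin switches on the browser’s user agent string, requesting the page with a mobile-style user agent should return the slimmed-down version – the user agent string below is illustrative:

    # Page size served to a desktop browser:
    curl -s https://www.markwilson.co.uk/blog/ | wc -c

    # Page size served when masquerading as a mobile handset:
    curl -s -A "Nokia6021/2.0 Profile/MIDP-2.0 Configuration/CLDC-1.1" https://www.markwilson.co.uk/blog/ | wc -c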

    So, it seems that preparing a WordPress site for the mobile web is actually pretty simple. I have a couple of tweaks to make in order to improve the ready.mobi test results (quick fixes ought to include support for access keys and working out why the page heading is being tagged as <h3> when the standard site uses an <h1> tag) but there is certainly no need for me to develop a separate site for mobile devices, which is just as well as it’s taking me ages to finish the redevelopment of the site (and I can save myself a few quid by not registering the markwilson.mobi domain)!

    Links
    The following links may be useful to anyone who is looking at developing content for the mobile web:

    It may also be worth stopping by at Keni Barwick’s blog on all things mobile.

  • Coding horror

    I just stumbled upon Jeff Atwood’s Coding Horror blog and it’s very interesting reading (even for those of us who write very little code). The article that I found was commenting on Jakob Nielsen’s latest tome on web usability. Although Nielsen makes some valid points, the comments are worth a read as they highlight some of the real compromises that website designers and website developers have to make.

    I’m sure I could lose many hours reading Jeff’s writing, which all seems well-informed, to the point and interesting… these were just a few of the posts that grabbed my attention this afternoon:

    • When in doubt, make it public looks at how Web 2.0 is really just creating websites out of old Unix commands and how the new business models are really about taking what was once private and making it public!
    • SEOs: the new pornographers of the web looks at how much of the real search engine optimisation is just good web development and that many of the organisations focusing on SEO are all about money and connections – whether or not the assertions that Jeff makes in his post are correct, it’s an interesting view and certainly seems to have a lot of SEOs fighting their corner.
    • Why does Vista use all my memory? looks at Windows Vista’s approach to memory management (a feature called SuperFetch) and why grabbing all the available memory to use as a big cache is not necessarily a bad thing.
  • Microsoft SharePoint products and technologies

    Over the last few years, I’ve had a couple of attempts at learning about the various Microsoft SharePoint products and technologies but I’ve never really had the chance to implement SharePoint for a customer. Recently though, I’ve had the opportunity to get involved with some work around Microsoft Office SharePoint Server (MOSS) 2007 (which replaces SharePoint Portal Server 2003) but unfortunately, much of this work is commercially sensitive so I can’t really write much about it here. One thing I can write about is that in the course of this work I learned that my long-time colleague Andy May has a blog with lots of useful information about SharePoint products and services (I should probably point out that the Andy May that I work with shouldn’t be confused with Andrew May, who works at Microsoft and writes about SharePoint).

    I’ve found Andy’s posts on WSS and MOSS client access licenses and feature differences between [Windows SharePoint Services] WSS and MOSS 2007 particularly useful as I’ve attempted to cut through the differences between WSS and MOSS (which is just as complex as it was with previous versions of WSS/SharePoint Team Services and SharePoint Portal Server). Microsoft has a description of the relationship between the various SharePoint products and technologies but the diagram Andy uses (from Mart Muller) makes it all a little clearer.

  • Showing hidden files in Mac OS X

    I use hidden files (such as .htaccess) extensively on my website, so I needed to be sure that they were included with my local backup copy. Mac OS X doesn’t show hidden files by default (it all gets a bit messy otherwise – although they are visible in a Terminal shell); however, I found a tip which details the commands to run in order to show hidden files in the Finder (these can be run using a standard user account):

    defaults write com.apple.finder AppleShowAllFiles TRUE
    killall Finder

    To return to the default display, run:

    defaults write com.apple.finder AppleShowAllFiles FALSE
    killall Finder

    I did find an application to display hidden files too but why bother if a couple of commands will do the trick? Even better, there is a workflow to show hidden files using Automator.
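
    To save remembering which state the setting is in, the two commands can be wrapped in a small shell function (toggle_hidden is a hypothetical name – the defaults calls are exactly those shown above):

    # Flip the Finder's hidden files setting and restart the Finder in one go:
    toggle_hidden() {
        if [ "$(defaults read com.apple.finder AppleShowAllFiles 2>/dev/null)" = "TRUE" ]; then
            defaults write com.apple.finder AppleShowAllFiles FALSE
        else
            defaults write com.apple.finder AppleShowAllFiles TRUE
        fi
        killall Finder    # the Finder restarts automatically with the new setting
    }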

  • Call centres, offshoring, poor customer service and how to save some money

    I can understand why outsourcing and, in today’s global economy, offshoring are so attractive to companies. I just wish that companies would think things through a bit further than the initial impression of reduced costs.

    Because I work for a company whose core business is IT managed services (but never referred to as outsourcing!), I’ll steer clear of my feelings on that subject – suffice to say that there is clearly a conflict of interest. This post is purely about my experiences as a consumer of outsourced and offshored services.

    Firstly, although I have no direct experience of this, I’m led to believe that the cost savings from moving services overseas are generally not as great as they may initially appear. The workers may be paid less, and the cost of office space in India (for example) may be lower than in Western Europe, but in call centre situations there are telecommunications costs to consider, as well as the inevitable integration of the offshore systems and processes with the rest of the business. I frequently have cause to speak to an IT service desk in South Africa and am rarely happy with the outcome. Sure, the staff are friendly and speak English well, but the telephone line quality is inevitably poor. I don’t know if the call is routed via IP or over low-grade international lines, but either way it is not good for conversation. Even for software development, there is an implicit need for integration with other products and teams based elsewhere in the world. Technologies such as VoIP and web conferencing services may help to bridge the gaps but, in my experience, occasional face-to-face meetings are required in order to cement a quality relationship (and good relations really help to get things done).

    Whilst I can see that there are some benefits to be had from offshoring, near-shoring is sometimes seen as a more practical alternative (certainly, one of my previous employers had a relationship with a company in Eastern Europe for offshore software development rather than further afield). From a business perspective, it’s often a lot easier to spend a day or two in another European city than to fly several timezones away with all the complications that entails (jet lag is really not conducive to efficiency – as I found on a business trip to New Jersey a few years back).

    Getting back to call centres, I’m not always pleased with my bank (First Direct); however, one of the main reasons I stick with them is that all of their call centres are UK-based. The staff may have a variety of regional dialects (who doesn’t?) but English is their first language – complete with an understanding of all the nuances and colloquialisms that are in daily use. Even so, a few months back, I wanted to know the telephone number for a branch office where the paying-in machine had crashed half way through my transaction (my money had been taken but would it be credited to my account?). This branch actually belongs to First Direct’s parent company, HSBC, who only publish a central number for branch contact. And where do you think the call goes when you want to speak to someone in the next major town? Yep. India. And they couldn’t put me through to anyone local who could give me an answer – all I could do was wait and see if the money appeared in my account (fortunately it did).

    I’ve just come off the phone from my credit card company (Marks and Spencer Money) and the only thing that keeps my custom there is the shopping vouchers that appear in the post every few weeks. All I wanted was a replacement card. After negotiating the inevitable IVR system I finally spoke to someone who was unable to help me because his systems were updating (and could I please call back later). After I said that they should call me (and he said he worked in a paperless office so he couldn’t make a note of my details to call back), I threatened to close my account (I meant it) and he transferred me to a colleague whose system could issue me a replacement card. Result. Except that it shouldn’t be this difficult and I shouldn’t need to be so stroppy.

    To be fair, this could have happened in a UK call centre too. John Lewis Financial Services is based in Birmingham and I gave up getting a satisfactory response from them after problems with their Partnership card. Ditto for Halifax Bank of Scotland, who I hope never to do business with again (although that’s becoming increasingly difficult as they are so large in the marketplace). Even Volkswagen Assistance were unable to renew the breakdown cover on my wife’s car this morning because of a system being unavailable and asked me to call back later, although they were prepared to take a note of my number and call me (if only everything in life was as reliable as a Volkswagen). My point is, that if the call quality is good and the person on the other end of the phone wants to go the extra mile to deliver great customer service then I’ll be happy to continue my relationship and if they don’t, then I won’t.

    In another example, my wife received a letter from her bank to say that as her credit card was coming up for expiry and, as she hadn’t used it for a while, they would close the account if they didn’t hear from her. That’s fair enough – customers who don’t use their accounts are bad business – but when she called to speak to them about keeping the account open (an opportunity to regain some lost business) she found herself on hold for so long that she decided that she didn’t want the account anyway! Similarly, I found myself on hold in Tesco‘s call centre phone system for so long a few weeks back that the cost of my phone call was greater than the cost of the product that I needed a refund for in my online shopping!

    To make matters worse, many call centres only publish national-rate (0870) numbers (and other non-geographic numbers that are excluded from bundled/low-cost call deals) – in effect, they can actually make money from you whilst keeping you on the line. So, if you want to reduce your phone bill when calling non-geographic numbers for call centres, check out say no to 0870 for a database of alternative numbers.

    In my view, it’s the cost of lost business that companies need to consider when selecting their partners rather than basing decisions on cost reduction alone.

    Right, enough of being Mr. Angry – it’s a beautiful sunny day – I’m going to leave my computers and phone behind and get out into the countryside!

  • SSH addendum

    Since my recent posts about using SSH for secure remote administration of Mac OS X, Linux and Windows computers, a couple of extra points have come up that are probably worth noting:

    • To use the console interactively, it may be better to use PuTTY (putty) than PuTTY Link (plink). It seems that PuTTY Link is fine for setting up a tunnel and then connecting using VNC, RDP or another remote control method, but I found that control codes were echoed to the console window when I connected to a Linux or Windows computer, and the command line experience was generally better using PuTTY interactively. This is because (quoting from the PuTTY documentation) “The output sent by the server will be written straight to your command prompt window, which will most likely not interpret terminal control codes in the way the server expects it to […] Interactive connections like this are not the main point of Plink”. There’s a short sketch of both approaches after this list.
    • For another method of generating SSH keys, an online SSH key generator is available (I haven’t tried it myself).
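
    A minimal sketch of both approaches (host names, ports and key parameters are examples; a reasonably recent version of PuTTY is assumed):

    # Tunnel only (no interactive shell) with PuTTY Link, ready for a VNC
    # connection to localhost:5900:
    plink -N -L 5900:localhost:5900 user@remotehost

    # Interactive session with PuTTY itself, which interprets terminal
    # control codes properly:
    putty -ssh user@remotehost

    # Generating a key pair locally with OpenSSH rather than an online
    # generator (PuTTY users would use PuTTYgen instead):
    ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa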