Category: Technology

  • Looking forward to Windows Server 2008: Part 1 (Server Core and Windows Server Virtualization)

    Whilst the first two posts that I wrote for this blog were quite generic, discussing such items as web site security for banks and digital rights management, this time I’m going to take a look at the technology itself – including some of the stuff that excites me right now with Microsoft’s Windows Server System.

    Many readers will be familiar with Windows XP or Windows Vista on their desktop but may not be aware that Windows Server operating systems also hold a sizeable chunk of the small and medium-sized server market. This market is set to expand as more enterprises implement virtualisation technologies (running many small servers on one larger system, which may run Windows Server, Linux, or something more specialist like VMware ESX Server).

    Like XP and Vista, Windows 2000 Server and Advanced Server (both now defunct), Windows Server 2003 (and R2) and soon Windows Server 2008 have their roots in Windows NT (which itself has a lot in common with LAN Manager). This is both a blessing and a curse: the technology has been around for a few years now and is (by and large) rock solid, but the need to retain backwards compatibility can mean that new products struggle to balance security and reliability with legacy code.

    Microsoft is often criticised for a perceived lack of system stability in Windows but it’s my experience that a well-managed Windows Server is a solid and reliable platform for business applications. The key is to treat a Windows Server computer as if it were the corporate mainframe rather than adopting a personal computer mentality for administration. This means strict policies controlling the application of software updates and application installation as well as consideration as to which services are really required.

    It’s this last point that is most crucial. By not installing all of the available Windows components and by turning off non-essential services, it’s possible to reduce the attack surface for any would-be hacker. A reduced attack surface not only means less chance of falling foul of an exploit but it also means fewer patches to deploy. It’s with this in mind that Microsoft produced Windows Server Core – an installation option for the forthcoming Windows Server 2008 product (formerly codenamed Longhorn Server).

    As the name suggests, Windows Server Core is a version of Windows with just the core operating system components and a selection of server roles available for installation (e.g. Active Directory domain controller, DHCP server, DNS server, web server, etc.). Server Core doesn’t have a GUI as such and is entirely managed from a command prompt (or remotely using standard Windows management tools). Even though some graphical utilities can be launched (like Notepad), there is no Start Menu, no Windows Explorer, no web browser and, crucially, a much smaller system footprint. The idea is that core infrastructure and application servers can be run on a Server Core computer, either in branch office locations or within the corporate data centre, and managed remotely. And, because of the reduced footprint, system software updates should be less frequent, resulting in improved server uptime (as well as a lower risk of attack by a would-be hacker).
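
    To give a flavour of what command-line administration looks like, here’s a rough sketch – the commands and role names shown are examples only and may differ in the final release, and the interface name and IP addresses are made up:

    rem List the roles and optional features available on a Server Core installation
    oclist

    rem Install the DNS Server role (role names come from the oclist output)
    start /w ocsetup DNS-Server-Core-Role

    rem Assign a static IP address (example interface name and addresses)
    netsh interface ipv4 set address name="Local Area Connection" source=static address=192.168.1.10 mask=255.255.255.0 gateway=192.168.1.1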

    If Server Core is not exciting enough, then Windows Server Virtualization should be. I mentioned virtualisation earlier and it has certainly become a hot topic this year. For a while now, the market leader (at least in the enterprise space) has been VMware (and, as Tracey Caldwell noted a few weeks ago, VMware shares have been hot property), with their Player, Workstation, Server and ESX Server products. Microsoft, Citrix (XenSource) and a number of smaller companies have provided some competition but Microsoft will up the ante with Windows Server Virtualization, which is expected to ship within 180 days of the Windows Server 2008 release. No longer running as a guest on a host operating system (as the current Microsoft Virtual Server 2005 R2 and VMware Server products do), Windows Server Virtualization will compete directly with VMware ESX Server in the enterprise space, with a totally new architecture including a thin “hypervisor” layer facilitating direct access to virtualisation technology-enabled hardware and allowing near-native performance for many virtual machines on a single physical server.

    Whilst Microsoft is targeting the server market with this product (they do not plan to include the features that would be required for a virtual desktop infrastructure, such as USB device support and sound capabilities), it will finally establish Microsoft as a serious player in the virtualisation space (perhaps even as the market leader within a couple of years). Furthermore, Windows Server Virtualization will be available as a supported role on Windows Server Core, allowing virtual machines to be run on an extremely reliable and secure platform. From a management perspective, there will be a new System Center product – Virtual Machine Manager – allowing management of virtual machines across a number of Windows servers, including quick migration, templated VM deployment and conversion from physical and other virtual machine formats.

    Windows Server Core and Windows Server Virtualization are just two of the major improvements in Windows Server 2008.  Over the coming weeks, I’ll be writing about some of the other new features that can be expected with this major new release.

    Windows Server 2008 will be launched on 27 February 2008.  It seems unlikely that it will be available for purchase in stores at that time; however corporate users with volume license agreements should have access to the final code by then.  In the meantime, it’s worth checking out Microsoft’s Windows Server 2008 website and the Windows Server UK User Group.

    [This post originally appeared on the Seriosoft blog, under the pseudonym Mark James.]

  • A call for open standards in digital rights management

    Digital rights management (DRM) is a big issue right now. Content creators have a natural desire to protect their intellectual property and consumers want easy access to music, video, and other online content.

    The most popular portable media player is the Apple iPod, by far the most successful digital music device to date. Although an iPod can play ordinary MP3 files, its success is closely linked to iTunes’ ease of use. iTunes is a closed system built around an online store with (mostly) DRM-protected tracks using a system called FairPlay that is only compatible with the iTunes player or with an iPod.

    Another option is to use a device that carries the PlaysForSure logo. These devices use a different DRM scheme – Windows Media – this time backed by Microsoft and its partners. Somewhat bizarrely, Microsoft has also launched its own Zune player using another version of Windows Media DRM – one that’s incompatible with PlaysForSure.

    There is a third way to access digital media – users can download or otherwise obtain DRM-free tracks and play them on any player that supports their chosen file format. To many, that sounds chaotic. Letting people download content without the protection of DRM! Surely piracy will rule and the copyright holders will lose revenue.

    But will they? Home taping has been commonplace for years but there was always a quality issue. Once the development of digital music technologies allowed perfect copies to be made at home, the record companies hid behind non-standard copy prevention schemes (culminating in the Sony rootkit fiasco) and DRM-protected online music. Now video content creators are following suit, with the BBC and Channel 4 both releasing DRM-protected content that will only play on some Windows PCs. At least the BBC does eventually plan to release a system that is compatible with Windows Vista and Macintosh computers but, for now, the iPlayer and 4 on Demand are for Windows XP users only.

    It needn’t be this way as incompatible DRM schemes restrict consumer choice and are totally unnecessary. Independent artists have already proved the model can work by releasing tracks without DRM. And after the Apple CEO, Steve Jobs, published his Thoughts on Music article in February 2007, EMI made its catalogue available, DRM-free, via iTunes, for a 25% premium.

    I suspect that the rest of the major record companies are waiting to see what happens to EMI’s sales and whether there is a rise in piracy of EMI tracks, which in my opinion is unlikely. The record companies want to see a return to the 1990s boom in CD sales but that was an artificial phenomenon as music lovers re-purchased their favourite analogue (LP) records in a digital (Compact Disc) format. The way to increase music sales now is to remove the barriers to online content purchase.

    • The first of these is cost. Most people seem happy to pay under a pound for a track but expect album prices to be lower (matching the CDs that can be bought in supermarkets and elsewhere for around £9). Interestingly though, there is anecdotal evidence that if the price of a download were reduced to around $0.25 (instead of the current $0.99), then people would actually download more songs and the record companies would make more money.
    • Another barrier to sales is ease of use and portability. If I buy a CD (still the benchmark for music sales today), then I only buy it once regardless of the brand of player that I use. Similarly, if I buy digital music or video from one store why should I have to buy it again if I change to another system?

    One of the reasons that iTunes is so popular is that it’s very easy to use – the purchase process is streamlined and the synchronisation is seamless. It also locks consumers into one platform and restricts choice. Microsoft’s DRM schemes do the same. And obtaining pirated content on the Internet requires a level of technical knowledge not possessed by many.

    If an open standard for DRM could be created, compatible with both FairPlay and Windows Media (PlaysForSure and Zune), it would allow content owners to retain control over their intellectual property without restricting consumer choice.

    [This post originally appeared on the Seriosoft blog, under the pseudonym Mark James.]

  • Security – Why the banks just don’t get IT

    A few weeks back, I read a column in the IT trade press about my bank’s botched attempt to upgrade their website security and I realised that it’s not just me who thinks banks have got it all wrong…

    You see, the banks are caught in a dilemma between providing convenient access for their customers and keeping it secure. That sounds reasonable enough until you consider that most casual Internet users are not too hot on security and so the banks have to dumb it down a bit.

    Frankly, it amazes me that details like my mother’s maiden name, my date of birth and the town where I was born are used for “security” – they are all publicly available and, if someone wanted to spoof my identity, it would be pretty easy to get hold of them all!

    But my bank is not alone in overdressing their (rather basic) security – one of their competitors recently “made some enhancements to [their] login process, ensuring [my] money is even safer”, resulting in what I can only describe as an unmitigated user experience nightmare.

    First I have to remember a customer number (which can at least be stored in a cookie – not advisable on a shared-user PC) and, bizarrely, my last name (in case the customer number doesn’t uniquely identify me?). After supplying those details correctly, I’m presented with a screen similar to the one shown below:

    Screenshot of ING Direct login screen

    So what’s wrong with that? Well, for starters, I haven’t a clue what the last three digits of my oldest open account are so that anti-phishing question doesn’t work. Then, to avoid keystroke loggers, I have to click on the key pad buttons to enter the PIN and memorable date. That would be fair enough except that they are not in a logical order and they move around at every attempt to log in. This is more like an IQ test than a security screen (although the bank describes it as “simple”)!

    I could continue with the anecdotal user experience disasters but I think I’ve probably got my point across by now. Paradoxically, the answer is quite simple and in daily use by many commercial organisations. Whilst banks are sticking with single-factor (something you know) login credentials for their customers, companies often use multi-factor authentication for secure remote access by employees. I have a login ID and a token which generates a seemingly random (actually highly mathematical) 6-digit number that I combine with a PIN to access my company network. It’s easy and all it needs is knowledge of the website URL, my login ID and PIN (things that I know), together with physical access to my security token (something I have). For me, those things are easy to remember but for someone else to guess – practically impossible.
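
    For the curious, the “highly mathematical” part is generally a function of a shared secret held inside the token and the current time. The sketch below illustrates the idea using the open time-based one-time password approach – real corporate tokens often use their own proprietary algorithms, and the base32 secret here is just a made-up example:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, interval=30, digits=6):
        """Derive a time-based one-time code from a shared secret (RFC 6238-style)."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval            # changes every 30 seconds
        message = struct.pack(">Q", counter)              # 8-byte big-endian counter
        digest = hmac.new(key, message, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))                       # made-up example secret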

    I suspect the reason that the banks have stuck with their security theatre is down to cost. So, would someone please remind me, how many billions did the UK high-street banks make in profit last year? And how much money is lost in identity theft every day? A few pounds for a token doesn’t seem too expensive to me. Failing that, why not make card readers a condition of access to online banking and use the Chip and PIN system with our bank cards?

    [This post originally appeared on the Seriosoft blog, under the pseudonym Mark James.]

  • Removing duplicate search engine content using robots.txt

    Here’s something that no webmaster wants to see:

    Screenshot showing that Google cannot access the homepage due to a robots.txt restriction

    It’s part of a screenshot from the Google Webmaster Tools that says “[Google] can’t currently access your home page because of a robots.txt restriction”. Arghh!

    This came about because, a couple of nights back, I made some changes to the website in order to remove the duplicate content in Google. Google (and other search engines) don’t like duplicate content, so by removing the archive pages, categories, feeds, etc. from their indexes, I ought to be able to reduce the overall number of pages from this site that are listed and at the same time increase the quality of the results (and hopefully my position in the index). Ideally, I can direct the major search engines to only index the home page and individual item pages.

    I based my changes on some information on the web that caused me a few issues, so this is what I did; by following these notes, hopefully others won’t repeat my mistakes. There is a caveat though – use this advice with care – I’m not responsible for other people’s sites dropping out of the Google index (or other such catastrophes).

    Firstly, I made some changes to the <head> section in my WordPress template, adding a block of conditional meta robots tags (sketched below).

    Because WordPress content is generated dynamically, this tells the search engines which pages should be in, and which should be out, based on the type of page. So, basically, if this is a post page, another single page or the home page then go for it; otherwise, follow the appropriate rule for Google, MSN or other spiders (Yahoo! and Ask will follow the standard robots directive), telling them not to index or archive the page but to follow any links and, additionally, for Google not to include any Open Directory description. This was based on advice from askapache.com but amended because the default indexing behaviour for spiders is index, follow or all, so I didn’t need to specify separate rules for Google and MSN as in the original example (but I did need something there, otherwise the logic reads “if condition is met, do nothing, else do something” and the do nothing could be problematic).
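
    Something along these lines – a minimal sketch only, assuming the standard WordPress conditional tags is_single(), is_page() and is_home(), with the meta values following the logic just described:

    <?php // sketch: allow indexing of single posts, pages and the home page; restrict everything else ?>
    <?php if (is_single() || is_page() || is_home()) { ?>
    <meta name="robots" content="index,follow" />
    <?php } else { ?>
    <meta name="googlebot" content="noindex,noarchive,follow,noodp" />
    <meta name="msnbot" content="noindex,noarchive,follow" />
    <meta name="robots" content="noindex,noarchive,follow" />
    <?php } ?>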

    Next, following fiLi’s advice for using robots.txt to avoid content duplication, I started to edit my robots.txt file. I won’t list the file contents here – suffice to say that the final result is visible on my web server. For those who think that publishing the location of robots.txt is a bad idea (because the contents are effectively a list of places that I don’t want people to go to), think of it this way: robots.txt is a standard file on many web servers which, by necessity, needs to be readable and therefore should not be used for security purposes – that’s what file permissions are for (one useful analogy refers to robots.txt as a “no entry” sign, not a locked door)!

    The main changes that I made were to block certain folders:

    Disallow: /blog/page
    Disallow: /blog/tags
    Disallow: /blog/wp-admin
    Disallow: /blog/wp-content
    Disallow: /blog/wp-includes
    Disallow: /*/feed
    Disallow: /*/trackback

    (the trailing slash is significant – without one, the rule is a simple prefix match that blocks the directory itself and everything beneath it, so Disallow: /blog/page also catches /blog/page/2/; with one, only URLs within the directory are affected, including subdirectories, while the bare directory URL itself remains accessible).

    I also blocked certain file extensions:

    Disallow: /*.css$
    Disallow: /*.html$
    Disallow: /*.js$
    Disallow: /*.ico$
    Disallow: /*.opml$
    Disallow: /*.php$
    Disallow: /*.shtml$
    Disallow: /*.xml$

    Then, I blocked URLs that include ? except those that end with ? (so, for example, /blog/?p=123 is excluded but a URL ending in a bare ? is still allowed):

    Allow: /*?$
    Disallow: /*?

    The problem at the head of this post came about because I blocked all .php files using

    Disallow: /*.php$

    As https://www.markwilson.co.uk/blog/ is equivalent to https://www.markwilson.co.uk/blog/index.php, I was effectively stopping spiders from accessing the home page. I’m not sure how to get around that as both URLs serve the same content but, in a site of about 1500 URLs at the time of writing, I’m not particularly worried about a single duplicate instance (although I would like to know how to work around the issue). I resolved this by explicitly allowing access to index.php (and another important file – sitemap.xml) using:

    Allow: /blog/index.php
    Allow: /sitemap.xml

    It’s also worth noting that neither wildcards (* and the $ end-of-URL anchor) nor Allow are part of the original robots.txt standard and so the file will fail strict validation. After a bit of research I found that the major search engines have each added support for their own enhancements to the robots.txt specification:

    • Google (Googlebot), Yahoo! (Slurp) and Ask (Teoma) support allow directives.
    • Googlebot, MSNbot and Slurp support wildcards.
    • Teoma, MSNbot and Slurp support crawl delays.

    For that reason, I created multiple code blocks – one for each of the major search engines and a catch-all for other spiders, so the basic structure is:

    # Google
    User-agent: Googlebot
    # Add directives below here

    # MSN
    User-agent: msnbot
    # Add directives below here

    # Yahoo!
    User-agent: Slurp
    # Add directives below here

    # Ask
    User-agent: Teoma
    # Add directives below here

    # Catch-all for other agents
    User-agent: *
    # Add directives below here

    Just for good measure, I added a couple more directives for the Alexa archiver (do not archive the site) and Google AdSense (read everything to determine what my site is about and work out which ads to serve).

    # Alexa archiver
    User-agent: ia_archiver
    Disallow: /

    # Google AdSense
    User-agent: Mediapartners-Google*
    Disallow:
    Allow: /*

    Finally, I discovered that Google, Yahoo!, Ask and Microsoft now all support sitemap autodiscovery via robots.txt:

    Sitemap: http://www.markwilson.co.uk/sitemap.xml

    This can be placed anywhere in the file, although Microsoft don’t actually do anything with it yet!

    Having learned from my initial experiences of locking Googlebot out of the site, I checked the file using the Google robots.txt analysis tool and found that Googlebot was ignoring the directives under User-agent: * (no matter whether that section was first or last in the file). Thankfully, posts to the help groups for crawling, indexing and ranking and Google webmaster tools indicated that Googlebot will ignore generic settings if there is a specific section for User-agent: Googlebot. The workaround is to include all of the generic exclusions in each of the agent-specific sections – not exactly elegant but workable.
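
    In practice that means each agent-specific block repeats the same rules – a trimmed sketch (the real file carries the full list of exclusions shown above in every section):

    # Google
    User-agent: Googlebot
    Disallow: /blog/page
    Disallow: /*/feed
    Disallow: /*/trackback
    Allow: /blog/index.php
    Allow: /sitemap.xml

    # Catch-all for other agents (standard directives only – no wildcards)
    User-agent: *
    Disallow: /blog/page
    Disallow: /blog/tags
    Disallow: /blog/wp-admin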

    I have to wait now for Google to re-read my robots.txt file, after which it will be able to access the updated sitemap.xml file which reflects the exclusions. Shortly afterwards, I should start to see the relevance of the site:www.markwilson.co.uk results improve and hopefully soon after that my PageRank will reach the elusive 6.

    Links

    Google webmaster help center.
    Yahoo! search resources for webmasters (Yahoo! Slurp).
    About Ask.com: Webmasters.
    Windows Live Search site owner help: guidelines for successful indexing and controlling which pages are indexed.
    The web robots pages.

  • Recycling old CDs/DVDs and tapes

    (Recycle Now is a UK website – United States readers can find out more about acting in an environmentally-friendly manner at Earth 911.)

    Recently, there have been a few posts on this site that aim to encourage reuse and recycling within our industry. It’s not that I’m going “green” – I’ve actually been interested in sustainability for a while now (and I recommend reading A Good Life – the guide to ethical living, by Leo Hickman). It’s not even about climate change – maybe mankind is warming up the planet with possibly catastrophic consequences but maybe it’s just nature. One thing it’s almost impossible to argue with is that the world’s resources are finite and my personal belief is that we all have a duty to use those resources in the best way possible (well, ideally not to use them at all… but you get my drift).

    Oh sure, I’m no saint (my family uses two cars and my kids use disposable nappies for starters) but there are things that we can all do and, whilst I don’t know the situation for any overseas visitors to this blog, here in the UK, the Government does little except pay lip service to environmental issues (and don’t get me started on wind “farms”).

    Anyway, as part of my ongoing clearout of “stuff” from my loft/garage/office, I was about to throw away something in the region of 300 CD/DVDs (not music ones, just old software betas and demos, Microsoft TechNet CDs, etc.) and thought “these must be recyclable”. Well, it turns out they are (the United States Environmental Protection Agency has also produced a poster which demonstrates the life of a CD or DVD, from initial materials acquisition, through to reuse, recycling or disposal) and, even though my local authority only recycles the most basic of items, I found out about The Laundry (and it seems that they will accept CDs, DVDs, CD-Rs, VHS and compact cassettes for recycling from anyone, including the jewel cases and inlays).

    The Laundry is an excellent idea because, however much consumers try to control the amount of landfill waste that we produce, our hands are largely tied by local authorities that don’t recycle all the items that it is possible to recycle (for example, mine won’t accept certain types of plastics, aerosols, Tetra Paks, etc. for recycling – even though the links I just provided prove that they can be recycled), meaning that consumers have to find another outlet for this type of waste (and most won’t bother). Businesses have an even bigger problem as many local authorities won’t accept anything but landfill waste from them, so they end up throwing away recyclable items (and paying for the privilege). By providing a weekly kerbside collection service for small businesses in London, The Laundry has the potential to really make a difference – I’d love to see them expand nationwide.

    The next time I’m in East London, I’ll be dropping off my old CDs and DVDs at The Laundry. As for the packaging, there’s an amusing discussion on what to do with the plastic spindles that blank CD-Rs are sold on at the How can I recycle this? site.

  • Opening up the Mac Mini – easy when you know how

    Woohoo! 2GB RAM in my Mac for less than half the amount that Apple would have charged me (does anybody want to buy 2x256MB 667MHz DDR2 SODIMMs that have been used for just one month?).

    Mac with 2GB RAM installed

    Ordinarily, I’d say that upgrading the RAM in a PC is no big deal, but Mac Minis don’t have any screws to open the case and, unlike many notebook PCs, it’s not a case of popping open a small panel either.

    Thankfully the instructional videos at the OtherWorldComputing Tech Center include a hardware upgrade tutorial for the Intel Mac Mini which showed exactly how to do it (thanks guys – if you sold memory in the UK I would have bought it from you).

    So, armed with a Stanley DynaGrip 50mm filling knife that I picked up from B&Q on the way home and an old plastic visitor’s pass (from Microsoft, of all places!), I gained access to the inside of my Mac and swapped out the standard 256MB SODIMMs for the two new 1GB modules (which arrived in 24 hours with free shipping by Royal Mail Special Delivery – and there was 5% off the day I bought them, so they only came to £178.59 including VAT). The operation wasn’t without its hiccups. First of all, I didn’t quite insert one of the memory modules correctly, so when I booted the Mac it only saw 1GB of RAM. Then, when I reopened the computer to investigate, the knife slipped and I made a small scratch on the outside of the case (annoying, but too late to do anything about it now). I refitted the RAM, but dropped one of the screws inside the unit and the AirPort antenna came off whilst I was trying to locate the missing screw… that was a bit of a heart-stopper but it was easily reattached (once I worked out where to fix it). Finally, I forgot to reattach the small cable at the front of the motherboard so the fan ran continuously until I opened the Mac up for a third time and reattached the missing connector. Notwithstanding all of these errors, everything is working now and the extra memory should make everything a lot faster.

  • Warning – buy your upgrades when you buy your Mac

    A few weeks back, I bought a Mac Mini. Because I wanted it shipped immediately (and because the upgrade prices sounded a bit steep), I stuck with the standard 80GB hard disk and 512MB of RAM and now I’m finding performance to be a little sluggish – I suspect that’s due to a lack of memory.

    When I ordered the Mac, the cost of specifying 2x1GB 667MHz DDR2 SDRAM SODIMMs instead of 2x256MB was £210.01. Likewise, to take the SATA hard disk from 80GB to 120GB would cost £89.99. Those are (very) high prices for standard PC components but nothing compared to the quote I just had from the Apple Store for 2GB of RAM (with “free” installation) – over £420! Mac:Upgrades can do a similar deal (but not while I wait) for around £325 but, when I look at the memory prices using the Crucial Memory Advisor tool, I get two options that will work for me, each at a much lower price:

    1. I could drop one of my 256MB SODIMMS and replace it with a 1GB module, giving me a total of 1.25GB for just £98.69.
    2. Alternatively, I could take out all of the existing memory and add a 2GB kit (2x1GB of matched memory) for £186.81.

    …so, I guess there will be bits of MacIntel all over my desk in a few days’ time…

    Crucial recommend the matched pair option for reasons of performance (Apple say it allows memory interleaving), and if I’m going to open up my Mac (which looks to be a delicate operation) then I’d rather only do it once – that means option 2, which works out around £23 less than the original upgrade would have been (although I will have 512MB of spare memory afterwards).

    In all, for the sake of my warranty (and sanity), it looks as if the best option would have been to specify extra RAM at the time of purchase, but I guess if I do wreck the machine in the process of upgrading, the cost of replacing it is not much more than Apple would charge me for 2GB of RAM!

    Rumour has it that the new Intel Core 2 Duo processors are socket compatible with my Core Duo (and quad core chips should be available by the end of the year) so a return to the operating table for a processor upgrade is a distinct possibility for the future.

  • The week when my digital life was on hold

    Last week I wrote about the arrival of my new Mac Mini, along with claims that “[my] digital life starts here”. Thankfully, unlike a chunk of my computing resource, my physical life doesn’t rely on Apple Support.

    I was experiencing problems maintaining a steady Ethernet connection, initially whilst downloading OS X updates from Apple and then whilst copying data from a Windows XP PC. After a random time the connection would drop, with receive errors shown in the OS X Network Utility. The only way to break this cycle was to restart the computer, after which the network was once again available.

    I spent almost two hours on the phone to Apple support staff, who were generally helpful but seemed to be relying on scripted support sequences and an internal knowledge base. It seemed that all Apple really wanted to do was rule out the Apple hardware and point the blame at something else on the network. Sure enough, I couldn’t replicate the problem on a direct crossover cable (100Mbps full duplex), via a 10Mbps half-duplex hub or via a 100Mbps full-duplex switch – only via a 100Mbps half-duplex hub – but, crucially, the other devices on the network were all able to communicate with each other via the same hub with no errors at all. Only the Mac had a problem.

    I finally snapped and said I wanted to return my shiny aluminium paperweight when the support analyst suggested I check the firewall settings on the PC from which I was trying to copy data (I pointed out that if there was a firewall issue then no data at all would be copied – not several hundred megabytes before crashing – and, in any case, the problem existed when downloading updates from Apple’s website too).

    After being advised to take my Mac to a hardware specialist 30 miles away (to see if there were any problems communicating with another Mac), I decided to rebuild it from the operating system install disks. The 14 Mac updates that took so long to install before (now 13 as one was a permanent BIOS update) were applied with just one error. It seemed that the problem was with the operating system as installed in the factory (presumably not a DVD installation, but performed using disk duplication software). Unfortunately, although it seems to take a lot longer before crashing now, the problem is still there when I connect via the hub, so I’ve added a switch just for the Mac (everything else is as it was before).

    One thing I should say is that the guys who responded to my call for help on the Apple discussion forums were really helpful (I guess switching from Windows to OS X is something which Mac users would like to encourage).

    So, now I’m up and running and my digital life can start. Just as well, because my new Fujitsu-Siemens S20-1W monitor turned up yesterday – 20.1″ of widescreen vision, at a resolution of 1680×1050, in a brushed aluminium case (no plastic here) and almost £200 less expensive than the Apple equivalent (I got it from Dabs.com for £365).

    Fujitsu-Siemens S20-1W

  • Configuring and troubleshooting services and applications with Proxy Server 2.0

    I’ve spent a lot of time over the last few days struggling to configure a Microsoft Proxy Server 2.0 running on Windows NT 4.0 and Internet Information Server (IIS) 3.0 to reverse proxy (i.e. publish) an HTTPS website. Eventually I had to admit defeat (I’m trying to convince my client to upgrade to ISA Server 2004); however I did find a useful resource for Proxy Server 2.0 information that should be worth a look next time I’m trying to administer/troubleshoot a Microsoft Proxy Server configuration.

  • Watch out for long path names on an NTFS volume

    I came across an interesting issue earlier today. Somehow, on my NTFS-formatted external hard disk I had managed to create a file system structure which was too deep.

    Whenever I tried to delete a particular folder tree, I received strange errors about files which couldn’t be deleted:

    Error Deleting File or Folder

    Cannot delete foldername: The file name you specified is not valid or too long. Specify a different file name.

    I thought that was strange – after all, I’d managed to create the files in the first place. Then I found that if I drilled down to the files that would not delete, there was no right-click option to delete them. Finally, I found that some folders displayed the following error when I tried to access them:

    Can’t access this folder.

    Path is too long.

    It turned out that the problem folders/files had path names in excess of 255 characters (the Windows MAX_PATH limit is 260 characters, including the drive letter and a terminating null). By renaming some of the top-level folders to single-character names (thus reducing the length of the path), I was able to access the problem files and folders, including deleting the files that I wanted to remove.
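
    If the same thing happens again, it’s easy enough to find the offending paths before Explorer starts complaining. Here’s a rough sketch in Python (the drive letter is hypothetical, and on a very deep tree the walk itself may need the \\?\ extended-length prefix):

    import os

    ROOT = "E:\\"   # hypothetical drive letter for the external disk
    LIMIT = 260     # Windows MAX_PATH, the limit Explorer and many tools enforce

    # Walk the whole volume and report any full path at or beyond the limit
    for dirpath, dirnames, filenames in os.walk(ROOT):
        for name in dirnames + filenames:
            full_path = os.path.join(dirpath, name)
            if len(full_path) >= LIMIT:
                print(len(full_path), full_path)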