Child safety online

This content is 19 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

In previous posts I’ve mentioned both the Microsoft at work and Microsoft at home microsites. Today I was directed to a new microsite – security at home, specifically the child safety online section.

I’m not sure if the world is really any more dangerous for children than it was when I was a child, but I do know the media is all-pervasive – we hear a lot more about the unfortunate events that do occur – and that, as a parent, I’ll do anything I can to ensure that my son is safe. At three months old, he’s a bit young to be using the Internet, but this site looks like a useful resource for anyone who has children aged between 2 and 17 who use a computer with an Internet connection.

On a related note, a couple of weeks back I wrote about technology’s role in the demise of the English language. Well, for anyone (like me) who’s not as “with it” as they once were (omigod, and I’m only 32 – hellllllp!), whilst reading child safety online, I stumbled across a parent’s primer to computer slang (should that be $14NG?) and the netiquette 101 for new netizens.

!337$p34k 1z m4d

Microsoft Exchange Server 2003 troubleshooting and disaster recovery

A couple of nights back, I attended one of the Microsoft TechNet UK events. Unsure whether to attend John Howard’s session on automating Windows Server administration or Eileen Brown’s Microsoft Exchange Server 2003 Troubleshooting and Disaster Recovery session, I decided to return to my Microsoft Exchange roots (which go back to the Exchange Server 4.0 launch events in April 1996). Eileen has since posted some useful links relating to the session content on her blog.

Configuring recovery options and general troubleshooting tools

A common issue for an e-mail administrator is the recovery of e-mail which a user has accidentally deleted and, less commonly, the need to recover a deleted mailbox. Fortunately, Exchange has a number of options which can assist with this. There is a trade-off between giving users the opportunity to recover data and the additional storage this requires on the e-mail server but, because of single instance storage, in reality this is not as big an issue as it might seem.

For a mailbox store, there are three main configuration items of interest:

  • Keep deleted items for (days) defines the time for which a deleted item is still recoverable from within Outlook, even if the deleted items folder has been emptied (for options in the use of this feature, see KC Lemson’s blog).
  • Keep deleted mailboxes for (days) is similar, but defines the number of days that a mailbox will remain (orphaned and available for reconnection to an Active Directory user account), until it is finally removed from the store.
  • Do not permanently delete mailboxes and items until the store has been backed up ensures that regardless of the deleted item retention and mailbox retention intervals, nothing is finally removed until a full backup of the store has successfully taken place.

For a public folder store, keep deleted items for (days) and do not permanently delete items until the store has been backed up are the equivalent options.

All of these options are found on the limits page of the store properties.

There are a number of troubleshooting tools available to the Exchange administrator:

  • Disaster recovery setup mode (note that this doesn’t work on a cluster) can be used to reconnect mailboxes with a store.
  • The application log in Event Viewer is useful (in particular, watch out for 1012, 1018, 1019 and 1020 errors – the Exchange Information Store often gives information about database issues well in advance of failure).
  • Diagnostics and protocol logging, which is an overhead on the server and should only be enabled when diagnosing an issue, allows the level of logging to be tuned. Even when set to none, critical events are logged, with minimum, medium and maximum corresponding to increasing volumes of events that are logged.
  • Message tracking can be used to track messages through the system, optionally recording the subject of the message in the logs. The main consideration with this is the number of days for which the tracking logs should be retained.
  • Exchange System Manager’s monitoring and status tools allow monitoring of services or resources (e.g. thresholds for queue length growth) and alert notification, sending e-mail and/or executing a script (a useful alternative if e-mail is unavailable!) to notify an administrator of issues or even to automatically take corrective action.

When backing up Exchange (whether using the Backup Utility for Windows, or a third party tool), there are a number of issues to consider:

  • A backup is not complete unless it includes the mailbox and public folder stores, with the transaction logs and the Windows system state. In addition, mailbox servers should never have circular logging enabled: storage prices have dropped considerably since the days of Exchange Server 4.0, and without a complete set of logs recovery would still result in some lost data. For front-end servers, Microsoft recommend that there are no stores present.
  • If co-existing with Exchange Server 5.5, the Site Replication Service (SRS) also needs to be considered.
  • Connectors may also contain information to be backed up.
  • Recovery will typically take twice as long as backup, so keep backup times short. If quotas are used to limit mailbox usage, beware that they might just lead to users storing mail in personal folder (.PST) files, leading to unmanaged offline storage (i.e. not backed up), a loss of single instance storage, and possibly network bandwidth issues if the personal folders are stored on a network server.

Further information (and best practice) is contained in the Disaster Recovery Operations Guide for Exchange Server 2003.

Troubleshooting Internet E-mail

One of the things to remember when troubleshooting Exchange issues is that there are so many external factors to consider. Besides the obvious areas of Exchange and the e-mail client (usually Outlook), DNS and the underlying network can create issues.

When examining the process of receiving inbound e-mail, Exchange doesn’t actually do much! The originating server looks up the IP address which corresponds to the mail exchanger (MX) record for the SMTP domain in DNS. The message is then routed across the Internet based on that TCP/IP address and it is only once the message has been received (possibly via a smart host) that Exchange routes the message within the organisation for final message delivery. For outbound e-mail, it is the reverse process and from this we can tell that the two main areas to look at are DNS and TCP/IP.
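
The “lowest preference wins” rule for MX records can be captured in a couple of lines. The sketch below is purely illustrative (the function name and the data shape are my own assumptions), not how Exchange or any other mail server actually implements delivery:

```python
def pick_smtp_target(mx_records):
    """Given (preference, host) pairs from an MX lookup, return the host
    a sending server should try first: the lowest preference value wins."""
    if not mx_records:
        return None
    # Tuples compare element-by-element, so min() finds the lowest preference.
    return min(mx_records)[1]
```

For example, `pick_smtp_target([(10, "backup.example.com"), (1, "mail.example.com")])` returns `"mail.example.com"`.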

The TCP/IP troubleshooting process is well known:

  1. Check the TCP/IP properties for the Exchange server. Are they complete?
  2. Can you ping localhost ( If not, there would appear to be an issue with the network card or protocol stack.
  3. Next, can you ping the server by its own IP address? If not, there would appear to be an issue with the server’s TCP/IP address – is that configured correctly?
  4. Next, can you ping the default gateway? If not, there would appear to be either an incorrectly configured router (default gateway) address, or a physical network issue (is the cable plugged in?)
  5. Finally, can you ping other hosts on the network – e.g. the DNS server? If not, there may be a routing issue (or the DNS server addresses could be incorrect).

Additional troubleshooting steps for mail servers are:

  • Can you connect to port 25 on the mail server using Telnet? If not, then the SMTP service may not be running.
  • Are the server’s host (A) and MX records correctly recorded in both the internal and external DNS, with the correct priority (the lowest preference value, e.g. 1, is tried first)?

Other areas to examine are:

  • Is a DNS suffix required and/or set in the TCP/IP properties?
  • Does the computer name have the correct fully qualified domain name (FQDN) in the system properties?
  • Is DNS working correctly (NETDIAG is a useful command, in particular netdiag /test:dns can be used to identify DNS issues, after which NSLOOKUP can be used to query DNS).
  • Address spaces, e.g. if an organisation hosts two or more domain names, are they all configured with MX records and do users have corresponding e-mail addresses?
  • Size restrictions – both internal and external restrictions can be set. If large messages are not being received, this could be the issue. Note that SMTP virtual server settings can be overridden by global settings.
  • If the SMTP queues have a lot of retries pending, this will often indicate a DNS issue.
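
Before reaching for NETDIAG and NSLOOKUP, a quick “does this name resolve at all” test can be scripted. This sketch (my own naming) uses the standard sockets API and only covers address lookups, not MX records:

```python
import socket

def resolves(hostname):
    """Return True if the hostname resolves to at least one address.

    getaddrinfo performs an A/AAAA lookup via the system resolver, so a
    failure here points at DNS configuration rather than the mail system.
    """
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False
```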

Recovering messages and mailboxes

There are a number of mailbox recovery tools available to an Exchange administrator:

  • Once an Active Directory account is deleted or a mailbox removed in Exchange System Manager, the mailbox is not actually removed but is tombstoned (shown with a red cross in System Manager), and the retention time for deleted mailboxes begins. This action is carried out by the Cleanup Agent, which may be triggered manually if it has not completed its next scheduled run before a mailbox needs to be recovered.
  • The mailbox recovery center allows an administrator to mount a (recovered) store and view all the disconnected mailboxes, from where a matching user account can be found using the Exchange Mailbox Matching Wizard and the mailbox reconnected using the Mailbox Reconnection Wizard.
  • The Exchange Server Mailbox Merge Wizard (ExMerge) can be used to merge data into a mailbox; however the recover mailbox data functionality (new with Exchange Server 2003 SP1) replaces the need to use ExMerge in the majority of recovery cases.
  • Offline folder (.OST) to personal folder (.PST) conversion has now been superseded by the recovery storage group.

Tip: “object not found” errors when Outlook synchronises with the Exchange Server are often caused by invalid entries in the default address book. Rebuilding this will usually resolve such issues.

Recovery storage groups

To use the recovery storage group feature, at least one Exchange Server 2003 server must be available within the organisation. This allows an administrator to create a recovery storage group, into which a database can be mounted for mailbox recovery, avoiding the need to recover on a separate recovery server and export e-mail via a .PST file; however recovery storage groups do have some limitations (after all, they are intended to be used purely for the purposes of recovering data):

  • All protocols except MAPI (required for the Microsoft Exchange Information Store service to access the storage group) are disabled.
  • Mailboxes cannot be directly connected to user accounts (except using ExMerge).
  • No management policies are available (not necessary as no live users).
  • No Exchange maintenance procedures are available (ESEUTIL/ISINTEG).
  • Databases must be mounted manually (e.g. to run ExMerge).
  • Database locations cannot be changed (but database files are not server/location specific and can be copied manually).
  • Only private mailbox stores can be recovered (i.e. not public folder stores).

Exchange Server 2003-aware backup programs will automatically restore to a recovery storage group.

In a disaster recovery scenario, an Exchange administrator could perform what is known as a dial-tone database restoration. This involves creating an empty database and mounting this so that users can continue to send and receive e-mail whilst their original data is recovered. Meanwhile, the failed database can be restored to a recovery storage group and the recover mailbox data feature or ExMerge used to restore the data to the user’s mailbox whilst both stores are online. To save time in the recovery (albeit involving some more user downtime whilst the databases are swapped), it may be appropriate to swap the database files, remount the original store and then merge in the new data from the dial-tone database. Eileen Brown’s blog features a blogcast demonstrating the recovery storage group which explains the process in further detail.

Further information on using Exchange Server 2003 recovery storage groups is available on the Microsoft website.

Database corruption and recovery

Each Exchange Server storage group has its own set of transaction logs, which can be replayed to recover data up to the point at which failure occurred. In my recent post about Exchange Server best practice and preventative maintenance, I wrote about the ESEUTIL and ISINTEG tools. Using ESEUTIL, it is possible to examine the message headers (eseutil /mh database.edb) and examine the resulting output to check the state of the store (clean or dirty) and whether or not any logs are required to be replayed.

To replay the logs, simply ensure that they are available in the correct location and mount the store. Following this, the application event log should record a number of events indicating that it is initiating recovery steps and replaying the logs before recording a successful completion.

Why IE 7.0 must rely on XP SP2

I’ve seen a lot of press coverage over the last week or so about Microsoft’s plans for Internet Explorer (IE) 7.0. One of the major gripes seems to be that it will require Windows XP service pack 2 (SP2).

So what’s wrong with that?

One of the main reasons that people are moving to other browsers (e.g. Firefox) is that IE is perceived as insecure. SP2 is a major security update for the Windows XP desktop operating system. Why provide a new (more secure) browser product to people who do not use the latest security patches on their operating system?

SP2 has been publicly available since August 2004 (6 months ago). The temporary blocking mechanism to hold back automatic SP2 deployment from Windows Update is scheduled to expire on April 12 2005. There is no point in IT Managers burying their heads in the sand and ignoring SP2 any longer. I will concede that Microsoft should have shipped v4.0 of the Application Compatibility Toolkit alongside SP2 (after all, application compatibility is probably the largest barrier to SP2 deployment) but it amazes me that so few organisations have made the move to SP2 after all this time.

For those who are not even using Windows XP, whilst the extra functionality in IE 7.0 may be useful, Microsoft is a product and technology business and it needs to maintain its licensing revenues through getting people to adopt the latest technologies (especially whilst strategic products are being delayed by major security rewrites).

If an older platform is seen as “good enough” then fine; but “good enough” shouldn’t just be about functionality – it needs to consider the whole picture – including security. It may be that the risk assessment considers remaining on a legacy (possibly unsupported) platform more favourable than the risk (and cost) of upgrading. That’s fine too – as long as that risk is acceptable to the business.

My recommendation? Organisations who are using Windows XP should fully test their applications and carry out a controlled upgrade to SP2 as soon as possible. Those who continue to use older operating systems (especially Windows 9x, ME, and NT) should urgently consider upgrading. Then keep patch levels up-to-date, for example, by using Microsoft Software Update Services (SUS) and the Microsoft Baseline Security Analyzer (MBSA). IT users can’t continue to complain about the security of the Microsoft platform if they won’t deploy the latest (or even recent) patches.

Direct access to virtual floppy disk files

I’m sure that, from time to time, many Virtual PC users would find the ability to mount a virtual floppy disk (.VFD) file and directly access its contents (create, view, edit, rename or delete files, format the disk, or launch a program) useful. Well you can – using Ken Kato’s virtual floppy driver. Ken also has a whole load of tools for VMware users on his website.

A new use for virtualisation?

Running legacy applications that cannot be ported to a modern operating system – in their own legacy environment, inside a virtual machine on a modern host – is often cited as one of the advantages of virtualisation.

I have a slightly different issue. As a consultant, I spend as much of my time with clients as possible and visits to the office are rare. I’ve also had a variety of hardware issues of late and frequently rebuild the notebook PC that I use for work. I use my own SUS server for software updates but need my PC to be a member of our company’s Active Directory to access my corporate e-mail using Outlook 2003’s RPC over HTTP functionality. Being away from the office means that until an administrator can join my rebuilt PC to the domain, I’m restricted to using Outlook Web Access (as good as the latest version is, it’s still no substitute for the real thing).

It seems that virtualisation can provide the answer to my conundrum. Earlier today, the internal support guys added one of my virtual machines to the corporate domain. Now I can access my corporate applications, run the company’s preferred security products, and generally be a good corporate citizen inside Microsoft Virtual PC and do what I like with my host operating system. I can rebuild my PC at will, then simply reinstall Virtual PC (or Virtual Server), fire up my corporate virtual machine and carry on working.

Windows server system service overview and network port requirements

As security becomes an ever higher priority and network administrators implement extra layers of defence, including client PCs running personal firewall products, systems administrators and support staff need to know which ports and protocols Microsoft operating systems and programs require for network connectivity in a segmented network.

Microsoft have addressed this with Microsoft knowledge base article 832017, which details the essential network ports, protocols and services that are used by Microsoft client and server operating systems, server-based programs and their subcomponents in the Microsoft Windows server system.

To blog, or not to blog…

Self publication on the Internet has existed in various forms for many years, initially via newsgroups and then through the world wide web; however it is the rise of (we)blogging that has taken this to new levels, as software provides two key features:

  • Automatic page generation (using blog engines such as Blogger, TypePad or .Text).
  • Syndication, allowing aggregators to republish content and readers to keep track through online or offline blog clients.

In his recent article, harness the power of blogs, which appeared in IT Week, Tim Anderson comments that:

    “[Companies] have picked up the potential of the medium, and use it to their advantage, encouraging staff to blog. This does not come so easily to companies that have a culture of secrecy. But frankly, maintaining secrecy in the blog era is nearly impossible.”

At Conchango, many of our consultants have maintained their own blogs for some time now. Recognising the potential in the technology, company-sponsored blogging has recently been launched via the Conchango blogging community and blogging is encouraged, as noted last month by Scalefree in why professional service firms should blog. As far back as June 2003, in another IT Week column, Ken Young reported on why blogging is good for business and to quote part of the article:

    “Just like the use of instant messaging, personal publishing through weblogs is likely to get an icy reception in most firms concerned about security and ‘need-to-know’ issues. But smart companies will see that the advantages far outweigh the negatives. They will recognise that weblogging is a new form of knowledge management that has a vital time-based ingredient, making it easy to see what another person is currently working on or discussing. If you need encouragement, bear in mind that weblogs are now a common feature in search results on Google”.

Blogging is big. That’s why so many of the major search engines now have a blogging presence – Google owns Blogger, Bloglines has been bought by Ask Jeeves and Microsoft (MSN Search) recently launched MSN Spaces.

But there is a dark side too – as Thomas Lee notes in his recent post on the dangers of blogging, some people may find it difficult to know what can and cannot be said without getting fired, leading to some bloggers being “dooced”. Blogger is even posting advice on how not to get fired because of your blog.

One example of an unlucky blogger is Ellen Simonetti (aka Queen of Sky), who was sacked for posting “inappropriate” pictures to her blog, diary of a flight attendant. From the pictures that I have seen, it appears that her employer took issue with the fact that she was wearing her company uniform and was on board one of their aeroplanes at the time that the pictures were taken. It seems to me (and remember that my blog entries are based on personal opinions and do not necessarily reflect the views of my employer) that whilst invoking a disciplinary process may have been seen by her employers as necessary, firing her was complete overkill, especially as CNN reports that the same (cash-strapped) airline is now investing in sexier uniforms.

Quoting Anderson’s January comment in IT Week again:

    “Firms are just getting to grips with email privacy and appropriate use policies. Such policies should now be extended to blogs, before more people lose their jobs for breaching non-existent guidelines.”
    “It is also wise to consider the PR impact of sackings and litigation and of acknowledging problems and trying to fix them. The bottom line is that blogs work best for firms with nothing to hide. That means they help to drive up standards, which has to be good news.”

Luckily, my employer does have such a policy (and I have been conscious to follow it whilst writing this post on the pros and cons of blogging). For anyone thinking of instigating such a policy, Ray Ozzie’s Weblog has some useful advice and if you are thinking about starting out with a blog, check out digital diaries: the art of blogging.

IBM Rescue and Recovery with Rapid Restore

One of the technologies that I’ve been working with recently is IBM’s Rescue and Recovery with Rapid Restore. Another of IBM’s ThinkVantage technologies, this is provided free of charge with an IBM PC (and can be licensed for other OEMs’ PC models).

In essence, Rescue and Recovery writes a backup of the entire PC hard disk to a hidden partition on the local hard drive, a second hard disk, recordable media, a network drive, or a USB device. The first backup is a base image; subsequent backups are differential. Backups can be scheduled and up to 31 backups can be stored before overwriting. In a recovery scenario, the process is simply a case of booting from a rescue CD (which is easily generated and is not machine-specific), or pressing the Access IBM button on selected IBM PCs, then selecting the backup to use and the file(s) to be recovered. Individual files, or the entire system, may be recovered, even preserving selected data and logon credentials written since the last backup.

All configuration settings are stored in an easily edited text file with full product documentation including customisation available in PDF format.

PCs with airbags – no joke!

IBM’s ThinkPad range of notebook PCs has always had some useful (and unusual) features in the detail (collectively dubbed ThinkVantage technologies). For example, the ThinkPad T40 which I was using a few days back had a reading light built into the top of the screen to illuminate the keyboard whilst working in the dark.

We’re all used to cars with airbags to protect the occupants in an impact, but IBM have added an airbag to selected notebook PC models (I kid you not!). Known as the active protection system, the feature adds an integrated motion sensor that continuously monitors movement of the notebook to temporarily stop the hard disk if necessary, helping to avoid data loss.

Like an airbag’s sensor, the active protection system can detect sudden changes in motion and park the disk read/write heads within 500ms – helping to prevent head or disk damage (as hard disks are most susceptible to damage whilst active). Once stabilised, the heads return to the normal position and continue working as usual. The technology is even adjustable for environments where vibrations are normal (e.g. on a train).

Multiple factor security identification

Another interesting piece in this week’s IT Week was Neil Barrett’s comment article entitled age-old tactics create PC security. Prior to reading this, I had not quite grasped the principles that constitute multiple factor security identification – Barrett’s article gives an excellent explanation.