An introduction to MPLS

This content is 20 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Mansoor Majeed gave a presentation last week about multi-protocol label switching (MPLS) to Conchango’s Infrastructure Architecture community of practice. Mansoor doesn’t have a blog of his own, so I’m taking this opportunity to write a little bit about what I learnt.

I first came across MPLS when I was working for a magazine distribution company in Australia, which had expensive frame relay links (running for thousands of kilometres) and was looking at using VPNs across the Internet. The main reason we didn’t go ahead was that it was not possible to ensure quality of service (QoS) for such connections, but this is exactly the sort of situation where MPLS would provide similar advantages in terms of routing flexibility, at a lower cost than traditional point-to-point links.

MPLS is a scheme typically used to enhance an IP network, and is based on Cisco’s tag switching technology. With tag switching, a switch maintains a map of logical interfaces with a tag for each virtual LAN (VLAN), swapping the tag to forward traffic to the appropriate interface. Cisco define tag switching as a:

“High-performance, packet-forwarding technology that integrates network layer (layer 3) routing and data link layer (layer 2) switching and provides scalable, high-speed switching in the network core. Tag switching is based on the concept of label swapping, in which packets or cells are assigned short, fixed-length labels that tell switching nodes how data should be forwarded.”

and MPLS as a:

“Switching method that forwards IP traffic using a label. This label instructs the routers and the switches in the network where to forward the packets based on pre-established IP routing information.”

With MPLS, organisations use their existing network infrastructure to connect to the service provider’s MPLS network, over which services that require QoS can be provided to connect remote sites.

Switches work at layer 2; routers at layer 3; and, as can be seen from Cisco’s tag-switching definition above, MPLS crosses the boundary between the two layers. MPLS combines traffic routing with the ability to compute a path at source and to distribute information about network topology and attributes. The main constraint is that it uses the shortest path first (SPF) algorithm to calculate the path across the network.

MPLS works by label edge routers (LERs) on the incoming edge of the MPLS network adding an MPLS label to the top of each packet. The label is assigned according to some criteria (e.g. the destination IP address) and is then used to forward the packet through the subsequent label switching routers (LSRs). The LERs on the outgoing edge strip off the label before final delivery of the original packet.
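
To make the label-swapping idea concrete, here is a minimal sketch in Python (all label values, prefixes and interface names are made up for illustration – this is a conceptual model, not a real implementation):

# Ingress LER: classify by forwarding equivalence class (FEC) and push a label
FEC_TABLE = {"10.1.0.0/16": 17, "10.2.0.0/16": 18}

# Core LSR: (incoming interface, incoming label) -> (outgoing interface, new label)
LSR_TABLE = {("eth0", 17): ("eth2", 42), ("eth0", 18): ("eth3", 99)}

def ingress(packet, fec):
    # push a label chosen by the packet's FEC (e.g. its destination prefix)
    return {"label": FEC_TABLE[fec], "payload": packet}

def lsr_switch(in_if, labelled):
    # the LSR never looks at the IP header – only at the short label
    out_if, out_label = LSR_TABLE[(in_if, labelled["label"])]
    return out_if, {"label": out_label, "payload": labelled["payload"]}

def egress(labelled):
    # pop the label and deliver the original packet
    return labelled["payload"]

labelled = ingress("ip-packet-for-10.1.2.3", "10.1.0.0/16")
out_if, labelled = lsr_switch("eth0", labelled)
print(out_if, egress(labelled))  # eth2 ip-packet-for-10.1.2.3

The point to note is that the core lookup is an exact match on a short, fixed-length label rather than a longest-prefix match on a full IP address, which is what makes switching in the network core fast.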

[Image: Multi-protocol label switching (MPLS)]

So why invest in MPLS? The main reason is lower cost for higher performance (the figures I have seen suggest that bandwidth can be increased by 250-500% for a comparable cost), but other advantages include scalability, guaranteed bandwidth, QoS and the fact that MPLS will integrate with any transport method (IP, ATM, frame relay, etc.). Another potential advantage is that the MPLS provider may also provide hosting services, allowing a company’s public Internet connection to be hosted at the MPLS provider’s datacentre for a minimal cost (cf. the flexibility of managing services locally).

There are some potential disadvantages though: firstly around security (running confidential traffic across a service provider’s network – although this could be encrypted if required); but more significantly, at the time of writing there are no partnerships between service providers in different countries, so, for example, QoS would not be available for UK customers once their traffic left the UK service provider’s network. Over time this may be overcome with a system of MPLS points of presence (PoPs).

Another possible growth area for MPLS is the expansion of voice over Ethernet (VoE) technology. This is not the same as voice over IP (VoIP), but provides a similar service, effectively linking the MPLS network to the various telcos’ PSTNs. At the moment, a company would typically route all voice traffic via the local telco’s PSTN exchange, with line rental charged per channel connection, per month or quarter. Using VoE, an IP gateway can be used to run voice traffic across the MPLS network up to the point where it needs to transfer to another carrier’s network, resulting in significant savings on the cost of line rental.

That’s just a flavour of what MPLS is about. For further reading, there is a Cisco white paper about MPLS traffic engineering, onestopclick has an MPLS buyer’s guide and, for a view on what to watch out for, there is Techworld’s “don’t get caught out by MPLS” article.

How to migrate users between Active Directory forests using ADMT when the source and target domain names are similar

One of my clients is undertaking a domain consolidation process, moving to a new Active Directory (AD) forest called companyname.com with two child domains – emea.companyname.com and americas.companyname.com. All of the user accounts from the various NT 4.0 and Windows 2000 domains around the organisation are being migrated into this structure using the Active Directory migration tool (ADMT) but access will be required to resources in the legacy domains (at least in the short term).

This is all reasonably straightforward, or it was until my colleagues unearthed a new domain in Belgium called companyname.be. ADMT relies on external (NT 4.0) trust relationships, which are established using NetBIOS names (not DNS); and because emea.companyname.com already has a Kerberos trust with companyname.com, it will not allow an external trust to be created with companyname.be. We didn’t want to have to recreate the users, so I had to find a way around the problem. It’s not that complex – just a two-step migration – but we also needed to confirm that the sIDHistory attribute for each user would remain in place if the account was migrated more than once (in order to maintain access to resources in the original domain).

To prove this, I created four virtual machines (which run very slowly when the host is a notebook PC with only 1GB of RAM…) representing domain controllers as follows:

  • dc1.companyname.com
  • dc2.emea.companyname.com
  • dc3.companyname.be
  • dc4.transfer.local

I created three users in companyname.be (imaginatively named user1, user2, and user3) and a share called userdata (with some test files), to which users 1 and 2 had access and user 3 was denied access. I also disabled user2 and created a group to which all three users were added as members.

The intention was to transfer the user accounts from companyname.be to transfer.local, and then to perform a second migration from transfer.local to emea.companyname.com, maintaining the account status (enabled or disabled), group membership and passwords, and then using a connection to the files on the share as a test of migrated sIDHistory.

Once I had DNS resolving names across the three forests, I installed ADMT on dc4.transfer.local and migrated the users. ADMT is fairly straightforward to set up and run, but it is necessary to read and fully understand the requirements in Microsoft knowledge base article 326480, with the main points for my client’s scenario being:

  • The target domain needs to be running in Windows 2000 native mode or later (without this, the sIDHistory attribute does not exist).
  • The computer running ADMT must be a member of either the source or the target domain.
  • The source domain must trust the target domain (in order for user and group migration to take place).
  • Administrator rights are required in both the source and target domains (e.g. by adding the target domain’s Administrator account to the source domain’s Administrators group and vice versa).

There is also an (undocumented) requirement that a non-blank password must be used for the account running ADMT (because I was working on a dedicated test system, I had originally been using Administrator accounts with blank passwords, which needed to be changed). Some troubleshooting information is also available in Microsoft knowledge base article 322970.

Once the above are in place, ADMT will complete some of the other requirements for user and group migration, namely:

  • Creating a new (empty) local group in the source domain named sourcedomainname$$$.
  • Enabling auditing for the success and failure of Audit account management on both domains in the Default Domain Controllers policy.
  • Configuring the source domain to allow RPC access to the SAM by setting HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\LSA\TcpipClientSupport to 1 on the PDC Emulator for the source domain and restarting that server (a sketch of the equivalent change follows this list).
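
ADMT applies the registry change above itself; purely as an illustration (my sketch, not how ADMT actually does it), the equivalent change expressed in Python, using the standard winreg module and run locally on the source domain’s PDC Emulator with administrative rights, might look like this:

import winreg

# Allow remote RPC access to the SAM, as required for ADMT migrations
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                    r"SYSTEM\CurrentControlSet\Control\LSA",
                    0, winreg.KEY_SET_VALUE) as lsa:
    winreg.SetValueEx(lsa, "TcpipClientSupport", 0, winreg.REG_DWORD, 1)

# The PDC Emulator must still be restarted for the change to take effect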

Microsoft knowledge base article 326480 also describes the process for installing the password migration DLL in order to migrate user passwords.

It should be noted that the configuration items above open up serious security weaknesses, which must be addressed once the migration is completed.

Once the users (with passwords) and group (with membership) had been migrated once, I could install ADMT on dc2.emea.companyname.com and migrate them again, this time from transfer.local to emea.companyname.com.

Finally, to test that the sIDHistory attribute was allowing access to resources in the original source domain (companyname.be), I logged on to dc2.emea.companyname.com and started a command prompt, from which I could issue the following commands:

net use e: \\dc3\userdata /user:emea\user1 password

(i.e. a connection to a resource in the original companyname.be domain, using an account in the emea.companyname.com domain, which, as expected, allowed access to the share, with browsing subject to the file permissions on the original resources).

net use e: /del
net use e: \\dc3\userdata /user:emea\user2 password

(as expected, access was denied as the account was disabled).

net use e: /del
net use e: \\dc3\userdata /user:emea\user3 password

(as expected, a connection was made, but access was denied when an attempt was made to browse the share, due to the permissions denying user3 access to resources in the companyname.be domain).

net use e: /del

All of this indicated a successful migration from companyname.be to emea.companyname.com and so I decided to look a bit closer at the sIDHistory attribute which allows all of this to work.
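
Conceptually, sIDHistory works because the user’s access token is built from the account’s current objectSID plus every SID held in sIDHistory, while the ACLs on resources in the old domain still reference the old SIDs. A simplified model in Python (with made-up SID strings) looks like this:

# The migrated user's token: current SID plus SIDs carried in sIDHistory
token_sids = {
    "S-emea-user1",      # objectSID in emea.companyname.com
    "S-transfer-user1",  # from sIDHistory (transfer.local)
    "S-be-user1",        # from sIDHistory (original companyname.be SID)
}

# The unchanged ACL on \\dc3\userdata still allows the original SIDs
acl_allow = {"S-be-user1", "S-be-user2"}

# Access is granted if any SID in the token matches an allowing entry
print(any(sid in acl_allow for sid in token_sids))  # True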

One of my colleagues had alerted me to an article on security concerns for migrations and upgrades to Windows Active Directory, which confirmed that a single user account might have numerous sIDHistory entries, depending on how many domains contained the user name before the migration and consolidation.

After installing the ADSI Edit support tool from the Windows 2000 media, I could see that the sIDHistory attribute had two values on the emea.companyname.com version of the object (shown in the accompanying screenshot), a single value on the transfer.local version, and was not present on the original companyname.be version.

[Image: sIDHistory in ADSI Edit]

Rather unhelpfully, the ADSI Edit representation of sIDHistory is not in the usual form, but if the ldp.exe support tool is used then, by connecting to the server, binding as an administrator and viewing the directory tree, the sIDHistory can be seen in its normal form alongside the objectSID. As expected, the objectSID for user1@companyname.be (S-15-78B99911-320A1743-74B49FF8-451) appeared as the sIDHistory attribute for user1@transfer.local; and user1@emea.companyname.com had two values for sIDHistory – the original objectSID from user1@companyname.be (carried over in the sIDHistory from user1@transfer.local) and the objectSID from user1@transfer.local (S-15-11DA0ABB-64495118-320A1743-454).

[Image: sIDHistory in the LDAP editor]

Incidentally, had this method not worked, my colleagues and I had identified two further potential methods of migrating the users and groups which I did not try:

  1. Install a second domain controller in companyname.be, let it replicate, take the original offline and seize the FSMO roles, then upgrade to Windows Server 2003 and rename the domain, allowing a one-step migration to emea.companyname.com to be used. After this, the Windows Server 2003 domain controller could be taken offline and the original companyname.be Windows 2000 server brought back online (seen as a very complicated solution).
  2. Use the clone principal utility to clone the original accounts in the new domain (workable, but requiring some scripting skills and potentially a lot of time).

After many hours of waiting for virtual machines to catch up with my mouse/keyboard, I don’t think I’ll be trying them just yet…

Some clarity around Microsoft’s operating system release cycles

I normally avoid blogging about Microsoft’s release plans for new technology as they tend to be out of date almost as soon as they are written; however, at last week’s Microsoft Technical Roadshow, John Howard gave one of the clearest examples I’ve ever seen of Microsoft’s plans for new operating system releases.

Microsoft aims to provide a major operating system release every four years with release updates approximately half way between major releases. For example, Windows Server 2003 was released on 28 March 2003, Windows Server 2003 R2 is expected during 2005 (delayed due to the late shipping of service pack 1) and the next version of Windows Server (codenamed Longhorn) can be expected in 2007. Following this pattern, we can expect an update to Longhorn in 2009 and the following version of the Windows Server product (codenamed Blackcomb) to make an appearance in 2011.

On the support side, mainstream service packs and updates will be provided for at least 5 years from the date of a major release (i.e. until 2008 for Windows Server 2003) with extended support available for a further 5 years.

No NAP until Longhorn

Last year I commented that network access protection (NAP) had slipped from a planned feature pack for ISA Server 2004 to Windows Server 2003 Release 2 (R2). Well, it seems that has changed. Confirming what I wrote last March, when I blogged about the need for network segmentation and remediation, Steve Lamb commented at last week’s Microsoft Technical Roadshow that NAP will be a feature of the next version of Windows Server (codenamed Longhorn) and not in the R2 release scheduled for later this year.

Apparently the reasons for this are that NAP will require kernel mode changes (and there will be no kernel mode changes in R2) and the extra time will allow Microsoft and Cisco to ensure that NAP (Microsoft) and NAC (Cisco) play nicely together.

Until then we will have to make do with network access quarantine control (originally part of the Windows Server 2003 resource kit and productionised as part of the release of Windows Server 2003 service pack 1). The main difference is that network access quarantine control only allows quarantining of inbound connections via the Windows routing and remote access service, whereas NAP will support quarantine for wired and wireless LAN connections too.

How about this for a test system…

In one of the SQL Server sessions at last week’s Microsoft Technical Roadshow, Michael Platt showed the first three minutes or so from an MSDN Channel 9 video. In it, we saw one of the systems at Microsoft’s labs in Redmond where ISVs and OEMs assist the SQL Server team with their performance testing and benchmarking – an HP Integrity Superdome system with 64 64-bit Intel Itanium 2 CPUs, 1TB of RAM and a couple of thousand 18.2GB disks. Why so many small disks? Apparently it’s about providing parallel read capacity to increase the overall system throughput and hence run the CPUs at their limits.

The whole system cost in the region of $5.1m and the full details of the benchmark tests may be found on the Transaction Processing Performance Council website.

Interestingly, one of the problems encountered during the benchmarking was running out of power to spin up all of the disks and having to install a new power distribution unit at a cost of $250,000!

How Microsoft does IT

[Image: Microsoft IT Showcase]

I was at a presentation last week where some interesting statistics were given about Microsoft’s own IT operations:

  • 300,000 devices for 92,000 users in 89 countries.
  • 7,000,000 remote connections per month.
  • 3,000,000 internal e-mails per day.
  • 100,000 e-mail accounts with 99.99% mailbox availability.
  • 1.7TB SAP database (running on Microsoft SQL Server).

Now who says that Microsoft software doesn’t scale to enterprise levels? For anyone who has ever wondered how Microsoft runs its own IT, check out the Microsoft IT Showcase.

Crazy ringtones – could skins for smartphones be the next big thing?

It had to happen – with music single sales falling and downloads incorporated into the official UK music charts (since 17 April 2005), it was inevitable that one day a ringtone-derived music single (the logical evolution of the music single-derived ringtone) would outsell a major music act. Keni was outraged to hear the crazy frog ringtone on my phone today (and no, I didn’t buy it – my brother sent it to me via Bluetooth) but I’m amazed at just how much attention the crazy frog has generated, seeing as it all started off as a Swedish student imitating his mate’s two-stroke scooter (hmm… I do something like that with my son in the shopping trolley as we whizz ’round Tesco…).

What seems particularly strange is how people are petitioning to get this off our airwaves (even complaining to the UK Advertising Standards Authority) – surely if a company has enough money to pay for this level of advertising (and if you’re going to make £10m from selling a single ringtone, that should be plenty), then let them do it. Even the “no advertising here” BBC runs ads on its World Service, and in the RHS Chelsea Flower Show coverage Alan Titchmarsh regularly mentions that the event is “supported by Merrill Lynch” (there goes the last of my street cred’).

The ringtone download market is growing at a phenomenal rate and according to The Independent, the typical £3 cost of a realtone is divided up as follows:

  • Music publishers 32p.
  • Content aggregators and distributors 64p.
  • Mobile operators 75p.
  • Record labels £1.29.

I was interested to hear Keni comment today that with the launch of the Windows Mobile 5.0 platform (formerly codenamed Magneto), the market for skins to customise smartphones could potentially be as large as the ringtone market (especially with the convergence of consumer-focused mobile phones and digital music players). We’ll have to see if that prediction comes true, but in the meantime I have to confess that I quite like the crazy frog… and the Nokia tune has been driving me mad for the last ten years.

How to take part in some time travel

So you thought that old version of your website was gone forever? It may have been a little naive of me, but I figured that once I put up a new version of my website, that was it – the old one was overwritten.

Not so, it seems – today I stumbled across the Internet Archive wayback machine, a service that allows people to enter a URL, select a date range, and then surf an archived version of the website. Scarily, I was able to call up old versions of my own website going back several years. Not everything is in there, it takes a while to load, many graphics are missing, and if a site wasn’t picked up by the Internet Archive crawler then it just won’t appear – but how about seeing old versions of www.microsoft.com?
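
If I remember the URL scheme correctly (an assumption on my part – it may change), archived copies can also be addressed directly, with the timestamp selecting the nearest available snapshot, for example:

http://web.archive.org/web/19970101000000/http://www.microsoft.com/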

I guess this can be useful. For example, I used to work for a company called ICL. That name is long since consigned to the history books (they are now trading as Fujitsu Services), but it is still available on the wayback machine. I managed to find a press release from back when the BBC and ICL jointly announced BBC Online in September 1996, as well as what ICL was saying about millennium date compliance in the middle of 1997.

Most web administrators will know that they can control web crawlers (like the one behind the Internet Archive) using a robots.txt file in the root of the site (there is even an online robots.txt generator). Once a suitable robots.txt file is in place in the root of the web server, the wayback machine can be made to re-crawl the site, pick up the new file, and remove the excluded documents from the archive.
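
For example, the Internet Archive’s crawler identifies itself as ia_archiver, so a robots.txt along the following lines (a sketch – check the archive’s own documentation for the current exclusion rules) should both block future crawls and remove the previously archived copies:

User-agent: ia_archiver
Disallow: /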

Now it seems I need to go and update the robots.txt files on my websites…

Anyone worried about running Microsoft ISA Server as a firewall?

Over the last few years just about every network administrator I’ve worked with has laughed at the idea of a Microsoft firewall in an enterprise environment (at least as a front line of defence – many organisations use Microsoft ISA Server behind another firewall). When forced by the American parent company to run Check Point FireWall-1 on a Windows platform instead of a Nokia appliance server, one of my ex-colleagues in the European subsidiary of a major fashion design, marketing and retail company was disgusted; but in all honesty, a well-patched and well-managed Windows system can be just as secure as a well-patched Linux one (and, conversely, badly patched systems are badly patched, whoever the operating system vendor may be).

The Common Criteria Evaluation and Certification Scheme (CCS) is an independent third-party evaluation and certification service for measuring the trustworthiness of IT security products, recognised by governments in Canada, the United States, the United Kingdom, the Netherlands, Germany and France.

Windows 2000 Professional, Server, and Advanced Server with service pack 3 and the hotfix described in Microsoft knowledge base article 326886 have been certified to common criteria evaluation assurance level (EAL) 4+; and ISA Server 2000 with service pack 1 and feature pack 1 (in firewall mode) has EAL 2 certification. According to Microsoft, Windows XP with service pack 2, Windows Server 2003 with service pack 1 and ISA Server 2004 are all undergoing EAL 4+ certification at present.

In addition, ICSA Labs tests firewall products against a standard yet evolving set of criteria, and Microsoft ISA Server 2000 with service pack 1 running on Windows 2000 Server with service pack 4 has been certified by ICSA. As a side note, for anyone looking at the area of firewalls, the ICSA firewall buyer’s guide is worth a read.

So it seems that a Windows server can be secure enough to run a firewall; and that Microsoft’s firewall product is also pretty secure. EAL 2 might not be the highest certification level, but if ISA Server 2004 achieves EAL 4+, then maybe all of those network administrators’ minds can be put to rest.

Things to ask your ISP before you sign up

A couple of days back, I came across a forum post on things to ask your ISP before you sign up – looks like some good advice to me.