Putting PKI into practice

This content is 20 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Recently, I blogged about public/private key cryptography in plain(ish) English. That post was based on a session which I saw Microsoft UK’s Steve Lamb present. A couple of weeks back, I saw the follow-up session, where Steve put some of this into practice, securing websites, e-mail and files.

Before looking at my notes from Steve’s demos, it’s worth picking up on a few items that were either not covered in the previous post, or which could be useful in understanding what follows:

  • In the previous post, I described encryption using symmetric and asymmetric keys; however many real-world scenarios involve a hybrid approach. In such an approach, a random number generator is used to seed a symmetric session key, which acts as a shared secret. That session key is then encrypted asymmetrically for distribution to one or more recipients; once distributed, the (fast) symmetric session key is used to encrypt the actual data (see the sketch after this list).
  • If using the certificate authority (CA) built into Windows, it’s useful to note that this can be installed in standalone or enterprise mode. The difference is that enterprise mode is integrated with Active Directory, allowing the automatic publishing of certificates. In a real-world environment, multiple tiers of CA would be used, and standalone or enterprise CAs can both act as either root or issuing CAs. It’s also worth noting that whilst the supplied certificate templates cannot be edited, they can be copied and the copies amended.
  • Whilst it is technically possible to have one public/private key pair and an associated certificate for multiple purposes, there are often administrative reasons for separating them. For example, one would not use a common key for both house and car, in case one needed to be changed without giving away access to the other. Another analogy is single sign-on, where it is convenient to have access to all systems with one set of credentials, but it may be more secure to have separate access permissions for each computer system.
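
To make the hybrid approach concrete, here’s a rough sketch using OpenSSL commands (this is my illustration rather than anything from Steve’s session, and all of the file names are placeholders):

# generate a random symmetric session key (the shared secret)
openssl rand -hex 32 > sessionkey.txt

# encrypt the bulk data quickly using the symmetric session key
openssl enc -aes-256-cbc -in data.txt -out data.enc -pass file:sessionkey.txt

# encrypt the session key asymmetrically with the recipient's public key for distribution
openssl rsautl -encrypt -pubin -inkey recipient-public.pem -in sessionkey.txt -out sessionkey.enc

# the recipient recovers the session key with their private key, then decrypts the data
openssl rsautl -decrypt -inkey recipient-private.pem -in sessionkey.enc -out sessionkey.txt
openssl enc -d -aes-256-cbc -in data.enc -out data.txt -pass file:sessionkey.txt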

The rest of this post describes the practical demonstrations which Steve gave for using PKI in the real world.

Securing a web site using HTTPS is relatively simple:

  1. Create the site.
  2. Enrol for a web server certificate (this authenticates the server to the client and it is important that the common name in the certificate matches the fully qualified server name, in order to prevent warnings on the client).
  3. Configure the web server to use secure sockets layer (SSL) for the site – either forced or allowed.
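
If the certificate in step 2 is to come from an external CA rather than the Windows CA web enrolment pages, the request is typically generated as a certificate signing request (CSR). As an illustration (the names here are placeholders – remember that the common name must match the fully qualified server name):

# generate a new key pair and a CSR to submit to the CA
openssl req -new -newkey rsa:2048 -nodes -keyout www-example-com.key -out www-example-com.csr -subj "/C=GB/O=Example Ltd/CN=www.example.com"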

Sometimes web sites will be published via a firewall (such as ISA Server), which to the outside world would appear to be the web server. Using such an approach for an SSL-secured site, the certificate would be installed on the firewall (along with the corresponding private key). This has the advantage of letting intelligent application layer firewalls inspect inbound HTTPS traffic before passing it on to the web server (either as HTTP or HTTPS, possibly over IPSec). Each site is published by creating a listener – i.e. telling the firewall to listen on a particular port for traffic destined for a particular site. Sites cannot share a listener, so if one site is already listening on port 443, other sites will need to be allocated different port numbers.

Another common scenario is load-balancing. In such circumstances, the certificate would need to be installed on each server which appears to be the website. It’s important to note that some CAs may prohibit such actions in the licensing for a server certificate, in which case a certificate would need to be purchased for each server.

Interestingly, it is possible to publish a site using 0-bit SSL encryption – effectively unencrypted, but still appearing to be secure (the URL includes https:// and a padlock is displayed in the browser). Such a scenario is rare (at least among reputable websites), but is something to watch out for.

Configuring secure e-mail is also straightforward:

  1. Enrol for a user certificate (if using a Windows CA, this can either be achieved via a browser connection to http://servername/certsrv or by using the appropriate MMC snap-ins).
  2. Configure the e-mail client to use the certificate (which can be viewed within the web browser configuration, or using the appropriate MMC snap-in).
  3. When sending a message, opt to sign or encrypt the e-mail.
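
For the curious, the signing and encryption operations that the e-mail client performs can be illustrated from the command line with OpenSSL’s S/MIME support (this is my example, not part of Steve’s Outlook demonstration, and the file names are placeholders):

# sign a message using the sender's certificate and private key
openssl smime -sign -text -in message.txt -out signed-message.eml -signer my-cert.pem -inkey my-key.pem

# encrypt a message so that only the holder of the recipient's private key can read it
openssl smime -encrypt -text -in message.txt -out encrypted-message.eml -des3 recipient-cert.pem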

Signed e-mail requires that the recipient trusts the issuer, but they do not necessarily need to have access to the issuing CA; however, this may be a reason to use an external CA to issue the certificate. Encrypted e-mail requires access to the recipient’s public key (in a certificate).

To demonstrate what happens when a signed e-mail is tampered with, Steve turned off Outlook’s cached mode and directly edited a message on the server (using the Exchange Server M: drive to edit a .EML file).

Another possible use for a PKI is in securing files. There are a number of levels of file access security: BIOS passwords (problematic to manage); syskey mode 3 (useful for protecting access to notebook PCs – a system restore disk will be required for forgotten syskey passwords/lost key storage); good passwords/passphrases to mitigate known hash attacks; and finally the Windows encrypting file system (EFS).

EFS is transparent to both applications and users, except that encrypted files are displayed as green in Windows Explorer (cf. blue for compressed files). EFS is also practically infeasible to break, so recovery agents should be implemented in case keys are lost. There are, however, some EFS “gotchas” to watch out for:

  • EFS is expensive in terms of CPU time, so may be best offloaded to hardware.
  • When using EFS with groups, if the group membership changes after the file is encrypted, new users are still denied access. Consequently using EFS with groups is not recommended.
  • EFS certificates should be backed up – with the keys! If there is no PKI or data recovery agent (DRA) then access to the files will be lost (UK readers should consider the consequences of the Regulation of Investigatory Powers Act 2000 if encrypted data cannot be recovered). Windows users can use the cipher /x command to store certificates and keys in a file (e.g. on a USB drive). Private keys can also be exported (in Windows) using the certificate export wizard.

Best practice indicates:

  • The DRA should be exported and removed from the computer.
  • Plain text shreds should be eliminated (e.g. using cipher /w to overwrite free space with zeroes, 0xFF bytes and random data – see the example below).
  • Use an enterprise CA with automatic enrollment and a DRA configured via group policy.
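
The cipher.exe commands referred to above look something like this in practice (the paths are illustrative):

rem encrypt a folder and its contents using EFS
cipher /e /s:d:\confidential

rem back up the EFS certificate and private key (e.g. to a USB drive)
cipher /x e:\efs-backup

rem overwrite free space so that plain text remnants of encrypted files cannot be recovered
cipher /w:d:\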

More information can be found in Steve’s article on improving web security with encryption and firewall technologies in Microsoft’s November 2005 issue of TechNet magazine.

An introduction to voice telecommunications for IT professionals

I’ve invested a lot of time recently into learning about two of Microsoft’s collaboration products – Live Communications Server 2005 (real-time presence information, instant messaging and VoIP) and the forthcoming version of Exchange Server (e-mail, scheduling and unified messaging). Both of these products include telephony integration, and for many IT architects, designers and administrators that represents a whole new field of technology.

One of the things I found particularly useful at the Exchange Server “12” Ignite Tour (and which wasn’t covered by the NDA), was when Microsoft UK’s Stuart Clark talked about foundation concepts for unified messaging – i.e. an introduction to voice telecommunications for IT professionals, which I’ve reproduced here for anyone who may find it useful.

There are three main types of business phone system – Centrex, key telephone systems and private branch exchange (PBX) – of these, the PBX is by far and away the most common solution employed today.

A PBX can be thought of as a self-contained telephone system for one or more offices. It allows “internal” calls between telephone extensions (without the need for an operator) as well as allowing many internal extension numbers to share common trunk lines to the telephone company’s network, representing cost savings on outbound calls and making it less likely for an inbound call to be greeted with a busy tone. Typically each office (or campus) would have its own PBX, and these can be connected using tie lines to allow internal, cross-organisational calls, without using public telephone networks (i.e. just using internal extension numbers).

There are various types of PBX available:

  • Analogue PBXs retain all voice and signalling traffic as an analogue signal. Even touchtones (the “beeps” when a telephone key is pressed) are sent as analogue signals. These touchtones are technically known as dual-tone multi-frequency (DTMF) signals because they actually consist of two separate tones (one high, one low) played simultaneously – something that is highly unlikely to occur with a human voice – avoiding the possibility of confusion between control signals and conversation.
  • Digital PBXs first started to appear in the 1980s and encode analogue signals in a digital format (typically ITU G.711). Digital PBXs can also support the use of analogue trunk lines.
  • IP PBXs include network interface cards and use an organisation’s data network for voice traffic in the same manner as computers use the network for data transfer. The voice traffic is digitised and packet-switching technologies used to route calls across the LAN (and WAN).

Analogue and digital PBXs employ circuit switching technology and require the use of a gateway to connect with IP-based telephony systems. Hybrid PBXs combine digital and IP PBX technologies, allowing a more gradual migration to IP-based telephony. Many modern digital PBXs can be upgraded to hybrid PBXs; however some hybrid PBXs may still require a VoIP gateway (see below) to connect with other IP-based telephone systems due to protocol incompatibilities.

PBXs have a number of associated concepts:

  • Direct inward dialling (known as direct dial-in or DDI in Europe) is a technology used to assign external numbers to internal extensions, allowing internal extensions to be called from an external line without any operator intervention.
  • A dial plan is a set of rules employed by a PBX in order to determine the action to be taken with a call. For example, the dial plan would determine how many digits an internal extension number includes as well as settings such as the number required to dial an outside line and whether or not to allow international calls to be made. Internal extensions covered by the dial plan can call other internal extensions directly and, where a company has PBXs spanned across multiple geographic sites, the dial plan can be used to allow internal calls to be made between PBXs.
  • A hunt group is a group of extensions around which the PBX hunts to find an available extension to which to route a call. For example, a hunt group may be used for an IT support desk, with the call routed to the first available channel (extension).
  • A pilot number is used to identify a hunt group. It is a dummy extension without an associated person or phone, which acts as a number to dial when calling into a hunt group. In the example above, the number used to contact the IT support desk is the pilot number for the IT support hunt group. Another example pilot number would be a number used to access voicemail services. More than one pilot number might target the same group of extensions.
  • A coverage path determines how to route unanswered calls (e.g. divert to voicemail if not answered within 5 rings or if the line is busy).
  • Unsupervised transfer is the process of transferring a call to another extension without using the services of an operator, also known as a blind transfer.
  • Supervised transfer is the process of transferring a call whilst retaining control of it: detecting error conditions, detecting answer conditions, allowing reconnection to the original call and/or combining calls.

Having examined PBX technology, there are a number of other concepts to get to grips with in the world of IT and telephony integration.

Firstly, not all telephone lines are equal. Trunk and tie lines use high-bandwidth links that can simultaneously carry multiple channels (the term port is often used rather than channel, since it is at an entrance or exit to another system and/or network):

  • T1 lines have 24 channels with a data rate of 1.544Mbps, with either 23 or 24 channels available for voice transmission, depending on the protocol being used for signaling. T1 lines are found primarily in North America and Hong Kong.
  • E1 lines have 32 channels with a data rate of 2.048Mbps, with 30 channels available for voice transmission. Of the additional two channels, one is dedicated to timing information and the other is used for signalling. E1 lines are used in Europe, Latin America and elsewhere.
  • J1 lines are found in Japan, including both 1.544 and 2.048Mbps technologies.
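
The headline data rates follow directly from the channel structure, since each voice channel is a 64kbps DS0:

T1: 24 channels x 64kbps = 1.536Mbps, plus 8kbps of framing = 1.544Mbps
E1: 32 channels x 64kbps = 2.048Mbps (30 voice channels, with the remaining two used for timing/framing and signalling)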

The channels provided by T1/E1/J1 lines are virtual channels – not separate physical wires, but channels created virtually when the voice and/or data is put onto the line using a technology known as time-division multiplexing (TDM). Occasionally, analogue or ISDN lines may be used in place of T1/E1/J1 links. Fractional T1 and E1 lines, which allow customers to lease a portion of the line, may be available when less capacity than a full T1 or E1 line is needed.

It’s also important to understand the differences between circuit-switched telephone networks and packet-switched data networks:

  • Historically, circuit switching is how phone calls have worked over traditional phone lines. In its simplest form, a circuit is started on one end, when a caller picks up the phone handset to dial a number (when a dial tone is presented). The circuit is completed when the recipient answers the call at the other end and the two parties have exclusive use of the circuit whilst the phone call takes place. Once the call is completed and both phones are hung up, the call is terminated and the circuit becomes available for use.
  • Packet-switching protocols are used by most data networks. With packet switching, the voice data is subdivided into small packets. Each packet of data is given its own identifying information and carries its own destination address (cf. addressing a letter and sending it via a postal service). Packets can be transferred by different routes to reach the same destination.

One disadvantage of circuit-switching is that capacity has to be provisioned for peak load, yet it cannot be repurposed at off-peak times, so excess capacity often sits idle. The exclusive use of a given circuit is one of the big differences between circuit-switched telephony networks and packet-switched data networks: packet-switching does not preallocate channels for each conversation – instead, multiple data streams are dynamically carried over the same physical network, making packet-switching more responsive to demand.

Voice over IP (VoIP) is a term that describes the transmission of voice traffic over a data network using the Internet Protocol (IP). VoIP real-time protocols help to protect against packet loss and variations in delay (otherwise known as jitter), attempting to achieve the levels of reliability and voice quality previously experienced with traditional circuit-switched telephone calls. IP networks support the transport of multiple simultaneous calls.

As described earlier, some PBXs will require a gateway to provide protocol conversion between two incompatible networks (e.g. a VoIP gateway between a circuit-switched telephone network and a packet-switched data network). When connected to a digital PBX, the VoIP gateway functions as a multiplexer/de-multiplexer, converting TDM signals into a format suitable for packet-switching. When connected to an analogue PBX, the VoIP gateway has to convert the analogue voice signal into a digital voice signal before converting this into data packets. The IP packets are generally transmitted between the gateway and the PBX using a protocol called real time transport protocol (RTP), which uses small packet sizes to facilitate smooth playback of streamed voice. IP PBXs use RTP for end-to-end communications. RTP is defined in IETF RFC 3550. Secure deployments can be enabled using secure RTP (SRTP), defined in IETF RFC 3711.

Session initiation protocol (SIP) is a real-time signalling protocol used to create, manipulate, and tear down interactive communication sessions on an IP network. The VoIP industry has adopted SIP as the signaling protocol of choice. SIP can be secured using transport layer security (TLS).

Although SIP is detailed in IETF RFC 3261, it remains a proposed standard and its implementation varies between vendors. One such variation is how SIP is mapped onto other protocols, in particular transport layer protocols: some implementations map it to TCP, whilst others use UDP. These differences can make communications between VoIP applications challenging, possibly requiring a VoIP gateway to facilitate communications between two VoIP systems.
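
To give a flavour of what SIP looks like on the wire, here is an illustrative INVITE request, loosely based on the example in RFC 3261 (the addresses are fictitious and the SDP body that would describe the media session is omitted):

INVITE sip:bob@biloxi.com SIP/2.0
Via: SIP/2.0/UDP pc33.atlanta.com;branch=z9hG4bKnashds8
Max-Forwards: 70
To: Bob <sip:bob@biloxi.com>
From: Alice <sip:alice@atlanta.com>;tag=1928301774
Call-ID: a84b4c76e66710@pc33.atlanta.com
CSeq: 314159 INVITE
Contact: <sip:alice@pc33.atlanta.com>
Content-Type: application/sdp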

Finally, real-time facsimile (ITU T.38/IETF RFC 3362) is a fax transport protocol for the Internet. T.38 defines procedures for facsimile transmission, when a portion of the path includes an IP network. It is used when relaying a fax that originated on a voice line across an IP network in real time.

With the convergence of voice and data technologies, these terms are becoming ever more important for IT professionals to understand. Hopefully this post provides a starting point for anyone who’s struggling to get to grips with the technology.

Microsoft Operations Manager 2005 architecture

Yesterday, Microsoft UK’s Eileen Brown showed a MOM architecture slide as part of a presentation she was giving on monitoring Exchange Server. I found it a good introduction to the way that MOM 2005 works so I thought I’d reproduce it here:

MOM 2005 architecture

Gagging orders…

Oh! The joys of legal agreements… for the next 2 days, I’m attending the Exchange Server “12” Ignite training tour and the first thing I’ve had to do on arrival is to sign a non-disclosure agreement (NDA) which prohibits me from reproducing or summarising any confidential information gained for the next 5 years! To be fair, these things are pretty standard and much of what I do at work is covered by one NDA or another, but it does effectively prevent me from writing about anything I learn on this course. I guess when the product is released to manufacturing, the information will cease to be confidential, but in the meantime I guess I’ll have to keep quiet about E12!

What I can say is that the bag provided as part of the delegate information pack reminds me a bit of my earliest experiences with messaging – my days as a newspaper delivery boy.

Happy birthday Apple

Last year, I wrote about Microsoft’s 30th anniversary – this time round it’s Apple.

Until recently just a niche player in the personal computer marketplace, the company founded in 1976 by Steve Jobs and Steve Wozniak (Woz) is doing better now than ever – and that’s nothing to do with the Macintosh PC line but with the company’s allegedly monopolistic online music sales tactics. According to Associated Press, Jobs is the “marketing whiz” behind the company (his return to the CEO spot a few years back certainly marked a turning point in the company’s fortunes) and Wozniak the “engineering genius” (I’ve heard Woz on TWiT – he sure loves his technology). Time will tell where Apple’s business model goes as a result of current court action but if Microsoft’s anything to go by, it won’t make too much difference.

Apple products are different – different because they look good. Why can’t all PCs look as good as a Mac Mini or a Power Mac? I’m one of the people who would pay a premium for a Macintosh – I really fancy a Mac Mini (if I can hook it up to a standard TV as my 32″ Sony Trinitron will probably outlive any affordable flat panel that I could buy today) and I reckon it might pass the wife approval factor (WAF) test for a position in the living room (my “black loud crap” has long since been confined to my den). I’m also a heathen because I would (at least try to) run Windows XP Media Center Edition and SUSE Linux on it… let’s just hope the current rumours of Windows running on a Mac turn out to be true!

Apple might not have achieved mass market domination in the PC world, but they sure have things sorted (at least for now) with digital media. Happy birthday Apple.

Why have some of my PageRanks dropped?

It’s well known that the Google index is based on the PageRank system, which can be viewed using the Google Toolbar.

Google page rank

But something strange has happened on this blog – the main blog entry page has a PageRank of 5, the parent website has a PageRank of 4, but the PageRanks for most of the child pages have dropped to zero.

Now I know that posts have been a bit thin on the ground this month (I’ve been busy at work, as well as working on another website), but I can’t understand why the rankings have dropped. I found this when I was using the site search feature to find something that I knew I’d written, but it didn’t come up. Entering site:markwilson.co.uk as a Google search brings back 258 results, but this blog has nearly 500 entries, plus archive pages and the parent website – where have all the others gone? Some recent entries, like the one on Tesco’s VoIP Service, have a PageRank of zero but still come back on a search (at the time of writing, searching for Tesco VOIP brings back my blog as the third listed entry). Others just don’t appear in search results at all. Meanwhile some old posts have PageRanks of 2 or 3.

I know (from my website statistics) that Googlebot is still dropping by every now and again. So far this month it accounts for 3319 hits from at least 207 visits – I just can’t figure out why so many pages have a PageRank of zero (which seems to be a penalty rank, rather than a “not ranked yet” marking).

I don’t deliberately try to manipulate my search rankings, but steady posting of content has seen my PageRank rise to a reasonable level. I just don’t understand why my second-level pages are not appearing in the index. The only thing I can think of is that it’s something to do with my new markwilson.it domain, which is linked from this blog, and which redirects back to a page at markwilson.co.uk (but that page has no link to the blog at the time of writing).

I’ve just checked the syntax of my robots.txt file (and corrected some errors, but they’ve been there for months if not years). I’ve also added rel="nofollow" to any links to the markwilson.it domain. Now, I guess I’ll just have to resubmit my URL to Google and see what happens…

Public/private key cryptography in plain(ish) English

Public key infrastructure (PKI) is one of those things that sounds like a good idea, but which I can never get my head around. It seems to involve so many terms to get to grips with and so, when Steve Lamb presented a “plain English” PKI session at Microsoft UK a few weeks back, I made sure that I was there.

Steve explained that a PKI can be used to secure e-mail (signed/encrypted messages), browsing (SSL authentication and encryption), code (authenticode), wireless network connectivity (PEAP and EAP-TLS), documents (rights management), networks (segmented with IPSec) and files (encrypted file system).

Before looking at PKI, it’s necessary to understand two forms of cryptography – symmetric and asymmetric. I described these last year in my introduction to IPSec post.

The important things to note about public key cryptography are that:

  • Knowledge of the encryption key doesn’t give knowledge of the decryption key.
  • The receiver of the information generates a pair of keys (either using a hardware security module or software), keeps the private key secret and publishes the public key in a directory.
  • What one key does, the other undoes – contrary to many texts, the information is not always encrypted with the recipient’s public key.

To some, this may sound like stating the obvious, but it is perfectly safe to publish a public key. In fact, that’s what a public key certificate does.

Having understood how a PKI is an asymmetric key distribution mechanism, we need a trust model to ensure that the public key really does belong to who it says it does. What if I were to generate a set of keys and publish the public key as my manager’s public key? Other people could send him information but he wouldn’t be able to read it because he wouldn’t have the private key; however I would have it – effectively I could read messages that were intended for my manager.

There are two potential methods to ensure that my manager’s public key really is his:

  • One could call him or meet with him and verify the fingerprint (hash) of the key, but that would be time consuming and is potentially error-prone.
  • Alternatively, one could employ a trusted third party to certify that the key really does belong to my manager by checking for a trusted digital signature on the key. The issue with this method is that the digital signature used to sign the key needs to be trusted too. Again, there are two methods of dealing with this:
    • A “web of trust” model, such as Phil Zimmermann‘s pretty good privacy (PGP) – upon which the GNU privacy guard (GPG) on Linux systems was built – where individuals digitally sign one another’s keys (and implicitly trust keys signed by friends/colleagues).
    • A trusted authority and “path of trust” model, using certificate authorities (CAs), where everyone trusts the root CA (e.g. VeriSign, Thawte, etc.) and the CA digitally signs the keys of anyone whose credentials have been checked using its published methods (producing a certificate). One CA may nominate another CA and they would automatically be trusted too, building a hierarchy of trust.

Most CAs will have multiple classes of trust, depending on the checks which have been performed. The class of the trust would normally be included within the certificate and the different levels of checking should be published in a document known as a certificate practice statement.

The analogy that I find useful here is one of writing and signing a cheque when paying for goods or services. I could write a cheque on any piece of paper, but the cheques that I write are trusted because they are written on my bank‘s paper – that bank is effectively a trusted CA. When I opened my account the bank would have performed various background checks on me and they also hold a reference of my signature, which can be checked against my cheques if required.

The padlock that indicates a secure website in most browsers also looks a bit like a handbag (UK English) or purse (US English) – both Internet Explorer 6 and Firefox 1.5 display one when SSL is in use. Steve Lamb has an analogy for users that I particularly like – “it’s safe to shop where you see the handbag”; however, it’s also important to note that the padlock (not really a handbag!) just means that SSL security is in use – it doesn’t mean that the site can automatically be trusted (it may be a phishing site) so it’s important to examine the certificate details by double clicking on the padlock.

Each verification method has its own advantages and disadvantages – web of trust can be considered more “trustworthy”, but it’s time-consuming and not well understood by the general public – CAs, whilst easy to deploy and manage, can be considered to be the tools of “Big Brother” and they have to be trusted implicitly.

Digital signatures work by calculating a short message digest (a hash) and encrypting this using the signatory’s private key, to provide a digital signature. The hash function should result in a unique output (although it’s theoretically possible that two messages could produce the same hash, since a large volume of data is represented by a much smaller string) – the important point to note is that even the tiniest of changes to the message will produce a different hash.

Creating a digital signature

Upon receipt, the recipient uses the signatory’s public key to decrypt the hash. Because the hash is generated using a one-way function, this cannot be expanded to access the data – instead, the data is transmitted with the signature and a new hash calculated by the recipient. If the two hashes match then the integrity of the message is proven. If not, then the message has almost certainly been tampered with (or at least damaged in transit).

Verifying a digital signature
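
As an illustration of those two steps, here is a rough OpenSSL equivalent (my example rather than anything shown in the session; the file names are placeholders):

# create a key pair for the signatory
openssl genrsa -out signer-private.pem 2048
openssl rsa -in signer-private.pem -pubout -out signer-public.pem

# sign: hash the message and encrypt the digest with the private key
openssl dgst -sha1 -sign signer-private.pem -out message.sig message.txt

# verify: recompute the hash and compare it with the one recovered from the signature
openssl dgst -sha1 -verify signer-public.pem -signature message.sig message.txt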

Certificates are really just a method of publishing public keys (and guaranteeing their authenticity). The simplest certificate just contains information about the entity that is being certified to own a public key and the public key itself. The certificate is digitally signed by someone who is trusted – like a friend (for PGP) or a CA. Certificates are generally valid for a defined period (e.g. one year) and can be revoked using a certificate revocation list (CRL) or using the real-time equivalent, online certificate status protocol (OCSP). If the CRL or OCSP cannot be accessed, then a certificate is considered invalid. Certificates are analogous to a traditional passport in that a passport is issued by a trusted authority (e.g. the UK passport agency), is valid for a number of years and contains basic information about the holder as well as some form of identification (picture, signature, biometric data, etc.).
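
The fields within a certificate (subject, issuer, validity dates and so on) can be inspected with standard tools – for example, with OpenSSL (the file name is a placeholder):

# show the key fields of a certificate
openssl x509 -in certificate.pem -noout -subject -issuer -dates

# or dump the whole certificate, including any X.509 v3 extensions
openssl x509 -in certificate.pem -noout -text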

X.509 is the standard used for certificates, with version 3 supporting application-specific extensions (e.g. authentication with certificates – the process that a browser will follow before displaying the padlock symbol to indicate that SSL is in use – authenticating the server to the client). Whether this certificate is issued by an external CA or an organisational (internal) CA is really a matter of choice, balancing the level of trust placed in the certificate against how much the website owner is prepared to pay for it (it’s unlikely that an external certificate will be required for a secure intranet site, whilst one may be expected for a major e-commerce site).

The SSL process works as follows:

  1. The browser (client) obtains the site (server) certificate.
  2. The digital signature is verified (so the client is sure that the public key really belongs to the site).
  3. To be sure that this is the actual site, not another site masquerading as the real site, the client challenges the server to encrypt a phrase. Because the server has the corresponding private key, it can encrypt the phrase and return it to the client.
  4. The client decrypts the phrase using the public key from the certificate – if the phrase matches the challenge, then the site is verified as authentic.
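
The server certificate (and the chain behind it) can also be examined from the command line – for example, using OpenSSL (the host name is a placeholder):

# connect to the site, print the certificate chain and negotiate an SSL session
openssl s_client -connect www.example.com:443 -showcerts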

Most certificates can be considered safe – i.e. there is no need to protect them heavily as they only contain publicly available information. The certificate can be stored anywhere – in a file, on a USB token, on a memory-only smartcard, even printed; however private keys (and certificates that include them) are extremely vulnerable, requiring protected storage within the operating system or on a smartcard with cryptographic functionality (see below). Windows 2000 Server and Windows Server 2003 include a CA which can be used to issue and store certificates, especially within a company that is just looking to secure its own data. The Windows Server 2003 CA even supports auto-enrollment (i.e. where a certificate request is processed automatically), but what if the administrators within an organisation are not considered trustworthy? In that case, an external CA may be the only choice.

Most organisations use more than one key for signing certificates, because relying on a single root key does not scale well, can be difficult to manage responsibility for in a large organisation and is dangerous if the key is compromised. Instead, certificate hierarchies can be established, with a CA root certificate at the top and multiple levels of CA within the organisation. Typically the root CA is installed, then taken offline once the subordinate CAs have been installed. Because the root is offline, it is far less likely to be compromised – which is important, because complete trust is placed in the root CA. With this model, validating a certificate may involve validating a path of trust – essentially this is just checking the digital signature, but it may be necessary to walk the path of all subordinate CAs until the root is reached (or a subordinate that is explicitly trusted). Cross-certification is also possible by exporting and importing certificate paths between CA hierarchies.
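
Walking a path of trust can be illustrated with OpenSSL too, checking a server certificate against an issuing (subordinate) CA certificate and the root (again, the file names are placeholders):

openssl verify -CAfile root-ca.pem -untrusted issuing-ca.pem server-cert.pem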

The list of trusted root CAs increases with each Windows service pack. Some certificates can be obtained without payment, even those included in the list of Windows’ trusted root CAs. Whilst these are as valid as any other certificate, they are unlikely to have undergone such stringent checks and so the level of trust that can be placed in them may not be deemed sufficient by some organisations. If this is a concern, then it can be cleared down from within the browser, using group policy or via a script – the only client impact will be a (possibly confusing) message asking if the certificate issuer should be added to the list of trusted authorities when a site is accessed.

Smartcards are often perceived as a useful second factor for authentication purposes, but it’s useful to note that not all smartcards are equal. In fact, not all smartcards are smart! Some cards are really just a memory chip and are not recommended for storing a private key used to verify identity. More expensive smartcards are cryptographically enabled, meaning that the key never has to leave the smartcard, with all processing done on the smartcard chip. Additional protection can also be included (e.g. biometric measures) as well as self-destruction where the card is known to have been compromised.

It’s worth noting that in the UK, organisations that encrypt data and do not have the means to decrypt it can fall foul of the Regulation of Investigatory Powers Act 2000 (RIPA). There is an alternative – leaving the keys in escrow – but that is tantamount to leaving the keys with the government. Instead, the recommended practice for managed environments with encryption is to store keys in a location that is encrypted with the key recovery operator’s key – that way the keys can be recovered by an authorised user, if required.

After attending Steve’s session, I came away feeling that maybe PKI is not so complex after all. Steve’s recommendations were to set up a test environment and investigate further; to minimise the scope of an initial implementation; and to read up on certificate practice and certificate practice statements (which should be viewed as being more important than the technology itself if defending the trustworthiness of a certificate in court).

For anyone implementing PKI in a Microsoft infrastructure, there’s more information on PKI at the Microsoft website.

The OSI reference model and how it relates to TCP/IP

Earlier today, whilst on a client site, waiting for a PC to rebuild (5 times – and I thought my desktop support days were over… maybe that’s why they should be…), I saw a diagram of the open systems interconnection (OSI) reference model pinned up above a desk. I’ve seen many OSI model representations over the years, but this one was probably the clearest example I’ve seen in a while (especially for TCP/IP), so I’ve reproduced it here:

The OSI reference model and its relationship to TCP/IP
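
For reference, the mapping usually drawn between the two models is roughly as follows:

OSI model            TCP/IP model
7. Application    \
6. Presentation    }  Application (e.g. HTTP, SMTP, FTP, DNS)
5. Session        /
4. Transport          Transport (TCP, UDP)
3. Network            Internet (IP, ICMP)
2. Data link      \
1. Physical        }  Network access (e.g. Ethernet)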

Configuring wireless Ethernet with SuSE Linux 10.0

Alex and I were debating the pros and cons of various operating systems during our geekfest (working on my latest website project, in the pub) last weekend – he’s just bought a new Mac (and works with them all day), so, like most Mac users I know, he can’t see why anyone would possibly want to use anything else (not quite true, but that’s the gist of it). Meanwhile I sat down at his Mac and couldn’t even work the mouse properly to open up Firefox and pull some information off the ‘net. I complained that standard keyboard shortcuts didn’t work (I had to use the Apple key instead of control) and he said it’s because I only use Windows. I disputed that – I like GNOME on Linux – and the reason I like it is that it works like Windows, only better. It’s got a cleaner interface but most of the keyboard shortcuts that I know still work. But even Linux is not ready for complete newbies. It’s come a long way since I first used it back in 1993 – in fact it’s advancing at a tremendous pace – but even Linux Format magazine acknowledges this month that it needs to be approached “with an awareness that it takes time and patience to use properly”. Linux is not for everyone. I’ve got nearly 20 years of PC experience under my belt (12 years designing, supporting and implementing systems using Microsoft software), and I’m still struggling with Linux (although getting on much better since I spent last week learning a bit about Red Hat Enterprise Linux).

So, what’s the point of this rambling? Well, last night, after weeks of wrangling, I finally got a non-Windows operating system to connect to my wireless network. I gave up trying to do this on Solaris (after the Solaris NDIS wrapper toolkit failed to compile on my system and I couldn’t get a satisfactory answer to my post at the OpenSolaris.org forums) and instead went for a popular Linux distro (SuSE 10.0, which Novell very kindly sent me a copy of a few weeks back).

There are many reports on how to do this out there on the ‘net, but none of them worked exactly for me. What follows is what I did with SuSE 10.0 on an IBM ThinkPad T40, with a D-Link DWL-G630 PCMCIA card and a D-Link DWL-2000AP+ access point, configured to use WPA-PSK (TKIP) and proven to work using a selection of Windows clients.

SuSE 10.0 comes with packages for NdisWrapper (v1.2), wireless tools (v28 pre-8) and WPA supplicant (v0.4.4). I used YaST to check that these were all installed, and located the netrt61g.inf, rt61.cat and rt61.sys files from the CD supplied with my network card. I don’t think the .cat file is required, but I copied them all to /tmp anyway.

Next, following the advice for installing NdisWrapper on SuSE Professional 9.2, I ran the following commands from a terminal window (logged in as root) to install the network card:

cd /tmp
ndiswrapper -i drivername.inf

In my case this was netrt61g.inf, and the response was Installing netrt61g. Next, I entered:

ndiswrapper -l

to check the status of the NDIS drivers and saw the response:

Installed ndis drivers:
netrt61g driver present

The next part is to load the module, using:

modprobe ndiswrapper

This doesn’t return a response, but using iwconfig should return details for a new interface (in my case it was wlan0). At this point, I hadn’t yet inserted the card, but all seemed fine with the card driver configuration.

I then used YaST to configure the new wlan0 interface (although I could have made the edits manually, YaST saves me from missing something). The instructions I followed used YaST to edit the system configuration (System, /etc/sysconfig Editor), although some settings need to be added into text files manually, so they might as well all be done that way:

  • Add MODULES_LOADED_ON_BOOT="ndiswrapper" to /etc/sysconfig/kernel
  • Add DHCLIENT_MODIFY_RESOLVE_CONF='yes' and DHCLIENT_SET_DEFAULT_ROUTE='yes' to /etc/sysconfig/network/ifcfg-wlan-wlan0

That should be it for a basic wireless Ethernet configuration (although it may also be necessary to set any other network interfaces to start on cable connection, on hotplug, etc., rather than at boot time). For those of us using a secure network, there’s still more to do as it’s necessary to configure WPA Supplicant. It should be as simple as configuring /etc/wpa_supplicant.conf, then issuing a few simple commands:

ifconfig wlan0 up
wpa_supplicant -Dndiswrapper -iwlan0 -c/etc/wpa_supplicant.conf -dd
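
For reference, a minimal WPA-PSK (TKIP) entry in /etc/wpa_supplicant.conf typically looks something like the following (the SSID and passphrase are placeholders – as I note below, I’m still not certain that my own file was correct):

# allow wpa_cli/wpa_gui to talk to the daemon
ctrl_interface=/var/run/wpa_supplicant

network={
    ssid="mynetwork"
    proto=WPA
    key_mgmt=WPA-PSK
    pairwise=TKIP
    group=TKIP
    psk="mypassphrase"
}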

Sadly, this procedure didn’t work for me. Even now, I’m not sure that the contents of my /etc/wpa_supplicant.conf file are correct – that’s why I haven’t published them here; however, it may be useful to know that the package also includes command line (wpa_cli) and graphical (wpa_gui) utilities for troubleshooting and managing the interface. wpa_cli was pre-installed as part of the package on my system, but I didn’t get anywhere until I obtained wpa_gui from the latest stable release of wpa_supplicant (v0.4.8).

To do this, I needed to add the gcc (v4.0.2), gcc-c++ (v4.0.2) and qt3-devel (v3.3.4) packages to my installation, then compile and install wpa_gui using:

PATH=$QTDIR/bin:$PATH
make wpa_gui
cp wpa_gui /usr/sbin

Only after typing wpa_gui -iwlan0 was I able to scan for an AP and locate the available networks:

wpa_gui scanning for networks

Then I could connect using the appropriate WPA key:

wpa_gui connecting to the network and completing the 4-way handshake

The connection doesn’t last long (it drops a few seconds after the 4-way handshake shown above), but at least it seems I have a working configuration (if not a stable one…).

So, it wasn’t easy. In fact, I’d say that wireless support is one of Linux’s weak spots right now, not helped by the fact that the device manufacturers generally only support Windows. Even now, I have some issues – like that my connection drops and then I can’t re-establish it – but I think that might be an issue with using Windows drivers and NdisWrapper. At least I know that I can get a connection – and that’s a step in the right direction.

Why open source software is not really free

There’s a common misconception that open source software is free – as in doesn’t cost anything – and conversely that proprietary software is expensive.

I’d often wondered how this was aligned with the sale of packaged distributions of free software (it turns out I’m not the only one – a UK trading standards department were also confused by the sale of Firefox CDs – thanks to Slashdot via Slashdot Review for causing me to laugh out loud about that one…). Actually, it turns out that open source software is only free as in free speech – not as in free of charge. Sometimes it is free of charge too, but the two most common open source licensing models (GNU and BSD) do not prohibit the sale of “free software”.

GNU (a recursive name – GNU’s Not Unix) is a project started by Richard Stallman in 1984 to create a free Unix clone, managed by the Free Software Foundation (the GNU tools, combined with the Linux kernel, form the GNU/Linux operating system). GNU’s definition of free software says in part:

  “Free software is a matter of the users’ freedom to run, copy, distribute, study, change and improve the software. More precisely, it refers to four kinds of freedom, for the users of the software:

    • The freedom to run the program, for any purpose (freedom 0).
    • The freedom to study how the program works, and adapt it to your needs (freedom 1). Access to the source code is a precondition for this.
    • The freedom to redistribute copies so you can help your neighbor (freedom 2).
    • The freedom to improve the program, and release your improvements to the public, so that the whole community benefits (freedom 3). Access to the source code is a precondition for this.

  A program is free software if users have all of these freedoms. Thus, you should be free to redistribute copies, either with or without modifications, either gratis or charging a fee for distribution, to anyone anywhere…

    …’free software’ does not mean ‘non-commercial’.”

The GNU general public license (GPL) encourages free software, but all enhancements and changes to GPL software must also be left as GPL. In effect, the software is free to enhance, but not necessarily free to purchase.

Where code is derived from the University of California at Berkeley BSD project, a separate licensing agreement applies. Many commercial software vendors prefer to use the BSD license, because it lets them wrap open source code up in a proprietary product. As Linux Format magazine paraphrased this month, “In a nutshell, the BSD licence says, ‘do what you like with the code – just don’t claim you wrote it'”. The BSD code would still be free, but the developers don’t have to release all of the source code for the entire product.

Whilst I’m writing about non-copyright licensing agreements, it’s worth mentioning creative commons. Not limited to software products, this is a licensing alternative to copyright for all creative works, building on the “all rights reserved” concept of traditional copyright to offer a voluntary “some rights reserved” approach.

I’m really interested in the rise of Linux as an alternative to Windows; however it’s not about stripping out software purchase costs. Purchasing a version of Linux with a predictable development cycle and a support entitlement (e.g. Red Hat Enterprise Linux or Novell/SUSE Linux Enterprise) can cost just as much as (or even significantly more than) a copy of Windows, and management costs need to be considered too. For as long as the majority of IT organisations are geared up to provide almost exclusively Windows support, Linux support costs will be higher too.