Author: Mark Wilson

  • The return of WordPerfect?

    Back in my student days I used MS-DOS 5.0 and WordPerfect 5.1. It worked really well. Then I moved to Windows 3.1 and Word for Windows 2.0 (Windows versions of WordPerfect just never made the grade). Obviously I was not alone because over the intervening 12 or so years WordPerfect’s fortunes have not been good – until recently, when the product’s current owners, Corel, persuaded OEMs to ship WordPerfect products as a low-cost alternative to Microsoft Works and Office on new PCs.

    Now the US Department of Justice (DoJ) is reported to have adopted WordPerfect Office 12 for its 50,000 users. The WinInfo Update reports that Corel has 20 million users worldwide, marketing WordPerfect for “its unique functionality, broad capabilities, and low price”.

    According to Corel, “WordPerfect Office 12 is a full-featured office productivity suite that includes word processing, spreadsheet, presentation, and address book applications”. Because it is compatible with popular file formats, including Microsoft Office and Adobe PDF, WordPerfect Office 12 users can interoperate with users of other applications and, unlike open-source office productivity alternatives such as OpenOffice.org, Corel provides support for WordPerfect.

    But the killer (from a licensing perspective) is that Corel gives WordPerfect corporate licensees home and laptop privileges so they can install the same copy of the product at home and on a laptop in addition to a desktop computer.

    Microsoft Office is still a highly profitable product for Microsoft and looks unlikely to be usurped from its top spot. But with new releases of Windows running late (giving Linux the opportunity to build its market share), Firefox rising in popularity (IE’s share is now reported to be down to 87%), and new threats in the office productivity space, Microsoft needs to work hard to remain competitive and protect its margins. Competition is back, which is no bad thing, but there could be interesting times ahead.

  • Linux creator switches to the Mac… nearly

    This one made me laugh when I read it in the Windows IT Pro magazine network WinInfo Daily Update:

    “The Macintosh community was agog this week at news that Linux creator Linus Torvalds has ‘switched’ to the Mac, but the truth, as is so often the case, is so much less exciting than the rumours. Torvalds is indeed using a Power Mac G5 tower, but some unnamed corporation gave it to him as a gift. And he’s running Linux on the box, not Mac OS X. ‘It obviously runs only Linux, so I don’t think you can call it a Mac any more,’ Linus noted. ‘And … I got the machine for free.’ So much for Apple’s highest-profile switcher.”

  • New security guidance for consumers and business

    Thomas Lee recently blogged about the UK government’s security awareness website, which is intended to “provide both home users and small businesses with proven, plain English advice to help protect computers, mobile phones and other devices from malicious attack”.

    The government hopes the service will help boost confidence in e-commerce and, at the same time, protect national security. The trouble is that I have only heard about it on Thomas’ blog and in a recent IT Week article by David Neal, “Home users will bodge DIY security”. As Neal points out, there has been no high-profile coverage and consumers are not likely to be aware of the new initiative. He goes on to say that even “plain English… will go over the heads of most users” and that “giving someone advice on tinkering with their firewall, updating their virus definitions, rebooting in safe mode and checking their proxy settings is as dangerous as arming everyone in the country with a shotgun, just because there has been a spate of burglaries” – an interesting view, and no doubt intended to be provocative, but nevertheless an opportunity for many small IT businesses consulting to the SOHO and low-end SME marketplace.

    Meanwhile, for larger businesses, the Information Security Forum (ISF) has issued updated guidelines in the form of the Standard of Good Practice for Information Security v4.1, incorporating updated sections in areas that have been the subject of additional research and investigation, including:

    • Information risk management in corporate governance.
    • Virus protection in practice.
    • Securing instant messaging.
    • Managing privacy.
    • Information risk analysis methodologies.
    • Patch management.
    • Managing the information risks from outsourcing.
    • Web server security.
    • Disappearance of the network boundary.
    • Feedback from the results of the ISF’s information security status survey.

  • The many uses for RFID

    There’s been a lot of talk about radio frequency identification (RFID) in the IT press recently. For a technology that has been around in various forms since the Second World War, it’s taken a long time to come to market (OK, that’s not strictly true: it’s employed within the ID cards that many of us use to access our office buildings, and for Londoners in the strangely named Oyster card, which is the largest smartcard payment system in the UK, excluding credit and debit cards), but now that RFID transmitters are tiny enough to embed in just about anything, some large organisations are starting to wake up to the potential uses of this technology.

    I’ve seen all sorts of uses suggested for RFID in the press over the last couple of weeks.

    RFID is a technology which has the potential to enable enterprises to know every move of every product and service. To privacy campaigners that sounds scary (yeah right, so you have a mobile phone? If so, then your location can already be tracked by the authorities) and the European Union is conducting a public consultation looking at concerns over data protection and how the technology is being used. To me, it sounds scary for another reason – the sheer volume of data that needs to be managed!

    The success of RFID deployments is likely to be linked to a network’s ability to handle the data intelligently and securely, according to an IDC report (not surprisingly, commissioned by Cisco), which predicts that RFID will have a significant impact on enterprise networks, not just because of the number of tags involved, but because of the amount of data each tag could hold and the number of times it is scanned during transit or processing.

    I recently read an excellent article in Enterprise Server Magazine (now renamed Server Management), contributed by Mark Palmer and entitled “Making Meanings”. I could not find it online, but the nice people at ObjectStore were happy to send me a copy, which I can’t publish here (for copyright reasons), but which I’m sure they would send to anyone else who is interested. In the article, Palmer sets out seven principles for the effective management of RFID data:

    1. Digest RFID event data close to the source of the RFID activity (i.e. convert from many raw events to a collection of meaningful events) to ensure greater reliability and protect the IT infrastructure (see the sketch after this list).
    2. Whether a complex event processing (CEP) tool is used or one is built specially, the principle is the same – turn simple events into meaningful ones in order to derive knowledge on which actions may be taken.
    3. Data concentrators can be used to achieve reliable speed, by buffering event stream flows, combining RFID middleware, event processing and in-memory data cache.
    4. RFID event data can be processed in context by caching reference data.
    5. Federate data distribution so the RFID system can scale and yet still provide information in near real time.
    6. Age RFID data to keep the working set manageable, enrich raw data with context and reduce the load on downstream systems.
    7. Automate exception handling to improve overall business efficiency.
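
    Palmer’s first two principles lend themselves to a short illustration. Here is a minimal sketch in Python (the event shapes, names and thresholds are my own illustrative assumptions – nothing here comes from the article itself) of digesting a stream of raw RFID reads into meaningful events close to the source:

    ```python
    # A sketch of principles 1 and 2: collapse many raw reads into a few
    # meaningful events before they hit the wider IT infrastructure.
    from dataclasses import dataclass

    @dataclass
    class RawRead:
        tag_id: str
        reader_id: str
        timestamp: float  # seconds since epoch

    def digest(reads, window=5.0):
        """Collapse repeated reads of the same tag at the same reader within
        `window` seconds into a single 'tag seen' event."""
        events = []
        last_seen = {}  # (tag_id, reader_id) -> time of last raw read
        for r in sorted(reads, key=lambda r: r.timestamp):
            key = (r.tag_id, r.reader_id)
            if key not in last_seen or r.timestamp - last_seen[key] > window:
                events.append({"type": "tag_seen", "tag": r.tag_id,
                               "reader": r.reader_id, "at": r.timestamp})
            last_seen[key] = r.timestamp
        return events

    # A pallet passing a dock door generates dozens of raw reads but only
    # one meaningful event:
    reads = [RawRead("EPC-0001", "dock-door-1", t) for t in (0.0, 0.2, 0.4, 1.1)]
    print(digest(reads))  # one 'tag_seen' event
    ```

    The same shape extends to Palmer’s data concentrators: digestion happens at the edge, and only the meaningful events travel downstream.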

    Another area which needs to be addressed for RFID to take off is that of standards – many of the existing standards are US-based and some experts would like to see the RFID electronic product code (EPC) standards body work with the International Organization for Standardization (ISO), so that EPC can focus on product codes and ISO on frequency.

    In the meantime, the Computing Technology Industry Association (CompTIA), which runs the A+ and Network+ certifications, is said to be developing a certification scheme for RFID skills.

    Microsoft plans to launch its RFID services platform in 2006.

  • Ozzie’s Groove is snapped up by Microsoft

    In my recent post which discussed the perils of blogging I linked to Ray Ozzie’s weblog. This week, I was interested to read that Ray Ozzie was actually the creator of Lotus Notes and that his company, Groove Networks, is to be acquired by Microsoft (who have long been investors in the firm) and integrated into Microsoft’s Information Worker unit.

    The Windows IT Pro magazine network WinInfo Daily Update reports that the peer-to-peer and authentication technologies from Groove’s collaboration products will be integrated into the next generation of Windows (codenamed Longhorn).

    As for Ray Ozzie, he will become one of Microsoft’s chief technology officers, reporting directly to Bill Gates.

  • New e-mail message continuity services

    I’ve just read about a new message continuity service from FrontBridge, designed to provide always-on e-mail in today’s environment, where an e-mail outage is seen as a major business continuity issue.

    Complementing the other e-mail managed services offered by FrontBridge, Active Message Continuity provides:

    • Always-on e-mail continuity and disaster recovery with no need to “flip a switch”.
    • Interception-based archiving to capture messages “in stream” after filtering for spam, viruses and other unwanted content.
    • Continuous access via a web interface.
    • A fully managed service, starting from $1/month/user.

    FrontBridge is already well established in the e-mail application service provider (ASP) market, but this new product is a key differentiator allowing FrontBridge to offer message compliance, message security and message continuity at a time when competitors such as MessageLabs are concentrating on just one area – that of message security (anti-virus, anti-spam and content control).

  • The new face of spam

    We are all used to spam arriving in our e-mail inboxes, but now the problem is spreading to other communications methods.

    Research by Wireless Services Corporation shows almost half of the mobile phone text messages received in the US are spam, compared with 18% a year ago. Another problem is the growing menace of spam over instant messaging (spim), with Meta Group reporting 28% of instant messaging users hit by spim.

    Meanwhile, IT managers are turning to new methods of trapping e-mail-borne spam at the network edge. According to e-mail security provider Postini, 88% of e-mail is spam; Symantec reports 70% (its Brightmail AntiSpam product is used by ASPs such as MessageLabs), with 80% originating overseas, particularly from China and Russia. Appliance servers are now available that claim to trap “dark traffic” such as unwanted inbound SMTP traffic, directory harvest and e-mail denial of service (DoS) attacks, and malformed or invalid recipient addresses.

    Last month, Microsoft acquired Sybari and, according to IT Week, the Sybari tools are likely to be offered as a plug-in for the virus-scanning API in Exchange Server 2003 service pack 1, as well as forming part of Microsoft’s plans to offer edge services in forthcoming Exchange Server releases, including Sender ID e-mail authentication in Exchange Server service pack 2, IP safe lists, and a requirement for senders to solve a computational puzzle for each e-mail sent, increasing overheads for spammers (and, unfortunately, for the rest of us too).

    Some industry commentators criticise the use of filtering products, citing examples of blocked legitimate e-mail. Sadly this will always be the case (one of my wife’s potential customers once claimed that her domain name pr-co.co.uk was invalid, blocking all addresses containing hyphens) and many of my clients (wisely, if in a somewhat draconian style in some cases) block various attachment types. A few weeks back, even a reply which I sent to a request for assistance left on this blog was picked up as spam. There will always be a trade-off between false positives and a small amount of spam getting through – what is needed is for a real person to double-check the filtered e-mail, combined with an overall increase in the use of digitally signed e-mail.

    Links

    Practical measures for combating spam (MessageLabs)

  • Securing the network using Microsoft ISA Server 2004

    Several months ago, I attended a Microsoft TechNet UK event where the topic was ISA Server 2004 network design/troubleshooting and inside application layer firewalling and filtering. It’s taken me a while to get around to writing up the notes, but finally, here they are, with some additional information that I found in some other similar presentations added for good measure.

    The need for network security

    The Internet is built on Internet Protocol (IP) version 4, which was not designed with security in mind. In the early days of the Internet, security clearance was required for access – i.e. physical access was restricted – so there was no requirement for protocol security. At that time (at the height of the cold war), resistance to nuclear attack was more important than protecting traffic, and everyone on the network was trusted. The same networking technologies used to create the Internet (the TCP/IP protocol suite) are now used for internal networks too – and for TCP/IP, security was an afterthought.

    Security should never be seen as a separate element of a solution – it should be all-pervasive. At the heart of the process should be a strategy of defence in depth – not just securing the perimeter or deploying some access controls internally, but placing security throughout the network so there are several layers to thwart malware or a hacker. Ideally, an administrator’s security strategy toolbox should include:

    • Perimeter defences (packet filtering, stateful packet inspection, intrusion detection).
    • Network defences (VLAN access control lists, internal firewall, auditing, intrusion detection).
    • Host defences (server hardening, host intrusion detection, IPSec filtering, auditing, Active Directory).
    • Application defences (anti-virus, content scanning, URL switching source, secure IIS, secure Exchange).
    • Data and resource defences (ACLs, EFS, anti-virus, Active Directory).

    Each layer of defence should be designed on the assumption that all prior layers have failed.

    With users becoming ever more mobile, defining the edge of the network is becoming ever more difficult. Firewalls are no panacea, but properly configured firewalls and border routers are the cornerstone of perimeter security. The Internet and mobility have increased security risks, with virtual private networks (VPNs) softening the perimeter and wireless networks further eroding the traditional concept of the network perimeter.

    A firewall alone is not enough

    Some administrators take the view that “we’ve got a firewall, so everything is fine”, but standard (layer 3/4) firewalls check only basic packet information and treat the data segment of the packet as a black box. This is analogous to looking at the number and destination displayed on the front of a bus, but not being concerned with the passengers on board. Performance is often cited as the reason for not implementing application layer (layer 7) firewalls, which inspect the data segment (e.g. for mail attachment checking, HTTP syntax, DNS syntax, correct SSL termination, URL blocking and redirection, RPC awareness, LDAP, SQL, etc.). However, Microsoft claim to have tested Internet Security and Acceleration (ISA) Server 2004 at up to 1.9Gbps throughput on a single server with application filters in place (at a time when most corporates are dealing with 2-10Mbps).

    Consider the standard security pitch, which has two elements:

    1. The sky is falling (i.e. we’re all doomed).
    2. Our product will fix it (i.e. buy our product).

    In truth, no system is 100% effective and the firewall needs to be supplemented with countermeasures at various depths (intrusion detection systems, etc.). If there were a 100% secure system it would be incredibly expensive – and in addition, threats and vulnerabilities are constantly evolving, which leaves systems vulnerable until a new attack is known and a new signature created and distributed. Heuristic systems must be supplemented with behavioural systems, and some intelligence.

    Just because 100% security is not achievable, it doesn’t mean that it is any less worthwhile as a goal. We still lock our car doors and install immobilisers, even though a good car thief can defeat them eventually. The point is that we stop the casual attacker, buying time. To take another analogy, bank safes are sold on how long it would take a safe-cracker to break into them.

    Whatever solution is implemented, a firewall cannot protect against:

    • Malicious traffic passed on open ports and not inspected at the application layer by the firewall.
    • Any traffic that passes through an encrypted tunnel or session.
    • Attacks on a network that has already been penetrated from within.
    • Traffic that appears legitimate.
    • Users and administrators who intentionally or accidentally install viruses or other malware.
    • Administrators who use weak passwords.

    HTTP is the universal firewall bypass and avoidance protocol

    In the late 1990s, as business use of the Internet exploded, we came to rely ever more on HTTP, which has earned itself a nickname – UFBAP – the universal firewall bypass and avoidance protocol.

    Firewall administrators are obsessed with port blocking and so all non-essential firewall ports are closed; but we generally assume that HTTP is good and so TCP port 80 (the default port for HTTP) is left open. Because it’s so difficult to get an administrator to open a port, developers avoid such restrictions by writing applications that tunnel over port 80. We even have a name for it (web services) and some of our corporate applications make use of it (e.g. RPC over HTTP for Outlook connecting to Exchange Server 2003).

    This tunnelling approach is risky. When someone encapsulates one form of data inside another packet, we tend to allow it without worrying about what the real purpose of the traffic is. There are even websites which exploit this (e.g. HTTP-Tunnel), allowing blocked traffic, such as terminal server traffic using the remote desktop protocol (RDP), to be sent to the required server via TCP port 80, for a few dollars a month.

    In short, organisations tend to be more concerned with blocking undesirable sites (by destination) than with checking that the content is valid (by deep inspection).
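
    To make the point concrete, here is a minimal sketch (my own illustration, not anything from the TechNet event) of the difference between a port check and a content check. Note that a real tunnelling service such as HTTP-Tunnel wraps its payload in syntactically valid HTTP, which is why the deeper signature and verb checks discussed later are needed as well:

    ```python
    # A sketch of the difference between a port check and a content check.
    # The port check passes anything addressed to port 80; the content check
    # at least verifies that the bytes start with a plausible HTTP request
    # line. The allowed method list is an illustrative assumption.
    import re

    REQUEST_LINE = re.compile(rb"^(GET|HEAD|POST) \S+ HTTP/1\.[01]\r\n")

    def port_filter_allows(dst_port: int) -> bool:
        return dst_port == 80  # the layer 3/4 view: port 80 is 'good'

    def content_filter_allows(first_bytes: bytes) -> bool:
        return REQUEST_LINE.match(first_bytes) is not None

    tunnelled_rdp = b"\x03\x00\x00\x13..."          # not an HTTP request
    real_request = b"GET /index.html HTTP/1.1\r\n"

    print(port_filter_allows(80), content_filter_allows(tunnelled_rdp))  # True False
    print(port_filter_allows(80), content_filter_allows(real_request))   # True True
    ```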

    Using web services such as RPC over HTTP to access Exchange Server 2003 is not always bad – 90% of VPN users just want to get to their e-mail, so offering an HTTP-based solution can eliminate many of the VPNs that are vulnerable network entry points. What is required is to examine the data inside the HTTP tunnel and only allow it to be used under certain scenarios. Taking the Exchange Server 2003 example further, without using RPC over HTTP, the following ports may need to be opened for access:

    • TCP 25: SMTP.
    • TCP/UDP 53: DNS.
    • TCP 80: HTTP.
    • TCP/UDP 88: Kerberos.
    • TCP 110: POP3.
    • TCP 135: RPC endpoint mapper.
    • TCP 143: IMAP4.
    • TCP/UDP 389: LDAP (to directory service).
    • TCP 691: Link state algorithm routing protocol.
    • TCP 1024+: RPC service ports (unless DC and Exchange restricted).
    • TCP 3268: LDAP (to global catalog).

    Using RPC over HTTP, this is reduced to one port – TCP 80.
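
    As a back-of-an-envelope comparison (port numbers taken from the list above):

    ```python
    # Comparing the two rule sets for remote Exchange access.
    WITHOUT_RPC_OVER_HTTP = ({25, 53, 80, 88, 110, 135, 143, 389, 691, 3268}
                             | set(range(1024, 65536)))  # RPC service ports
    WITH_RPC_OVER_HTTP = {80}

    print(len(WITHOUT_RPC_OVER_HTTP))  # tens of thousands of open ports
    print(len(WITH_RPC_OVER_HTTP))     # 1
    ```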

    Application layer filtering

    Inspection at the application layer still has some limitations, and the real issue is understanding the purpose of the traffic to be filtered, then blocking anything inconsistent with that purpose.

    Microsoft ISA Server 2004 is typically deployed in one of three scenarios:

    • Inbound access control and VPN server.
    • Outbound access control and filtration (together with URL-based real time lists from third parties).
    • Distributed caching (proxy server), leading to reduced bandwidth usage.

    As part of its access control capabilities, ISA Server has a number of application filters included:

    • HTTP (syntax analysis and signature blocking).
    • OWA (forms based authentication).
    • SMTP (command and message filtering).
    • RPC (interface blocking).
    • FTP (read only support).
    • DNS (intrusion detection).
    • POP3 (intrusion detection).
    • H.323 (allows H.323 traffic).
    • MMS (enables Microsoft media streaming).

    All of these filters validate protocols for RFC compliance and enable network address translation (NAT) traversal. In addition, ISA Server can work with third party filters to avoid the need for a proliferation of dedicated appliance servers (and even for appliance consolidation). Examples of third-party filter add-ons include:

    • Instant messaging (Akonix).
    • SOCKS5 (CornerPost Software).
    • SOAP/raw XML (Forum Systems).
    • Antivirus (McAfee, GFI, Panda).
    • URL Filtering (SurfControl, Futuresoft, FilterLogix, Cerberian, Wavecrest).
    • Intrusion detection (ISS, GFI).

    But appliance firewalls are more secure – aren’t they?

    Contrary to popular belief, appliance firewalls are not necessarily more secure – just more convenient. For those who prefer to use appliances, ISA Server is available in an appliance server format, and such an appliance may well be cheaper than an equivalent server plus Windows Server 2003 and ISA Server 2004 licences.

    As for the security of the solution itself, ISA Server has been tested against the Common Criteria at level EAL4+ (for 9 streams). ISA Server 2004 has been totally rewritten since ISA Server 2000, with Microsoft claiming a code base which is 400% more efficient. It may run on a Windows platform, but Windows Server 2003 can (and should) also be hardened, and a well-configured ISA Server can be extremely secure.

    Some firewall challenges: remote procedure calls (RPCs)

    RPCs present their own challenge to a standard (layer 3/4) firewall in terms of the sheer number of potentially available ports:

    1. On service startup, the RPC server grabs random high port numbers and maintains a table, mapping UUIDs to port numbers.
    2. Clients know the UUID of the required service and connect to the server’s port mapper using TCP port 135, requesting the number of the port associated with the UUID.
    3. The server looks up the port number of the given UUID.
    4. The server responds with the port number, closing the TCP connection on port 135.
    5. From this point on the client accesses the application using the allocated port number.

    Due to the number of potential ports, securing this is not feasible with a traditional firewall (it would require 64512 high ports, plus port 135, to be open); however, a layer 7 firewall can utilise an RPC filter to learn the protocol and use its features to improve security, such that the firewall only allows access to specific UUIDs (e.g. domain controller replication, or Exchange/Outlook RPC communications), denying all other RPC requests. Instead of tunnelling within HTTP (prevented by an HTTP syntax check), native RPC access can be provided across the firewall.
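
    A toy model of that exchange, and of the UUID filtering described, might look like this (the UUIDs and port numbers are purely illustrative – this shows the shape of the mechanism, not ISA Server’s implementation):

    ```python
    # A toy model of the RPC endpoint-mapper exchange, plus a UUID filter.
    EPMAPPER_PORT = 135

    port_map = {  # built by the RPC server at service startup (step 1)
        "11111111-2222-3333-4444-555555555555": 49190,  # e.g. DC replication
        "66666666-7777-8888-9999-000000000000": 49152,  # e.g. another service
    }

    ALLOWED_UUIDS = {"11111111-2222-3333-4444-555555555555"}  # filter policy

    def endpoint_lookup(uuid):
        """Steps 2-4: the client asks TCP 135 which port serves this UUID."""
        return port_map.get(uuid)

    def rpc_filter(uuid):
        """An RPC-aware firewall answers the lookup only for permitted UUIDs,
        then need only open that single high port for the conversation."""
        return endpoint_lookup(uuid) if uuid in ALLOWED_UUIDS else None

    print(rpc_filter("11111111-2222-3333-4444-555555555555"))  # 49190
    print(rpc_filter("66666666-7777-8888-9999-000000000000"))  # None (denied)
    ```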

    Some firewall challenges: secure sockets layer (SSL)

    Hackers will always attack the low-hanging fruit (i.e. easy targets) and, as such, SSL attacks have generally been too complex to be worth the effort; but as our systems become more secure (i.e. we remove the low-hanging fruit), SSL attacks will become more likely.

    HTTPS (which uses SSL) prompts a user for authentication, and any user on the Internet can access the authentication prompt. SSL traffic tunnels through traditional firewalls because it is encrypted, in turn allowing viruses and worms to pass through undetected and infect internal servers.

    Using ISA Server 2004 with an HTTP filter, authentication can be delegated. In this way, ISA Server pre-authenticates users, eliminating multiple authentication dialogs and only allowing valid traffic through. This means that the SSL connection is from the client to the firewall only, and that ISA Server can decrypt and inspect SSL traffic. Onward traffic to the internal server can be re-encrypted using SSL, or sent as clear HTTP. In this way, URLScan for ISA Server can stop web attacks at the network edge, even over an encrypted inbound SSL connection.
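
    Conceptually, the bridging flow looks like the sketch below. The “encryption” is a stand-in (a byte reversal), because the point is where inspection sits in the flow, not the cryptography; all of the names are my own:

    ```python
    # A simulated sketch of SSL bridging: the firewall is the SSL endpoint,
    # so it can inspect the plaintext before deciding whether to forward.
    def decrypt(ciphertext: bytes) -> bytes:
        return ciphertext[::-1]  # stand-in for the SSL layer

    def encrypt(plaintext: bytes) -> bytes:
        return plaintext[::-1]

    def http_filter_allows(request: bytes) -> bool:
        return b".exe" not in request.lower()  # e.g. block executable content

    def bridge(ciphertext_from_client: bytes, reencrypt: bool = True) -> bytes:
        request = decrypt(ciphertext_from_client)  # SSL terminates at the firewall
        if not http_filter_allows(request):        # inspect what was hidden
            return b"403 Forbidden"
        # onward leg: re-encrypted SSL, or clear HTTP to the internal server
        return encrypt(request) if reencrypt else request

    print(bridge(encrypt(b"GET /owa/ HTTP/1.1\r\n")))
    print(bridge(encrypt(b"GET /cmd.exe HTTP/1.1\r\n")))  # blocked despite SSL
    ```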

    Pre-authentication means that without a valid layer 7 password, there is no access to any internal systems (the pool of potential attackers drops from the entire Internet to just the number of people with credentials for the network). ISA Server 2000 can also perform this using RSA SecurID for HTTP (although not for RPC over HTTP with SecurID), and cookie pre-authentication for Outlook Web Access 2003 is also available.

    Some firewall challenges: protecting HTTP(S)

    Protecting HTTP (and HTTPS) requires an understanding of the protocol – how it works, what its rules are and what to expect. Inbound HTTPS termination is easy (as the certificate is controlled by the organisation whose network is being protected). For outbound HTTPS and HTTP, administrators need to learn how to filter ports 80/443. It may be worth considering whether global access is really required, or whether there is a set of specific sites required for use by the business.

    ISA Server allows web publishing of HTTP (as well as other protocols such as SMTP). Web publishing protects servers through two main defences:

    • Worms rarely work by FQDN – tending to favour IP or network range. Publishing by FQDN prevents any traffic from getting in unless it asks for the exact URL and not just http://81.171.168.73:80.
    • Using HTTP filter verbs (signature strings and method blocking) to eliminate whole classes of attack at the protocol level.

    Some examples of protecting a web server using web publishing and HTTP filtration are:

    • Limit header length, query and URL length.
    • Verify normalisation – http://81.171.168.73/../../etc is not allowed.
    • Allow only specified methods (GET, HEAD, POST, etc.).
    • Block specified extensions (.EXE, .BAT, .CMD, .COM, .HTW, .IDA, .IDQ, .HTR, .IDC, .SHTM, .SHTML, .STM, .PRINTER, .INI, .LOG, .POL, .DAT, etc.)
    • Block content containing URL requests with certain signatures (.., ./, \, :, % and &)
    • Change/remove headers to provide disinformation – putting ISA Server in front of an Apache server is a great way to prevent UNIX attacks by making hackers think they are attacking a Windows server.
    • Block applications based on the header.
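
    A few of these rules are easy to sketch in code (the method list, extension list and length limit below are illustrative choices, not ISA Server defaults):

    ```python
    # A sketch of some of the web publishing protections above: verb
    # whitelisting, a URL length limit, an extension blocklist and a
    # normalisation check.
    from urllib.parse import urlparse

    ALLOWED_METHODS = {"GET", "HEAD", "POST"}
    BLOCKED_EXTENSIONS = {".exe", ".bat", ".cmd", ".ida", ".idq", ".htr", ".printer"}
    MAX_URL_LENGTH = 2048

    def request_allowed(method: str, url: str) -> bool:
        if method.upper() not in ALLOWED_METHODS:
            return False                      # only specified methods
        if len(url) > MAX_URL_LENGTH:
            return False                      # limit URL length
        path = urlparse(url).path
        if ".." in path.split("/"):
            return False                      # fails the normalisation check
        if any(path.lower().endswith(ext) for ext in BLOCKED_EXTENSIONS):
            return False                      # blocked extension
        return True

    print(request_allowed("GET", "http://81.171.168.73/../../etc"))  # False
    print(request_allowed("GET", "http://example.com/index.html"))   # True
    print(request_allowed("TRACE", "http://example.com/"))           # False
    ```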

    Some headers to look for include:

    • Request headers:
      • MSN Messenger: HTTP header=User-Agent; Signature=MSN Messenger
      • Windows Messenger: HTTP header=User-Agent; Signature=MSMSGS
      • AOL Messenger (and Gecko browsers): HTTP header=User-Agent; Signature=Gecko/
      • Yahoo Messenger: HTTP header=Host; Signature=msg.yahoo.com
      • Kazaa: HTTP header=P2P-Agent; Signature=Kazaa, Kazaaclient
      • Kazaa: HTTP header=User-Agent; Signature=Kazaa Client
      • Kazaa: HTTP header=X-Kazaa-Network; Signature=KaZaA
      • Gnutella: HTTP header=User-Agent; Signature=Gnutella
      • Gnutella: HTTP header=User-Agent; Signature=Gnucleus
      • Edonkey: HTTP header=User-Agent; Signature=e2dk
    • Response header:
      • Morpheus: HTTP header=Server; Signature=Morpheus
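
    Applied in code, signature blocking amounts to little more than substring matching on the relevant header (the case-insensitive matching strategy here is an assumption on my part):

    ```python
    # A sketch of header signature blocking using entries from the table above.
    BLOCK_SIGNATURES = [
        ("User-Agent", "MSN Messenger"),   # MSN Messenger
        ("User-Agent", "MSMSGS"),          # Windows Messenger
        ("Host", "msg.yahoo.com"),         # Yahoo Messenger
        ("X-Kazaa-Network", "KaZaA"),      # Kazaa
        ("User-Agent", "Gnutella"),        # Gnutella
    ]

    def headers_allowed(headers: dict) -> bool:
        for header, signature in BLOCK_SIGNATURES:
            if signature.lower() in headers.get(header, "").lower():
                return False
        return True

    print(headers_allowed({"User-Agent": "MSMSGS (8.5)"}))                        # False
    print(headers_allowed({"User-Agent": "Mozilla/4.0", "Host": "example.com"}))  # True
    ```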

    Some firewall challenges: protecting DNS

    Whilst some DNS protection is available by filtering TCP/UDP port 53, ISA Server filters can examine traffic for DNS host name overflows, length overflows, and zone transfers from privileged ports (1-1023) or high ports (1024 and above).
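
    The host name and length overflow checks boil down to enforcing the DNS limits of 255 bytes per name and 63 bytes per label, something like this sketch:

    ```python
    # A sketch of the DNS overflow checks: anything beyond the protocol's
    # own limits is suspicious by definition.
    def dns_name_suspicious(name: str) -> bool:
        if len(name) > 255:
            return True                                        # name overflow
        return any(len(label) > 63 for label in name.split("."))  # label overflow

    print(dns_name_suspicious("www.example.com"))        # False
    print(dns_name_suspicious("a" * 300 + ".example"))   # True
    ```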

    Some firewall challenges: protecting SMTP

    When it comes to mail protection, anti-spam and anti-virus vendors cover SMTP relays, but ISA Server filters can examine protocol usage, i.e.:

    • Checking that TCP port 25 traffic really is SMTP.
    • Checking for a buffer overflow in the RCPT TO: command.
    • Blocking someone using the VRFY command.
    • Stripping an attachment or blocking a user.

    Using such a solution adds to the defence in depth strategy, using the firewall to add another layer of defence to the mail system.
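
    A minimal sketch of those protocol checks (the verb list and length limit are illustrative assumptions):

    ```python
    # Verify port 25 traffic really is SMTP, block VRFY, and reject
    # oversized RCPT arguments as a crude buffer-overflow guard.
    SMTP_VERBS = {"HELO", "EHLO", "MAIL", "RCPT", "DATA", "QUIT", "RSET", "NOOP", "VRFY"}
    MAX_RCPT_LENGTH = 512

    def smtp_command_allowed(line: str) -> bool:
        verb = line.split(" ", 1)[0].upper()
        if verb not in SMTP_VERBS:
            return False   # port 25 traffic that isn't SMTP at all
        if verb == "VRFY":
            return False   # block address-verification probing
        if verb == "RCPT" and len(line) > MAX_RCPT_LENGTH:
            return False   # suspiciously long recipient: possible overflow
        return True

    print(smtp_command_allowed("RCPT TO:<user@example.com>"))    # True
    print(smtp_command_allowed("VRFY postmaster"))               # False
    print(smtp_command_allowed("RCPT TO:<" + "A" * 1000 + ">"))  # False
    ```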

    Some firewall challenges: encapsulated traffic

    Encapsulated traffic can cause some concerns for a network administrator, as IPSec (AH and ESP), PPTP, etc. cannot be scanned at the ISA Server if they are published or otherwise allowed through. Tunnelled traffic will be logged, but not scanned, as ISA Server cannot look inside the tunnel unless it is terminating the VPN. The administrator is faced with a choice – open more ports and use application filters, or tunnel traffic without inspection. NAT also has some implications.

    ISA Server can, however, perform intra-tunnel VPN inspection, so VPN traffic can be inspected at the application layer. VPN client traffic is treated as a dedicated network so destinations can be controlled, along with the use of application filter rules.

    VPN clients must be hardened. If not, hackers can attack clients and ride the VPN into the corporate network. Client-based intrusion detection systems and firewalls can help, but the ideal solution is VPN quarantine (e.g. Windows Server 2003 network access quarantine control), as the most common entry to the network for malware is from mobile devices either VPNing into the network, or returning to the network after being infected whilst away from it (perhaps connected to other networks, including the Internet).

    Alternatives to a VPN that should be considered are:

    • E-mail: RPC over HTTP, or Outlook Web Access (OWA). POP3 and IMAP4 should be avoided as they are not fully featured.
    • Web-enabled extranet applications: SSL.
    • Other applications: RPC filtration with ISA Server.

    Don’t forget the internal network

    Internal network defences are another factor to be considered. Networks are generally one large TCP/IP space, segmented by firewalls to the Internet. Trust is implicit throughout the organisation but this cannot be relied upon and network segmentation is critical (cf. a bank, where entering a branch does not gain access to the vault). Internal users are dangerous too.

    • The Windows Firewall in Windows XP SP2 (Internet Connection Firewall in Windows Server 2003 and earlier versions of Windows XP) is a vital tool in preventing network-based attacks, by blocking unsolicited inbound traffic. Ports can be opened for services running on the computer, and enterprise administration is facilitated through group policy. Microsoft recommend that use of the Windows Firewall is combined with network access quarantine control; however, it does not have any egress filters (i.e. controls over outbound traffic).
    • Virtual LANs (VLANs) can be used to isolate like services from one another. Switch ACLs are used to control traffic flow between VLANs at layer 3. Layer 2 VLANs may be used where no routing is desired. By using internal firewalls, port-level access to internal VLANs can be controlled.
    • IPSec is a method of securing internal IP traffic, mutually authenticating end points. It is used to ensure encrypted and authenticated communications at the IP layer, providing transport-layer security that is independent of applications or application layer protocols. It protects against spoofing, tampering on the wire and information disclosure. Mutual device authentication can be provided using certificates or Kerberos (or a pre-shared key – but this is only recommended for testing scenarios). Authentication headers (AH) should be used to provide packet integrity, but AH does not encrypt, allowing for network intrusion detection. Encapsulating security payload (ESP) provides packet integrity and confidentiality, but its encryption prevents packet inspection. Consequently, careful planning is required to determine which traffic should be secured.

    One use of IPSec is to allow domain replication to pass through firewalls, creating an IPSec policy on each domain controller to secure traffic to its replication partners. ESP 3DES should be used for encryption and the firewall should be configured to allow UDP port 500 for internet key exchange (IKE) and IP protocol 50 for ESP.
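
    Expressed as a sketch, the firewall policy for that scenario reduces to two rules (the addresses are made up for illustration):

    ```python
    # Permit IKE (UDP 500) and ESP (IP protocol 50) between the two
    # domain controllers; everything else travels inside the ESP tunnel.
    DC_A, DC_B = "10.0.1.10", "10.0.2.10"
    RULES = [
        {"proto": "udp", "port": 500},  # Internet key exchange (IKE)
        {"proto": 50},                  # ESP carries the encrypted replication
    ]

    def permitted(src, dst, proto, port=None):
        if {src, dst} != {DC_A, DC_B}:
            return False
        return any(r["proto"] == proto and r.get("port") in (None, port)
                   for r in RULES)

    print(permitted(DC_A, DC_B, "udp", 500))  # True  (IKE)
    print(permitted(DC_A, DC_B, 50))          # True  (ESP)
    print(permitted(DC_A, DC_B, "tcp", 445))  # False (goes inside ESP instead)
    ```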

    Potential issues around wireless network security are well publicised. The two most common security configurations each have their own limitations:

    • Wired equivalent privacy (WEP) relies on static WEP keys which are not dynamically changed and are therefore vulnerable to attack. There is no standard method for provisioning static WEP keys to clients, and the principle of static keys does not scale well, with a compromised key exposing everyone.
    • MAC address filtering is also limited by the potential for an attacker to spoof an allowed MAC address.

    Possible solutions include password-based layer 2 authentication (IEEE 802.1x with PEAP/MS CHAP v2) and certificate-based layer 2 authentication (IEEE 802.1x EAP-TLS). Other options include:

    • VPN connectivity using L2TP/IPSec (preferred) or PPTP. This does not allow for roaming but is useful when accessing public wireless hot spots; however there is no computer authentication, or processing of computer group policy settings.
    • IPSec, but this has some interoperability issues.

    Security type       Security level  Ease of deployment  Ease of integration
    Static WEP          Low             High                High
    IEEE 802.1x (PEAP)  High            Medium              High
    IEEE 802.1x (TLS)   High            Low                 High
    VPN (L2TP/IPSec)    High            Medium              Low
    IPSec               High            Low                 Low

    Summary

    In summary, firewalls are placed in different locations for different reasons. These must be understood and filtered accordingly. Core functionality can be extended with protocol filters to cover a specific scenario but no one device is a silver bullet. Solutions are more important than devices and firewall configuration is more than a networking decision – it also requires application awareness.

    Links

    Microsoft ISA Server
    ISA Server Community
    ISAserver.org
    Zebedee (a simple, secure TCP and UDP tunnel program)

  • 10,000 feet view of Microsoft Exchange Server 2003

    For anyone who is new to Exchange Server 2003, here’s a brief overview that may be of use.

    Microsoft Exchange Server 4.0 was launched in 1996, as a replacement for Microsoft Mail 3.x. It was Microsoft’s first groupware product, competing directly with Novell GroupWise and Lotus Notes. Later versions (v5.0, v5.5) added functionality and improved the scalability of the solution, with Exchange Server 5.5 being the version which really took hold of the market. Exchange 2000 (v6.0) was a major rewrite, leaving behind its own directory service and using Active Directory instead (which is based on the Exchange directory technology), and featuring a new system of storage groups supporting multiple mailbox and public folder stores (for improved database backup and restoration times). Exchange 2000 also switched its internal message transport protocol from X.400 to SMTP, making use of (and extending) the SMTP features in Microsoft Internet Information Services (IIS). Microsoft Exchange Server 2003 (v6.5) builds on the scalability, reliability, performance and manageability of Exchange Server 2000 and is the current version, albeit without the conferencing, instant messaging and chat features, which have been replaced by the Microsoft Office Live Communications Server product.

    Exchange Overview

    The Exchange environment is known as an organization. Within an organization, Exchange Server 4.0-5.5 divides the infrastructure into one or more sites (similar to the Active Directory site concept), linked by connectors of varying types (e.g. RPC, X.400). Exchange Server 2000 and 2003 use routing groups instead of sites, and also introduce the concept of administrative groups for organisations where administration is undertaken by different groups of staff (e.g. in a global organisation with separate teams for Americas, EMEA and Asia-Pacific).

    Exchange servers may be configured for a variety of specialist purposes:

    • A bridgehead is a server dedicated to routing e-mail for a routing group.
    • Front end (protocol) servers act as concentrators for client connections, where protocol conversion would be an overhead (e.g. HTTP connections using Outlook Web Access).
    • Back end (storage) servers could be mailbox servers, public folder servers, or a combination of the two.
    • There may also be other servers dedicated to providing services such as fax connections, mail archival and retrieval (near-line storage), or instant messaging.

    Because Exchange Server 2000 and 2003 use Active Directory, Exchange servers require a global catalog server to be placed nearby. Other products, such as Microsoft ISA Server 2004, may also be used to increase security – for example, filtering inbound e-mail or publishing an Outlook Web Access (OWA) server.

    Links

    Microsoft Exchange Server
    Microsoft Exchange Server resource site