Protecting my devices with an invisible shield


When I bought my first iPhone, I bought a rubber case to protect it (and a set of screen protectors). After a few months, the rubber case split, so I bought a polycarbonate case instead. When I came to sell the phone, I removed it from the case and found that it was still scratched – despite having spent around £50 in total on the various protective accessories. With my new iPhone 3G, I decided to try something different, and I knew that one of my friends had been pleased with his InvisibleSHIELD.

InvisibleSHIELD is a clear protective film that is applied to the device – so it looks just as the original manufacturer intended (albeit in a strange wrapper) rather than in an external case with questionable aesthetics and which may restrict your ability to use your device with certain accessories. Each InvisibleSHIELD is cut to size for a particular device (be it a laptop, phone, GPS, PDA or even a watch). Furthermore, if you need to remove the film (e.g. to sell the device in as new condition – as my friend Alex did with his iPhone), then it easily detaches and leaves no stickiness behind.

I had two InvisibleSHIELDs to install – first up I protected my 30GB iPod with Video (which went very smoothly) and then I moved on to my iPhone 3G (which was very difficult). The best piece of advice I was given was to watch the videos first. It’s not complex – but there is definitely a technique – and I would have paid someone to do my iPhone if I’d known they could do it well (unfortunately the curved back of the iPhone makes it very difficult to apply the film to, and I have a couple of air bubbles that I missed as I fought to get all the edges and corners stuck down in the right places). I’m now following the manufacturer’s instructions and leaving the devices alone whilst the ShieldSpray application solution dries.

On the whole, I’m pleased with my InvisibleSHIELDs. Of course, they are not completely invisible – as with any adhesive film, there’s some extra glare on the screen of my iPod now – and, as mentioned previously, the iPhone protector was difficult to install. Even so, I can use the devices without a case getting in the way (for instance, the iPod no longer needs to be removed from a case to put it into my speaker system, or onto the dock connector in my wife’s car). The feel of the shield also means that there is some slight friction against a desk, or the palm of my hand, making it less likely to slide away. There is one significant flaw in the design though – the points on the device that are still exposed after the shield is in place are the corners – i.e. those areas most likely to get scratched up if the device does take a tumble.

Would I buy another InvisibleSHIELD? Almost certainly yes. In fact, if I ever get to the point that my MacBook goes out and about with me more, then I’ll probably buy one to protect it. It’s a low-cost solution with a high value. Even so, I’m a perfectionist and if there was a local distributor who would fit a shield for me (with no air bubbles and all edges perfectly lined up) then I’d pay an extra £20 for that service (on an iPhone 3G at least!).


Microsoft infrastructure architecture considerations: part 5 (security)


Continuing the series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, based on the MCS Talks: Enterprise Infrastructure series, in this post I’ll look at some of the infrastructure architecture considerations relating to security.

The main security challenges which organisations are facing today include: management of access rights; provisioning and de-provisioning (with various groups of users – internal, partners and external); protecting the network boundaries (as there is a greater level of collaboration between organisations); and controlling access to confidential data.

Most organisations today need some level of integration with partners and the traditional approach has been one of:

  • NT Trusts (rarely used externally) – not granular enough.
  • Shadow accounts with matching usernames and passwords – difficult to administer.
  • Proxy accounts shared by multiple users – with no accountability and a consequential lack of security.

Federated rights management is a key piece of the “cloud computing” model and allows for two organisations to trust one another (cf. an NT trust) but without the associated overheads – and with some granularity. The federated trust is loosely coupled – meaning that there is no need for a direct mapping between users and resources – instead an account federation server exists on one side of the trust and a resource federation server exists on the other.
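
To illustrate what “loosely coupled” means in practice, here is a minimal Python sketch. It is deliberately simplified and is not AD FS (real federation uses certificate-signed SAML tokens rather than the shared HMAC secret assumed here): the account side issues a signed set of claims about one of its own users, and the resource side only needs to trust the issuer’s key – it never has to hold a matching account for every user.

```python
import base64, hashlib, hmac, json, time

# Assumption: a signing key established when the federation trust was set up.
SIGNING_KEY = b"key-established-when-the-federation-trust-is-set-up"

def b64(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data)

def issue_token(user: str, claims: dict, ttl_seconds: int = 3600) -> bytes:
    """Account federation server: package signed claims about one of its own users."""
    payload = json.dumps({"sub": user, "claims": claims,
                          "exp": int(time.time()) + ttl_seconds}).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    return b64(payload) + b"." + b64(signature)

def validate_token(token: bytes) -> dict:
    """Resource federation server: trust the claims if the signature and expiry check out."""
    payload_b64, signature_b64 = token.split(b".")
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(signature_b64)):
        raise PermissionError("token was not issued by the trusted account federation server")
    body = json.loads(payload)
    if body["exp"] < time.time():
        raise PermissionError("token has expired")
    return body   # access is granted from the claims - no shadow or proxy accounts needed

token = issue_token("alice@partner.example", {"role": "purchasing"})
print(validate_token(token)["claims"])
```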

As information is shared with customers and partners, traditional location-based methods of controlling information (firewalls, access control lists and encryption) have become ineffective. Users e-mail documents back and forth, paper copies are created as documents are printed, online data storage has become available and portable data storage devices have become less expensive and more common with increasing capacities. This makes it difficult to set a consistent policy for information management and then to manage and audit access. It’s almost inevitable that there will be some information loss or leakage.

(Digital) rights management is one solution – most people are familiar with DRM on music and video files from the Internet and the same principles may be applied to IT infrastructure. Making use of 128-bit encryption together with policies for access and usage rights, rights management provides persistent protection to control access across the information lifecycle. Policies are embedded within the document (e.g. for the ability to print, view, edit, or forward a document – or even for its expiration) and access is only provided to trusted identities. It seems strange to me that we are all so used to the protection of assets with perceived worth to consumers but that commercial and government documentation is so often left unsecured.
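
As a rough sketch of the principle (this is not the Windows Rights Management Services implementation, just a toy using the Python cryptography package, whose Fernet recipe happens to use 128-bit AES), the usage policy travels inside the protected blob and a compliant viewer enforces it after decryption:

```python
import json, time
from cryptography.fernet import Fernet   # third-party "cryptography" package

key = Fernet.generate_key()   # in a real deployment the key would come from the licensing server
f = Fernet(key)

policy = {
    "allowed_users": ["alice@example.com"],
    "can_print": False,
    "can_edit": False,
    "can_forward": False,
    "expires": int(time.time()) + 7 * 24 * 3600,   # the document "self-expires" after a week
}
document = {"policy": policy, "body": "Commercially sensitive text..."}
protected = f.encrypt(json.dumps(document).encode())   # this blob is what gets e-mailed around

# A compliant viewer, having obtained the key on behalf of an authorised user:
opened = json.loads(f.decrypt(protected).decode())
if time.time() > opened["policy"]["expires"]:
    raise PermissionError("document has expired")
if not opened["policy"]["can_print"]:
    print("printing disabled by the embedded policy")
```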

Of course, security should be all-pervasive, and this post has just scratched the surface, looking at a couple of challenges faced by organisations as the network boundaries are eroded by increased collaboration. In the next post of this series, I’ll take a look at some of the infrastructure architecture considerations for providing high availability solutions.

Microsoft infrastructure architecture considerations: part 4 (virtualisation)


Continuing the series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, based on the MCS Talks: Enterprise Infrastructure series, in this post I’ll look at some of the architectural considerations for using virtualisation technologies.

Virtualisation is a huge discussion point right now but before rushing into a virtualisation solution it’s important to understand what the business problem is that needs to be solved.

If an organisation is looking to reduce data centre hosting costs through a reduction in the overall heat and power requirements then virtualisation may help – but if they want to run applications that rely on legacy operating system releases (like Windows NT 4.0) then the real problem is one of support – the operating system (and probably the application too) are unsupported, regardless of whether or not you can physically touch the underlying platform!

Even if virtualisation does look like it could be a solution (or part of one), it’s important to consider management – I often come up against a situation whereby, for operational reasons, virtualisation is more expensive because the operations teams see the host machine (even if it is just a hypervisor) as an extra server that needs to be administered. That’s a rather primitive way to look at things, but there is a real issue there – management is the most important consideration in any virtualisation solution.

Microsoft believes that it has a strong set of products when it comes to virtualisation, splitting the technologies out as server, desktop, application and presentation virtualisation, all managed with products under the System Center brand.

Microsoft view of virtualisation

Perhaps the area where Microsoft is weakest at the moment (relying on partners like Citrix and Quest to provide a desktop broker service) is desktop virtualisation. Having said that, it’s worth considering the market for a virtualised desktop infrastructure – with notebook PC sales outstripping demand for desktops it could be viewed as a niche market. This is further complicated by the various alternatives to a virtual desktop running on a server somewhere: remote boot of a disk-less PC from a SAN; blade PCs (with an RDP connection from a thin client); or a server-based desktop (e.g. using presentation virtualisation).

Presentation virtualisation is also a niche technology as it failed to oust so-called “thick client” technologies from the infrastructure. Even so, it’s not uncommon (think of it as a large niche – if that’s not an oxymoron!) and it works particularly well in situations where a large volume of data needs to be accessed in a central database, as the remote desktop client is local to the data – rather than to the (possibly remote) user. This separation of the running application from the point of control allows for centralised data storage and a lower cost of management for applications (including session brokering capabilities) and, using new features in Windows Server 2008 (or with third-party products on older releases of Windows Server), this may be further enhanced with the ability to provide gateways for RPC/HTTPS access (including a brokering capability, avoiding the need for a full VPN solution) and web access/RemoteApp sessions (terminal server sessions which appear as locally-running applications).

The main problem with presentation virtualisation is incompatibility between applications, or between the desktop operating system and an application (which, for many, is the main barrier to Windows Vista deployment) – that’s where application virtualisation may help. Microsoft Application Virtualization (App-V – formerly known as SoftGrid) attempts to solve this issue of application-to-application incompatibility as well as aiding application deployment (with no requirement to test for application conflicts). To do this, App-V virtualises the application configuration (removing it from the operating system) and each application runs in its own runtime environment with complete isolation. This means that applications can run on clients without being “installed” (so it’s easy to remove unused applications) and allows administration from a central location.
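
The following toy Python sketch illustrates the isolation principle only (App-V itself virtualises the registry and file system at a much lower level): each application reads a merged view of the shared machine configuration and its own private overlay, and its writes never touch shared state, so applications with conflicting settings can coexist on the same client.

```python
# Shared machine configuration (think of it as a stand-in for part of the registry).
machine_config = {"HKLM/Software/Vendor/Runtime": "v1.0"}

class VirtualisedApp:
    def __init__(self, name):
        self.name = name
        self.overlay = {}                      # per-application sandbox for writes

    def read(self, setting):
        # the overlay wins; otherwise fall through to the shared machine configuration
        return self.overlay.get(setting, machine_config.get(setting))

    def write(self, setting, value):
        self.overlay[setting] = value          # isolated: other applications never see this

legacy_app = VirtualisedApp("LegacyApp")
modern_app = VirtualisedApp("ModernApp")
modern_app.write("HKLM/Software/Vendor/Runtime", "v2.0")   # would normally break LegacyApp

print(legacy_app.read("HKLM/Software/Vendor/Runtime"))     # still v1.0
print(modern_app.read("HKLM/Software/Vendor/Runtime"))     # v2.0
```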

The latest version of App-V is available for a full infrastructure (Microsoft System Center Application Virtualization Management Server), a lightweight infrastructure (Microsoft System Center Application Virtualization Streaming Server) or in MSI-delivered mode (Microsoft System Center Application Virtualization Standalone Mode).

Finally host (or server) virtualisation – the most common form of virtualisation but still deployed on only a fraction of the world’s servers – although there are few organisations that would not virtualise at least a part of their infrastructure, given a green-field scenario.

The main problems which host virtualisation can address are:

  • Optimising server investments by consolidating roles (driving up utilisation).
  • Business continuity management – a whole server becomes a few files, making it highly portable (albeit introducing security and management issues to resolve).
  • Dynamic data centre.
  • Test and development.

Most 64-bit versions of Windows Server have enterprise-ready virtualisation built in (in the shape of Hyper-V) and competitor solutions are available for a 32-bit environment (although most hardware purchased in recent years is 64-bit capable and has the necessary processor support). Windows NT is not supported on Hyper-V, however – so if there are legacy NT-based systems to virtualise, then Virtual Server 2005 R2 may be a more appropriate technology selection (NT 4.0 is still out of support, but at least it is a scenario that has been tested by Microsoft).

In the next post in this series, I’ll take a look at some of the infrastructure architecture considerations relating to security.

Photosynth


A few months back, I heard something about Photosynth – a new method of modelling scenes using photographic images to build up a 3D representation – and yesterday I got the chance to have a look at it myself. At first, I just didn’t get it, but after having seen a few synths, I now think that this is really cool technology with a lot of potential for real world applications.

It’s difficult to describe Photosynth but it’s essentially a collage of two-dimensional images used together to create a larger canvas through which it’s possible to navigate in three dimensions (four actually). It began life as a research project on “photo tourism” at the University of Washington, after which Microsoft Research and Windows Live Labs took it on to produce Photosynth, using technology gained with Microsoft’s acquisition of Seadragon Software. The first live version of Photosynth was launched yesterday.

Clearly this is not a straightforward application – it’s taken two years of development with an average team size of 10 people just to bring the original research project to the stage it’s at today – so I’ll quote the Photosynth website for a description of how it works:

“Photosynth is a potent mixture of two independent breakthroughs: the ability to reconstruct the scene or object from a bunch of flat photographs, and the technology to bring that experience to virtually anyone over the Internet.

Using techniques from the field of computer vision, Photosynth examines images for similarities to each other and uses that information to estimate the shape of the subject and the vantage point the photos were taken from. With this information, we recreate the space and use it as a canvas to display and navigate through the photos.

Providing that experience requires viewing a LOT of data though—much more than you generally get at any one time by surfing someone’s photo album on the web. That’s where our Seadragon™ technology comes in: delivering just the pixels you need, exactly when you need them. It allows you to browse through dozens of 5, 10, or 100(!) mega-pixel photos effortlessly, without fiddling with a bunch of thumbnails and waiting around for everything to load.”
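
For the curious, the first step described above – finding corresponding points between overlapping photos – can be approximated with a few lines of OpenCV. This is only a loose illustration (Photosynth’s real pipeline adds structure-from-motion to recover the 3D geometry and camera positions) and the file names are placeholders:

```python
import cv2

img1 = cv2.imread("desk_photo_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("desk_photo_02.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)               # detect distinctive corners/blobs
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} candidate correspondences between the two photos")
# From many such pairwise matches, a structure-from-motion solver can estimate where
# each camera stood and build the sparse point cloud you see in the Photosynth viewer.
```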

I decided to try Photosynth out for myself and the first thing I found was that I needed to install some software. On my Windows computer it installed a local application to create synths and an ActiveX control to view them. Creating a synth from 18 sample images of my home office desk took just a few minutes (each of the images I supplied was a 6.1 mega-pixel JPG taken on my Nikon D70) and I was also able to provide copyright/Creative Commons licensing information for the images in the synth:

Once it had uploaded to the Photosynth site, I could add a description, view other people’s comments, get the links to e-mail/embed the synth, and provide location information. I have to say that I am truly amazed how well it worked. Navigate around to the webcam above my laptop and see how you can go around it and see the magnet on the board behind!

It’s worth pointing out that I have not read the Photosynth Photography Guide yet – this was just a set of test photos looking at different things on and around the desk. If you view the image in grid view you can see that there are three images it didn’t know what to do with – I suspect that if I had supplied more images around those areas then they could have worked just fine.

You may also notice a lack of the usual office artifacts (family photos, etc.) – they were removed before I created the synth, for privacy reasons, at the request of one of my family members.

My desk might not be the best example of this technology, so here’s another synth that is pretty cool:

In this synth, called Climbing Aegialis (by J.P.Peter), you can see a climber making his way up the rock face – not just in three dimensions, but in four. Using the . and , keys, it’s possible to navigate through the images according to the order in which they were taken.

Potting Shed is another good example – taken by Rick Szeliski, a member of the team that put this product together:

Hover over the image to see a doughnut-shaped ring called a halo and click this to navigate around the image in 3D. If you use the normal navigation controls (including in/out with the mouse scrollwheel) it is possible to go through the door and enter the potting shed for a look inside!

There are also some tiny pixel-sized pin-pricks visible as you navigate around the image. These are the points that were identified whilst the 3D matching algorithm was running. They can be toggled on and off with the p key and in this example they are so dense in places that the image can actually be made out from just the pixel cloud.

Now that the first release of Photosynth is up and running, the development team will transition from Windows Live Labs into Microsoft’s MSN business unit where they will work on using the technology for real and integrating it with other services – like Virtual Earth, where synths could be displayed to illustrate a particular point on a map. Aside from photo tourism, other potential applications for the technology include real estate, art and science – anywhere where visualising an item in three or four dimensions could be of use.

The current version of Photosynth is available without charge to anyone with a Windows LiveID and the service includes 20GB of space for images. The synths themselves can take up quite a bit of space and, at least in this first version of the software, all synths are uploaded (a broadband Internet connection will be required). It’s also worth noting that all synths are public so photos will be visible to everyone on the Internet.

If you couldn’t see the synths I embedded in this post, then you need to install an ActiveX control (Internet Explorer) or plugin (Firefox). Direct3D support is also required so Photosynth is only available for Windows (XP or later) at the moment but I’m told that a Mac version is on the way – even Microsoft appreciates that many of the people who will be interested in this technology use a Mac. On the hardware side an integrated graphics card is fine but the number of images in a synth will be limited by the amount of available RAM.

Finally, I wanted to write this post yesterday but, following the launch, the Photosynth website went into meltdown – or as Microsoft described it “The Photosynth site is a little overwhelmed just now” – clearly there is a lot of interest in this technology. For more news on the development of Photosynth, check out the Photosynth blog.

At last… my wait for a white 3G iPhone upgrade is nearly over


After some initial scepticism, I bought an iPhone on the day it launched in the UK. It was great but Wi-Fi coverage is limited (so is 3G for that matter) and I wanted faster browsing (GPS will also be nice) so I decided to upgrade. In order to fund this, I unlocked my first iPhone and sold it on eBay, 2 days before the new one was launched. As I paid £269 for the phone and sold it for just over £200 after PayPal and eBay charges (not bad for an 8-month-old handset), I was pretty pleased – and that’s more than the upgrade will cost (£159) so you could say I made a small profit (except I also threw some almost-new accessories into the deal, so I guess I’m about even), but for the last couple of months I’ve been paying an iPhone tariff and using an old Nokia handset with only limited data capabilities…

The problem is that I would like to have the white model – and, if you live in the UK, that’s only been available from Apple. Of the three authorised retailers (Apple, O2 and Carphone Warehouse), only O2 and Carphone Warehouse can handle upgrades – so no white iPhones for loyal customers then… only for new business customers via Apple. Until today!!!

This morning I made my twice-weekly (Monday and Friday) call to my local O2 store to see if they had white iPhones in stock or any idea when they might (“no”, in both cases) and checked the web for news. Apple customer forum moderators seem to be deleting posts about this issue (as with most things that are not pro-Apple) so Neil Holmes’ White iPhone Blog is a good place to start (there’s a good thread at MacRumors on the subject too) and I found that, from today, Carphone Warehouse (CPW) have white iPhones available for pre-order. Yay!!!

Apple iPhone 3G available in white at Carphone Warehouse

(still no news from O2 though…)

Unfortunately the CPW web site doesn’t mention anything about upgrades, so I called CPW’s 0870 (national rate rip-off) sales number and spoke to someone who said I couldn’t upgrade until they had stock (expected later today). After reading Neil’s experiences, I called back and spoke to another CPW representative, who took my details, contacted O2 for authorisation and called me back to take payment. Frankly I was amazed when he called back – CPW have a very bad reputation for customer service (which is why I checked that my billing will still be through O2) but I have an order reference number and my shipment will be confirmed later today or over the weekend, for delivery on Tuesday.

To CPW’s credit, the guy who dealt with my order (Soi) was helpful, called me back as promised, asked if I wanted accessories (but wasn’t pushy when I declined) and didn’t try to force me to buy insurance (it was offered but there was no pressure when I said no) – so that means he was actually better than the O2 representative that sold me my last iPhone in an O2 store last November.

Once I have that shipment notification I’ll be a very happy boy. Until then I wait with more than just a little trepidation.

[Update: 22 August 2008 @18:17: Just received an e-mail to say that the order has been processed and will be despatched shortly… could it be co-incidence that I phoned CPW within the last half hour to see what was happening with shipment?]

[Update: 26 August 2008 @09:42: It’s here! It even comes in a white box!]

Microsoft infrastructure architecture considerations: part 3 (controlling network access)


Continuing the series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, based on the MCS Talks: Enterprise Infrastructure series, in this post, I’ll look at some of the considerations for controlling access to the network.

Although network access control (NAC) has been around for a few years now, Microsoft’s network access protection (NAP) is new in Windows Server 2008 (previous quarantine controls were limited to VPN connections).

It’s important to understand that NAC/NAP are not security solutions but are concerned with network health – assessing an endpoint and comparing its state with a defined policy, then removing access for non-compliant devices until they have been remediated (i.e. until the policy has been enforced).
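
Conceptually, the assess/compare/remediate loop looks something like the following Python sketch. It is only an illustration of the logic, not the NAP API (which uses system health agents on the client and system health validators on the server side):

```python
# A hypothetical health policy and endpoint statement of health, for illustration only.
health_policy = {
    "firewall_enabled": True,
    "antivirus_signatures_max_age_days": 7,
    "automatic_updates_enabled": True,
}

def assess(endpoint):
    """Return the list of policy checks the endpoint fails (its remediation actions)."""
    failures = []
    if endpoint["firewall_enabled"] != health_policy["firewall_enabled"]:
        failures.append("enable the Windows firewall")
    if endpoint["antivirus_signature_age_days"] > health_policy["antivirus_signatures_max_age_days"]:
        failures.append("update antivirus signatures")
    if not endpoint["automatic_updates_enabled"]:
        failures.append("turn on automatic updates")
    return failures

laptop = {"firewall_enabled": False, "antivirus_signature_age_days": 12,
          "automatic_updates_enabled": True}

failures = assess(laptop)
if failures:
    print("non-compliant: restrict network access until remediated ->", failures)
else:
    print("healthy: full network access granted")
```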

The real question as to whether to implement NAC/NAP is whether or not non-compliance represents a business problem.

Assuming that NAP is to be implemented, then there may be different policies required for different groups of users – for example internal staff, contractors and visitors – and each of these might require a different level of enforcement; however, if the policy is to be applied, enforcement options are:

  • DHCP – easy to implement but also easy to avoid by using a static IP address. It’s also necessary to consider the healthcheck frequency as it relates to the DHCP lease renewal time.
  • VPN – more secure but relies on the Windows Server 2008 RRAS VPN so may require a third-party VPN solution to be replaced. In any case, full-VPN access is counter to industry trends as alternative solutions are increasingly used.
  • 802.1x – requires a complex design to support all types of network user and not all switches support dynamic VLANs.
  • IPSec – the recommended solution – built into Windows, works with any switch, router or access point, provides strong authentication and (optionally) encryption. In addition, unhealthy clients are truly isolated (i.e. not just placed in a VLAN with other clients to potentially affect or be affected by other machines). The downside is that NAP enforcement with IPSec requires computers to be domain-joined (so will not help with visitors’ or contractors’ PCs) and is fairly complex from an operational perspective, requiring implementation of the health registration authority (HRA) role and a PKI solution.

In the next post in this series, I’ll take a look at some of the architectural considerations for using virtualisation technologies within the infrastructure.

Microsoft improves support for virtualisation – unless you’re using a VMware product


Software licensing always seems to be one step behind the technology. In the past, I’ve heard Microsoft comment that to virtualise one of their desktop operating systems (e.g. using VMware Virtual Desktop Infrastructure) was in breach of the associated licensing agreements – then they introduced a number of licensing changes – including the Vista Enterprise Centralised Desktop (VECD) – to provide a way forward (at least for those customers with an Enterprise Agreement). Similarly I’ve heard Microsoft employees state that using Thinstall (now owned by VMware and rebranded as ThinApp) to run multiple copies of Internet Explorer is in breach of the EULA (the cynic in me says that I’m sure they would have fewer concerns if the technology involved was App-V). A few years back, even offline virtual machine images needed to be licensed – then Microsoft updated their Windows Server licensing to include virtualisation rights but it was never so clear-cut for applications with complex rules around the reassignment of licenses (e.g. in a disaster recovery failover scenario). Yesterday, Microsoft took another step to bring licensing in line with customer requirements when they waived the previous 90-day reassignment rule for a number of server applications, allowing customers to reassign licenses from one server to another within a server farm as frequently as required (it’s difficult to run a dynamic data centre if the licenses are not portable!).

It’s important to note that Microsoft’s licensing policies are totally agnostic of the virtualisation product in use – but support is an entirely different matter.

Microsoft also updated their support policy for Microsoft software running on a non-Microsoft virtualisation platform (see Microsoft knowledge base article 897615) with an increased number of Microsoft applications supported on Windows Server 2008 Hyper-V, Microsoft Hyper-V Server (not yet a released product) or any third-party validated virtualisation platform – based on the Server Virtualization Validation Program (SVVP). Other vendors taking part in the SVVP include Cisco, Citrix, Novell, Sun Microsystems and Virtual Iron… but there’s a rather large virtualisation vendor who seems to be missing from the party…

[Update: VMware joined the party… they were just a little late]

Outlook cached mode is not available on a server with Terminal Services enabled


I was putting together a demo environment earlier today and needed to publish a Terminal Services RemoteApp, so I installed Terminal Services (and IIS) on my Windows Server 2008 notebook. Later on, I noticed that Outlook was not working in cached mode and I found that offline store (.OST) files and features that rely on them are disabled when running Outlook on a computer with Terminal Services enabled.

I can see why cached mode on a terminal server would be a little odd (it’s fair enough caching data on a remote client but it’s also reasonable to expect that the terminal server would be in the data centre – i.e. close to the Exchange Server) – even so, why totally disable it? Surely administrators could be given the choice to enable it if circumstances dictate that it’s an appropriate course of action.

Oh well… I’ve since removed the Terminal Services role and Outlook is working well again.

Microsoft infrastructure architecture considerations: part 2 (remote offices)


Continuing from my earlier post which sets the scene for a series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, in this post, I’ll look at some of the considerations for remote offices.

Geographically dispersed organisations face a number of challenges in order to support remote offices including: WAN performance/reliability; provisioning new services/applications/servers; management; remote user support; user experience; data security; space; and cost.

One approach that can help with some (not all) of these concerns is placing a domain controller (DC) in each remote location; but this has been problematic until recently because it increases the overall number of servers (it’s not advisable to co-locate other services on a domain controller because administration can’t be delegated to a local administrator on a domain controller and the number of Domain Admins should be kept to a minimum) and it’s a security risk (physical access to the domain controller computer makes a potential hacker’s job so much simpler). For that reason, Microsoft introduced read only domain controllers (RODCs) in Windows Server 2008.

There are still some considerations as to whether this is the appropriate solution though. Benefits include:

  • Administrative role separation.
  • Faster logon times (improved access to data).
  • Isolated corruption area.
  • Improved security.

whilst other considerations and potential impacts include:

  • The need for a schema update.
  • Careful RODC placement.
  • Impact on directory-enabled applications.
  • Possibility of site topology design changes.

Regardless of whether a remote office DC (either using the RODC capabilities or as a full DC) is deployed, server sprawl (through the introduction of branch office servers for a variety of purposes) can be combatted with the concept of a branch “appliance” – not in the true sense of a piece of dedicated hardware running an operating system and application that is heavily customised to meet the needs of a specific service – but by applying appliance principles to server design and running multiple workloads in a manner that allows for self-management and healing.

The first step is to virtualise the workloads. Hyper-V is built into Windows Server 2008 and the licensing model supports virtualisation at no additional cost. Using the server core installation option, the appliance (physical host) management burden is reduced with a smaller attack surface and reduced patching. Multiple workloads may be consolidated onto a single physical host (increasing utilisation and removing end-of-life hardware) but there are some downsides too:

  • There’s an additional server to manage (the parent/host partition) and child/guest partitions will still require management but tools like System Center Virtual Machine Manager (SCVMM) can assist (particularly when combined with other System Center products).
  • A good business continuity plan is required – the branch office “appliance” becomes a single point of failure and it’s important to minimise the impact of this.
  • IT staff skills need to be updated to manage server core and virtualisation technologies.

So, what about the workloads on the branch office “appliance”? First up is the domain controller role (RODC or full DC) and this can be run as a virtual machine or as an additional role on the host. Which is “best” is entirely down to preference – running the DC alongside Hyper-V on the physical hardware means there is one less virtual machine to manage and operate (multiplied by the number of remote sites) but running it in a VM allows the DC to be “sandboxed”. One important consideration is licensing – if Windows Server 2008 standard edition is in use (which includes one virtual operating system environment, rather than enterprise edition’s four, or datacenter edition’s unlimited virtualisation rights) then running the DC on the host saves a license – and there is still some administrative role separation as the DC and virtualisation host will probably be managed centrally, with a local administrator taking some responsibility for the other workloads (such as file services).

That leads on to a common workload – file services. A local file server offers a good user experience but is often difficult to back up and manage. One solution is to implement DFS-R in a hub and spoke arrangement and to keep the backup responsibility in the data centre. If the remote file server fails, then replication can be used to restore from a central server. Of course, DFS-R is not always ideal for replicating large volumes of data; however the DFS arrangement allows users to view local and remote data as though it were physically stored in a single location and there have been a number of improvements in Windows Server 2008 DFS-R (cf. Windows Server 2003 R2). In addition, SMB 2.0 is less “chatty” than previous implementations, allowing for performance benefits when using a Windows Vista client with a Windows Server 2008 server.

Using these methods, it should be possible to avoid remote file server backups and remote DCs should not need to be backed up either (Active Directory is a multi-master replicated database so it has an inherent disaster recovery capability). All that’s required is some method of rebuilding a failed physical server – and the options there will depend on the available bandwidth. My personal preference is to use BITS to ensure that the remote server always holds a copy of the latest build image on a separate disk drive and then to use this to rebuild a failed server with the minimum of administrator intervention or WAN traffic.
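
BITS is the Windows component that provides the throttled, resumable background transfer; purely as an illustration of the check-and-refresh logic behind the idea (the URL and paths below are hypothetical), a sketch in Python might look like this:

```python
import hashlib
import pathlib
import urllib.request

# Hypothetical locations, for illustration only.
IMAGE_URL = "https://deploy.example.com/images/branch-server.wim"
HASH_URL = IMAGE_URL + ".sha256"
LOCAL_IMAGE = pathlib.Path("D:/RecoveryImages/branch-server.wim")

def sha256_of(path):
    """Hash the local copy of the build image in 1MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

# The central deployment server publishes the hash of the current image alongside it.
published = urllib.request.urlopen(HASH_URL).read().decode().split()[0]

if not LOCAL_IMAGE.exists() or sha256_of(LOCAL_IMAGE) != published:
    # Only pull the (large) image across the WAN when it has actually changed.
    urllib.request.urlretrieve(IMAGE_URL, str(LOCAL_IMAGE))
    print("build image refreshed")
else:
    print("local build image is already current")
```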

In the next post in this series, I’ll take a look at some of the considerations for using network access protection to manage devices that are not compliant with the organisation’s security policies.

Microsoft infrastructure architecture considerations: part 1 (introduction)


Last week, I highlighted the MCS Talks: Enterprise Architecture series of webcasts that Microsoft is running to share the field experience of Microsoft Consulting Services (MCS) in designing and architecting Microsoft-based infrastructure solutions – and yesterday’s post picked up on a key message about software as a service/software plus services from the infrastructure futures section of session 1: infrastructure architecture.

Over the coming days and weeks, I’ll highlight some of the key messages from the rest of the first session, looking at some of the architectural considerations around:

  • Remote offices.
  • Controlling network access.
  • Virtualisation.
  • Security.
  • High availability.
  • Data centre consolidation.

Whilst much of the information will be from the MCS Talks, I’ll also include some additional information where relevant, but, before diving into the details, it’s worth noting that products rarely solve problems. Sure enough, buying a software tool may fix one problem, but it generally adds to the complexity of the infrastructure and in that way does not get to the root issue. Infrastructure optimisation (even a self assessment) can help to move IT conversations to a business level as well as allowing the individual tasks that are required to meet the overall objectives to be prioritised.

Even though the overall strategy needs to be based on business considerations, there are still architectural considerations to take into account when designing the technical solution and, even though this series of blog posts refers to Microsoft products, there is no reason (architecturally) why alternatives should not be considered.