World clock – handy for working out what time it is when contacting people on the other side of the world… (on a related note, if your system clock is wrong and you want to know the local time, try entering the time in Google).
[Ridiculing readers is probably not a good way to increase the popularity of this site but I honestly don’t think this person is a regular reader… none of you would really leave a comment like this… would you?]
Back in 2005, I published a list of useful mobile handset commands and it’s still attracting some interest. As the new mobile phone I bought this week came with a list of the default PINs for each of the UK mobile operators, I’m re-publishing that information here in case it’s useful to someone:
The number of user groups around Microsoft products seems to be increasing steadily and another group has recently started – the Active Directory UK user group (ADUG) – which aims to:
“[…] build a community of Active Directory users, be they experts or beginners, where they will be able to ask questions, share experiences and learn from each other and leading experts in the field. Regular meetings will be held, to discuss and learn about topical issues such as upgrade paths, virtualisation and compliance; these will be in addition to traditional topics such as replication, disaster recovery and administration.”
In this final infrastructure architecture post, I’ll outline the steps involved in building an infrastructure for data centre consolidation. In this example the infrastructure is a large cluster of servers to run a virtual infrastructure; however many of the considerations would be the same for a non-clustered or a physical solution:
The first step is to build a balanced system – whilst it’s inevitable that there will be a bottleneck somewhere in the architecture, a balanced design allows workloads to be mixed to even out the overall demand – at least, that’s the intention. Using commodity hardware, it should be possible to provide a balance between cost and performance using the following configuration for the physical cluster nodes:
4-way quad core (16 core in total) Intel Xeon or AMD Opteron-based server (with Intel VT/AMD-V and NX/XD processor support).
2GB RAM per processor core minimum (4GB per core recommended).
4Gbps Fibre Channel storage solution.
Gigabit Ethernet NIC (onboard) for virtual machine management and migration/cluster heartbeat.
Quad-port gigabit Ethernet PCI Express NIC for virtual machine access to the network.
Windows Server 2008 x64 enterprise or datacenter edition (server core installation).
Ensure that Active Directory is available (at least one physical DC is required in order to get the virtualised infrastructure up and running).
Build the physical servers that will provide the virtualisation farm (16 servers).
Configure the SAN storage.
Provision the physical servers using System Center Configuration Manager (again, a physical server will be required until the cluster is operational) – the servers should be configured as a 14 active/2 passive node failover cluster.
Configure System Center Virtual Machine Manager for virtual machine provisioning, including the necessary PowerShell scripts and the virtual machine repository.
Configure System Center Operations Manager (for health monitoring – both physical and virtual).
Configure System Center Data Protection Manager for virtual machine snapshots (i.e. use snapshots for backup).
Replicate snapshots to another site within the SAN infrastructure (i.e. provide location redundancy).
This is a pretty high-level view, but it shows the basic steps in order to create a large failover cluster with the potential to run many virtualised workloads. The basic principles are there and the solution can be scaled down or out as required to meet the needs of a particular organisation.
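The sizing above can be sanity-checked with a little arithmetic. The sketch below estimates how many virtual machines the active cluster nodes could host; the per-VM sizes, the host memory reserve and the assumption of no CPU overcommit are illustrative figures of my own, not vendor guidance:

```python
# Rough capacity model for the cluster described above. VM sizes and the
# host reserve are illustrative assumptions only.

CORES_PER_NODE = 16    # 4-way quad core
RAM_PER_CORE_GB = 4    # recommended figure from the specification above
ACTIVE_NODES = 14      # 14 active / 2 passive failover cluster

def cluster_capacity(vcpus_per_vm=2, ram_per_vm_gb=4, host_reserve_gb=4):
    """Estimate total VM count across the active nodes, taking the
    tighter of the CPU and RAM constraints per node."""
    ram_per_node = CORES_PER_NODE * RAM_PER_CORE_GB - host_reserve_gb
    vms_by_cpu = CORES_PER_NODE // vcpus_per_vm   # 1 vCPU : 1 core, no overcommit
    vms_by_ram = ram_per_node // ram_per_vm_gb
    return min(vms_by_cpu, vms_by_ram) * ACTIVE_NODES

print(cluster_capacity())   # 112 VMs with these assumptions
```

With smaller VMs (1 vCPU, 2GB RAM) the same model gives 224 VMs – which shows how quickly the CPU constraint, rather than memory, becomes the bottleneck once overcommit is ruled out.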
For a while now, there have been various third-party screenshot utilities around for jailbroken iPhones, but there is also a built-in function in the iPhone 2.0 software (I haven’t tried it on earlier versions). Just press the home and sleep keys together: the screen will briefly flash and a .PNG file will be created in the camera roll. Then hook the iPhone up to a computer with the supplied USB cable and use your chosen application to download the picture to the computer (just as you would for camera images). (via Alex Coles)
The example image in this post shows the GPS trace for where I am sitting right now… and it’s only about 4 metres north-east of where I really am. For those people who say that GPS on the iPhone is unnecessary, I’d point out that it’s a lot better than my v1.1.4 iPhone, which thought I lived in a field about a mile and a half south-east of here…
If I’m going to use location-aware services, I’d like them to be aware of where I am (rather than where the local farmer’s sheep are). That’s why GPS in an iPhone is worthwhile – regardless of the time it takes to lock on (which seems to be a very long time) and the consequential hit on battery life.
The whole point of high availability is ensuring that there is no single point of failure. In addition to hardware redundancy (RAID on storage, multiple power supplies, redundant NICs, etc.) consideration should be given to operating system or application-level redundancy.
For some applications, redundancy is inherent:
Active Directory uses a multiple-master replicated database.
Exchange Server 2007 offers various replication options (local, clustered or standby continuous replication).
SQL Server 2008 has enhanced database mirroring.
Other applications may be more suited to the provision of redundancy in the infrastructure – either using failover clusters (e.g. for SQL Server 2005, file and print servers, virtualisation hosts, etc.) or with network load balancing (NLB) clusters (e.g. ISA Server, Internet Information Services, Windows SharePoint Services, Office Communications Server, read-only SQL Server, etc.) – in many cases the choice is made by the application vendor as some applications (e.g. ISA Server, SCOM and SCCM) are not cluster-friendly.
Failover clustering (the new name for Microsoft Cluster Service) is greatly improved in Windows Server 2008, with simplified support (the cluster hardware compatibility list has been replaced by a cluster validation tool, although the hardware must still be certified for Windows Server 2008), support for more nodes (the maximum is up from 8 to 16), support for multiple-subnet geoclusters and IPv6, as well as new management tools and enhanced security.
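As an aside on node counts, the arithmetic behind a node-majority quorum is simple enough to sketch. This covers node majority only – in practice a witness disk or file share witness can contribute an additional vote, which this illustration ignores:

```python
# Node-majority quorum arithmetic for a failover cluster: the cluster
# keeps running while more than half of the nodes can communicate.

def votes_needed(total_nodes):
    """Minimum number of surviving nodes for the cluster to retain quorum."""
    return total_nodes // 2 + 1

def tolerable_failures(total_nodes):
    """How many node failures the cluster can survive."""
    return total_nodes - votes_needed(total_nodes)

print(votes_needed(16))        # 9 of 16 nodes must remain in contact
print(tolerable_failures(16))  # so up to 7 node failures are tolerated
```

Note that an even node count wastes a vote – a 16-node cluster tolerates the same 7 failures as a 15-node one, which is one reason witness configurations exist.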
In the final post in this series, I’ll take a look at how to build an infrastructure for data centre consolidation.
When I bought my first iPhone, I bought a rubber case to protect it (and a set of screen protectors). After a few months, the rubber case split, so I bought a polycarbonate case instead. And when I went to sell the phone, I removed it from the case and found that it was still scratched – despite having spent around £50 in total on the various protective accessories. With my new iPhone 3G, I decided to try something different, as I knew that one of my friends had been pleased with his InvisibleSHIELD.
InvisibleSHIELD is a clear protective film that is applied to the device – so it looks just as the original manufacturer intended (albeit in a strange wrapper) rather than in an external case with questionable aesthetics and which may restrict your ability to use your device with certain accessories. Each InvisibleSHIELD is cut to size for a particular device (be it a laptop, phone, GPS, PDA or even a watch). Furthermore, if you need to remove the film (e.g. to sell the device in as new condition – as my friend Alex did with his iPhone), then it easily detaches and leaves no stickiness behind.
I had two InvisibleSHIELDs to install – first up, I protected my 30GB iPod with video (which went very smoothly) and then I tried my iPhone 3G (which was very difficult). The best piece of advice I was given was to watch the videos first. It’s not complex – but there is definitely a technique – and I would have paid someone to do my iPhone if I’d known they could do it well (unfortunately, the curved back of the iPhone makes the film very difficult to apply and I have a couple of air bubbles that I missed as I fought to get all the edges and corners stuck down in the right places). I’m now following the manufacturer’s instructions and leaving the devices alone whilst the ShieldSpray application solution dries.
On the whole, I’m pleased with my InvisibleSHIELDs. Of course, they are not completely invisible – as with any adhesive film, there’s some extra glare on the screen of my iPod now – and, as mentioned previously, the iPhone protector was difficult to install. But I can use the devices without a case getting in the way (for instance, the iPod no longer needs to be removed from a case to go into my speaker system, or onto the dock connector in my wife’s car). The feel of the shield also provides some slight friction against a desk, or the palm of my hand, making it less likely to slide away. There is one significant flaw in the design, though – the points on the device that remain exposed after the shield is in place are the corners – i.e. those areas most likely to get scratched up if the device does take a tumble.
Would I buy another InvisibleSHIELD? Almost certainly, yes. In fact, if I ever get to the point where my MacBook goes out and about with me more, I’ll probably buy one to protect it. It’s a low-cost solution with a high value. Even so, I’m a perfectionist and if there were a local distributor who would fit a shield for me (with no air bubbles and all edges perfectly lined up) then I’d pay an extra £20 for that service (on an iPhone 3G, at least!).
The main security challenges which organisations are facing today include: management of access rights; provisioning and de-provisioning (with various groups of users – internal, partners and external); protecting the network boundaries (as there is a greater level of collaboration between organisations); and controlling access to confidential data.
Most organisations today need some level of integration with partners and the traditional approach has been one of:
NT Trusts (rarely used externally) – not granular enough.
Shadow accounts with matching usernames and passwords – difficult to administer.
Proxy accounts shared by multiple users – with no accountability and a consequential lack of security.
Federated rights management is a key piece of the “cloud computing” model and allows two organisations to trust one another (cf. an NT trust) but without the associated overheads – and with some granularity. The federated trust is loosely coupled – meaning that there is no need for a direct mapping between users and resources – instead, an account federation server exists on one side of the trust and a resource federation server exists on the other.
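The loose coupling can be sketched as a claims flow: the account federation server signs a set of claims about one of its users, and the resource federation server accepts those claims without holding any user accounts of its own. Real federation products use standards such as SAML or WS-Federation with public-key signatures; the HMAC shared key below is purely to keep the illustration self-contained, and the names are made up:

```python
# Toy illustration of a loosely-coupled federated trust.
import hmac
import hashlib
import json

TRUST_KEY = b"shared-federation-trust-key"  # established when the trust is created

def issue_token(user, claims):
    """Account-side: sign a set of claims about a local user."""
    payload = json.dumps({"user": user, "claims": claims}, sort_keys=True)
    sig = hmac.new(TRUST_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload, sig

def accept_token(payload, sig):
    """Resource-side: trust the claims if the signature verifies -
    no shadow or proxy account is needed for the remote user."""
    expected = hmac.new(TRUST_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token, sig = issue_token("alice@partner.example", ["purchasing"])
print(accept_token(token, sig))         # genuine token is accepted
print(accept_token(token, "tampered"))  # forged signature is rejected
```

Contrast this with the shadow-account approach above: here the resource side stores nothing per-user, so provisioning and de-provisioning stay entirely on the account side of the trust.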
As information is shared with customers and partners traditional location-based methods of controlling information (firewalls, access control lists and encryption) have become ineffective. Users e-mail documents back and forth, paper copies are created as documents are printed, online data storage has become available and portable data storage devices have become less expensive and more common with increasing capacities. This makes it difficult to set a consistent policy for information management and then to manage and audit access. It’s almost inevitable that there will be some information loss or leakage.
(Digital) rights management is one solution – most people are familiar with DRM on music and video files from the Internet, and the same principles may be applied to IT infrastructure. Making use of 128-bit encryption together with policies for access and usage rights, rights management provides persistent protection to control access across the information lifecycle. Policies are embedded within the document (e.g. the ability to print, view, edit, or forward it – or even its expiration) and access is only granted to trusted identities. It seems strange to me that we are all so used to the protection of assets with perceived worth to consumers, yet commercial and government documentation is so often left unsecured.
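The core idea – policy travelling with the document, sealed against tampering, with rights granted only to named identities – can be shown in a few lines. A real rights management deployment also encrypts the content itself (e.g. with AES-128) and uses a licensing server; this stdlib-only sketch, with made-up names and keys, demonstrates just the tamper-evident policy envelope:

```python
# Minimal sketch of an embedded, sealed usage policy.
import hmac
import hashlib
import json

SERVER_KEY = b"rights-management-server-key"  # hypothetical licensing key

def protect(document, policy):
    """Bundle the document with its per-identity usage rights and seal it."""
    envelope = json.dumps({"doc": document, "policy": policy}, sort_keys=True)
    seal = hmac.new(SERVER_KEY, envelope.encode(), hashlib.sha256).hexdigest()
    return {"envelope": envelope, "seal": seal}

def can(identity, right, protected):
    """Check a usage right (view/print/edit/forward) for an identity."""
    expected = hmac.new(SERVER_KEY, protected["envelope"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(protected["seal"], expected):
        return False  # policy has been tampered with - deny everything
    policy = json.loads(protected["envelope"])["policy"]
    return right in policy.get(identity, [])

doc = protect("Q3 forecast", {"alice@corp.example": ["view", "print"],
                              "bob@corp.example": ["view"]})
print(can("alice@corp.example", "print", doc))  # permitted by policy
print(can("bob@corp.example", "print", doc))    # denied - view only
```

Because the policy is part of the sealed envelope, it persists wherever the document travels – which is exactly what location-based controls like firewalls and ACLs cannot offer.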
Of course, security should be all-pervasive, and this post has just scratched the surface, looking at a couple of challenges faced by organisations as network boundaries are eroded by increased collaboration. In the next post of this series, I’ll take a look at some of the infrastructure architecture considerations for providing high availability solutions.
Virtualisation is a huge discussion point right now but before rushing into a virtualisation solution it’s important to understand what the business problem is that needs to be solved.
If an organisation is looking to reduce data centre hosting costs through a reduction in the overall heat and power requirements, then virtualisation may help – but if it wants to run applications that rely on legacy operating system releases (like Windows NT 4.0), then the real problem is one of support: the operating system (and probably the application too) is unsupported, regardless of whether or not you can physically touch the underlying platform!
Even if virtualisation does look like it could be a solution (or part of one), it’s important to consider management – I often come up against situations where, for operational reasons, virtualisation is more expensive because the operations teams see the host machine (even if it is just a hypervisor) as another server that needs to be administered. That’s a rather primitive way to look at things, but there is a real issue there – management is the most important consideration in any virtualisation solution.
Microsoft believes that it has a strong set of products when it comes to virtualisation, splitting the technologies out as server, desktop, application and presentation virtualisation, all managed with products under the System Center brand.
Perhaps the area where Microsoft is weakest at the moment is desktop virtualisation (relying on partners like Citrix and Quest to provide a desktop broker service). Having said that, it’s worth considering the market for a virtualised desktop infrastructure – with notebook PC sales outstripping those of desktops, it could be viewed as a niche market. This is further complicated by the various alternatives to a virtual desktop running on a server somewhere: remote boot of a disk-less PC from a SAN; blade PCs (with an RDP connection from a thin client); or a server-based desktop (e.g. using presentation virtualisation).
Presentation virtualisation is also a niche technology, as it failed to oust so-called “thick client” technologies from the infrastructure. Even so, it’s not uncommon (think of it as a large niche – if that’s not an oxymoron!) and it works particularly well where a large volume of data needs to be accessed in a central database, as the remote desktop client is local to the data rather than to the (possibly remote) user. This separation of the running application from the point of control allows for centralised data storage and a lower cost of management for applications (including session brokering capabilities). Using new features in Windows Server 2008 (or third-party products on older releases of Windows Server), this may be further enhanced with gateways for RPC/HTTPS access (avoiding the need for a full VPN solution) and with web access/RemoteApp sessions (terminal server sessions which appear as locally-running applications).
The main problem with presentation virtualisation is incompatibility between applications, or between the desktop operating system and an application (which, for many, is the main barrier to Windows Vista deployment) – that’s where application virtualisation may help. Microsoft Application Virtualization (App-V – formerly known as SoftGrid) attempts to solve this issue of application-to-application incompatibility as well as aiding application deployment (with no requirement to test for application conflicts). To do this, App-V virtualises the application configuration (removing it from the operating system) and each application runs in its own runtime environment with complete isolation. This means that applications can run on clients without being “installed” (so it’s easy to remove unused applications) and allows administration from a central location.
The latest version of App-V is available for a full infrastructure (Microsoft System Center Application Virtualization Management Server), a lightweight infrastructure (Microsoft System Center Application Virtualization Streaming Server) or in MSI-delivered mode (Microsoft System Center Application Virtualization Standalone Mode).
Finally, there’s host (or server) virtualisation – the most common form of virtualisation, but still deployed on only a fraction of the world’s servers – although there are few organisations that would not virtualise at least part of their infrastructure, given a green-field scenario.
The main problems which host virtualisation can address are:
Optimising server investments by consolidating roles (driving up utilisation).
Business continuity management – a whole server becomes a few files, making it highly portable (albeit introducing security and management issues to resolve).
Dynamic data centre.
Test and development.
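The consolidation case above is essentially a packing problem: fit lightly-used workloads onto as few hosts as possible to drive up utilisation. A first-fit-decreasing sketch makes the point – the utilisation figures and the 80% per-host headroom are made-up illustrative numbers, and real capacity planning would also consider memory, I/O and peak coincidence:

```python
# First-fit-decreasing packing of workload CPU utilisations onto hosts.

def consolidate(workloads, host_capacity=0.8):
    """Pack workloads (fractional CPU utilisations) onto hosts,
    largest first; host_capacity leaves headroom for peaks."""
    hosts = []
    for load in sorted(workloads, reverse=True):
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)   # fits on an existing host
                break
        else:
            hosts.append([load])    # open a new host
    return hosts

# Ten physical servers averaging 5-25% CPU collapse onto two hosts
workloads = [0.05, 0.10, 0.25, 0.15, 0.08, 0.20, 0.12, 0.07, 0.18, 0.10]
print(len(consolidate(workloads)))  # 2
```

Ten under-utilised servers becoming two well-utilised hosts is exactly the heat, power and hosting-cost argument made at the start of this post.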
Most 64-bit editions of Windows Server 2008 have enterprise-ready virtualisation built in (in the shape of Hyper-V) and competitor solutions are available for a 32-bit environment (although most hardware purchased in recent years is 64-bit capable and has the necessary processor support). Windows NT is not supported on Hyper-V, however – so if there are legacy NT-based systems to virtualise, then Virtual Server 2005 R2 may be a more appropriate technology selection (NT 4.0 is still out of support, but at least it is a scenario that has been tested by Microsoft).
In the next post in this series, I’ll take a look at some of the infrastructure architecture considerations relating to security.