System Center Operations Manager 2007 R2 is released to manufacturing

This content is 15 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

In my recent blog post about the R2 wave of Microsoft Virtualisation products, I mentioned the management products too – including System Center Operations Manager (SCOM) 2007 R2 (which, at the time, was at the release candidate stage). That changes today, as Microsoft will announce that SCOM 2007 R2 has been released to manufacturing.

New features in SCOM 2007 R2 include:

  • “Enhanced application performance and availability across platforms in the datacenter through cross platform monitoring, delivering an integrated experience for discovery and management of systems and their workloads, whether Windows, UNIX or Linux.
  • Enhanced performance management of applications in the datacenter with service level monitoring, delivering the ability to granularly define service level objectives that can be targeted against the different components that comprise an IT service.
  • Increased speed of access to monitoring information and functionality with user interface improvements and simplified management pack authoring. Examples include an enhanced console performance and dramatically improved monitoring scalability (e.g., over 1000 URLs can be validated per agent, allowing scaling to the largest of web-based workloads).”
Further details are available on the Microsoft website.

    Microsoft Virtualization: the R2 wave


    The fourth Microsoft Virtualisation User Group (MVUG) meeting took place last night and Microsoft’s Matt McSpirit presented a session on the R2 wave of virtualisation products. I’ve written previously about some of the things to expect in Windows Server 2008 R2 but Matt’s presentation was specifically related to virtualisation and there are some cool things to look forward to.

    Hyper-V in Windows Server 2008 R2

    At last night’s event, Matt asked the UK User Group what they saw as the main limitations in the original Hyper-V release and the four main ones were:

    • USB device support
    • Dynamic memory management (ballooning)
    • Live Migration
    • 1 VM per storage LUN

    Hyper-V R2 does not address all of these (regardless of feedback, the product group is still unconvinced about the need for USB device support… and dynamic memory was pulled from the beta – it’s unclear whether it will make it back in before release) but live migration is in and Windows finally gets a clustered file system in the 2008 R2 release.

    So, starting out with clustering – a few points to note:

    • For the easiest support path, look for cluster solutions on the Windows Server Catalog that have been validated by Microsoft’s Failover Cluster Configuration Program (FCCP).
    • FCCP solutions are recommended by Microsoft but are not strictly required for support, as long as all of the components (i.e. server and SAN) are certified for Windows Server 2008. A failover clustering validation report will still be required; FCCP simply provides another level of confidence.
    • When looking at cluster storage, fibre channel (FC) and iSCSI are the dominant SAN technologies. With 10Gbps Ethernet coming onstream, iSCSI looked ready to race ahead, with the advantage of using standard Ethernet hardware (which is why Dell bought EqualLogic and HP bought LeftHand Networks), but then Fibre Channel over Ethernet came along, which is potentially even faster (as outlined in a recent RunAs Radio podcast).

    With a failover cluster, Hyper-V has always been able to offer high availability for unplanned outages – just as VMware do with their HA product (although Windows Server 2008 Enterprise or Datacenter Editions were required – Standard Edition does not include failover clustering).

    For planned outages, quick migration offered the ability to pause a virtual machine and move it to another Hyper-V host but there was one significant downside of this. Because Microsoft didn’t have a clustered file system, each storage LUN could only be owned by one cluster node at a time (a “shared nothing” model). If several VMs were on the same LUN, all of them needed to be managed as a group so that they could be paused, the connectivity failed over, and then restarted, which slowed down transfer times and limited flexibility. The recommendation was for 1 LUN per VM and this doesn’t scale well with tens, hundreds, or thousands of virtual machines although it does offer one advantage as there is no contention for disk access. Third party clustered file system solutions are available for Windows (e.g. Sanbolic Melio FS) but, as Rakesh Malhotra explains on his blog, these products have their limitations too.

    Windows Server 2008 R2 Hyper-V can now provide live migration for planned failovers – so Microsoft finally has an alternative to VMware VMotion (at no additional cost). This is made possible by the new cluster shared volumes (CSV) feature, whose IO fault tolerance (dynamic IO) overcomes the limitations of the shared nothing model and allows up to 256TB per LUN, running on NTFS with no need for third-party products. The VM is still stored on a shared storage volume and, at the time of failover, memory is scanned for dirty pages whilst the VM is still running on the source cluster node. Using an iterative process of scanning memory for dirty pages and transferring them to the target node (over a dedicated network link), the memory contents are copied until so few pages remain that the last few can be sent and control passed to the target node in a fraction of a second, with no discernible downtime (including ARP table updates to maintain network connectivity).
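
    To make the iterative pre-copy process concrete, here’s a toy model of the loop just described – a minimal sketch in Python, where the page count, dirtying rate and cut-over threshold are all invented for illustration and the transfer functions are stand-ins:

    ```python
    import random

    PAGES = 10_000                 # guest memory, in pages (invented)
    CUTOVER_THRESHOLD = 50         # assumed point at which we pause and finish

    def dirtied_during(pages_sent):
        """Pretend the running guest re-dirties a fraction of what was just sent."""
        return {p for p in pages_sent if random.random() < 0.3}

    def send_to_target(pages):
        pass                       # stand-in for the dedicated network link

    def live_migrate():
        to_send = set(range(PAGES))        # first pass copies all memory
        passes = 0
        while len(to_send) > CUTOVER_THRESHOLD:
            send_to_target(to_send)        # VM keeps running on the source node
            to_send = dirtied_during(to_send)
            passes += 1
        # Final, very short pause: send the last few pages, update ARP tables
        # and hand control to the target node.
        send_to_target(to_send)
        print(f"{passes} pre-copy passes; {len(to_send)} pages in the final pause")

    live_migrate()
    ```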

    Allowing multiple cluster nodes to access a shared LUN is as simple as marking the LUN as a CSV in the Failover Clustering MMC snap-in. Each node has a consistent namespace for LUNs, so as many VMs as required may be stored on a CSV (although all nodes must use the same letter for the system drive – e.g. C:). Each CSV appears as an NTFS mount point, e.g. C:\ClusterStorage\Volume1, and even though the volume is only mounted on one node, distributed file access is co-ordinated through another node so that the VM can perform direct IO. Dynamic IO ensures that, if the SAN (or Ethernet) connection fails, IO is re-routed accordingly and, if the owning node fails, volume ownership is redirected accordingly. CSV is based on two assumptions (that data read/write requests far outnumber metadata access/modification requests; and that concurrent multi-node cached access to files is not needed for files such as VHDs) and is optimised for Hyper-V.

    At a technical level, CSVs:

    • Are implemented as a file system mini-filter driver, pinning files to prevent block allocation movement and tracking the logical-to-physical mapping information on a per-file basis, using this to perform direct reads/writes.
    • Enable all nodes to perform high-performance direct reads/writes to all clustered storage; read/write IO performance to a volume is the same from any node.
    • Use SMB v2 connections for all namespace and file metadata operations (e.g. to create, open, delete or extend a file).
    • Need:
      • No special hardware requirements.
      • No special application requirements.
      • No file type restrictions.
      • No directory structure or depth limitations.
      • No special agents or additional installations.
      • No proprietary file system (using the well established NTFS).

    Live migration and clustered storage are major improvements but other new features for Hyper-V R2 include:

    • 32 logical processor (core) support, up from 16 at RTM and 24 with a hotfix (to support 6-core CPUs), so that Hyper-V will now support up to four 8-core CPUs (and I would expect this to be increased as multi-core CPUs continue to develop).
    • Core parking to allow more intelligent use of processor cores – putting them into a low power suspend state if the workload allows (configurable via group policy).
    • The ability to hot add/remove storage so that additional VHDs or pass-through disks may be assigned to running VMs if the guest OS supports the Hyper-V SCSI controller (which should cover most recent operating systems, but not Windows XP 32-bit or Windows 2000).
    • Second Level Address Translation (SLAT) to make use of new virtualisation technologies from Intel (Intel VT extended page tables) and AMD (AMD-V nested paging) – more details on these technologies can be found in Johan De Gelas’s hardware virtualisation article at AnandTech.
    • Boot from VHD – allowing virtual hard disks to be deployed to virtual or physical machines.
    • Network improvements (jumbo frames to allow larger Ethernet frames and TCP offload for on-NIC TCP/IP processing).

    Hyper-V Server

    So that’s covered the Hyper-V role in Windows Server 2008 R2 but what about its baby brother – Hyper-V Server 2008 R2? The good news is that Hyper-V Server 2008 R2 will have the same capabilities as Hyper-V in Windows Server 2008 R2 Enterprise Edition (previously it was based on Standard Edition), allowing access to up to 1TB of memory, 32 logical cores, hot addition/removal of storage, and failover clustering (with cluster shared volumes and live migration). It’s also free, and requires no dedicated management product, although it does need to be managed using the RSAT tools for Windows Server 2008 R2 or Windows 7 (Microsoft’s advice is never to manage an uplevel operating system from a downlevel client).

    With all that for free, why would you buy Windows Server 2008 R2 as a virtualisation host? The answer is that Hyper-V Server does not include licenses for guest operating systems as Windows Server 2008 Standard, Enterprise and Datacenter Editions do; it is intended for running non-Windows workloads in a heterogeneous datacentre standardised on Microsoft virtualisation technologies.

    Management

    The final piece of the puzzle is management:

    There are a couple of caveats to note: the SCVMM 2008 R2 features mentioned are those in the beta – more can be expected at final release; and, based on previous experience when Hyper-V RTMed, there may be some incompatibilities between the beta of SCVMM and the release candidate of Windows Server 2008 R2 Hyper-V (expected to ship soon).

    SCVMM 2008 R2 is not a free upgrade – but most customers will have purchased it as part of the Server Management Suite Enterprise (SMSE) and so will benefit from the two years of software assurance included within the SMSE pricing model.

    Wrap-up

    That’s about it for the R2 wave of Microsoft Virtualization – for the datacentre at least – but there are a lot of improvements in the upcoming release. Sure, there are things that are missing (memory ballooning may not be a good idea for server consolidation, but it will be needed for any kind of scalability with VDI – and using RDP as a workaround for USB device support doesn’t always cut it) and I’m sure there will be a lot of noise about how VMware can do more with vSphere but, as I’ve said previously, VMware costs more too – and I’d rather have most of the functionality at a much lower price point (unless one or more of those extra features will make a significant difference to the business case). Of course there are other factors too – like maturity in the market – but Hyper-V is not far off its first anniversary and, other than a couple of networking issues on guests (which were fixed), I’ve not heard anyone complaining about it.

    I’ll write more about Windows 7 and Windows Server 2008 R2 virtualisation options (i.e. client and server) as soon as I can but, based on a page which briefly appeared on the Microsoft website, the release candidate is expected to ship next month and, after reading Paul Thurrott’s post about a forthcoming Windows 7 announcement, I have a theory (and that’s all it is right now) as to what a couple of the Windows 7 surprises may be…

    Microsoft Virtualization: part 6 (management)


    Today’s release of System Center Virtual Machine Manager 2008 is a perfect opportunity to continue my series of blog posts on Microsoft Virtualization technologies by highlighting the management components.

    Microsoft view of virtualisation

    System Center is at the heart of the Microsoft Virtualization portfolio and this is where Microsoft’s strength lies as management is absolutely critical to successful implementation of virtualisation technologies. Arguably, no other virtualisation vendor has such a complete management portfolio for all the different forms of virtualisation (although competitors may have additional products in certain niche areas) – and no-one else that I’m aware of is able to manage physical and virtual systems in the same tools and in the same view:

    • First up, is System Center Configuration Manager (SCCM) 2007, providing patch management and deployment; operating system and application configuration management; and software upgrades.
    • System Center Virtual Machine Manager (SCVMM) provides virtual machine management, server consolidation and resource utilisation optimisation, as well as physical to virtual (P2V) and limited virtual to virtual (V2V) conversion (predictably, from VMware to Microsoft, but not back again).
    • System Center Operations Manager (SCOM) 2007 (due for a second release in the first quarter of 2009) provides end-to-end service management; server and application health monitoring and management (regardless of whether the server is physical or virtual); and performance monitoring and analysis.
    • System Center Data Protection Manager (SCDPM) completes the picture, providing live host virtual machine backup with in-guest consistency and rapid recovery (basically, quiescing VMs, before taking a snapshot and restarting the VM whilst backup continues – in a similar manner to VMware Consolidated Backup but also with the ability to act as a traditional backup solution).

    But hang on – isn’t that four products to license? Yes, but there are ways to do this in a very cost-effective manner – albeit requiring some knowledge of Microsoft’s licensing policies which can be very confusing at times, so I’ll have a go at explaining things…

    From the client management license perspective, SCCM is part of the core CAL suite that is available to volume license customers (i.e. most enterprises who are looking at Microsoft Virtualization). In addition, the Enterprise CAL suite includes SCOM (and many other products).

    Looking at server management and quoting a post I wrote a few months ago about licensing System Center products:

    The most cost-effective way to license multiple System Center products is generally through the purchase of a System Center server management suite licence.

    Unlike SCVMM 2007 (which was only available as part of the SMSE), SCVMM 2008 is available as a standalone product but, based on Microsoft’s example pricing, SCVMM 2008 (at $1304) is only marginally less expensive than the SMSE (at $1497) – both quoted prices include two years of software assurance. For reference, the lowest price for VMware Virtual Center Management Server (VCMS) on the VMware website this morning is $6044 – not a direct comparison, as it includes 1 year of Gold 12×5 support, but considerably more expensive for lower functionality.
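
    For a rough feel of those numbers (bearing in mind the bundles aren’t like-for-like):

    ```python
    # Rough comparison of the list prices quoted above. Not like-for-like:
    # the Microsoft figures include two years of software assurance, while
    # the VMware figure includes one year of Gold 12x5 support.
    scvmm, smse, vcms = 1304, 1497, 6044

    print(f"SMSE premium over standalone SCVMM: ${smse - scvmm}")   # $193
    print(f"VCMS vs SMSE: {vcms / smse:.1f}x the price")            # ~4.0x
    ```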

    It should be noted that the SMSE is virtualisation-technology-agnostic and grants unlimited virtualisation rights. By assigning an SMSE to the physical server, it can be:

    • Patched/updated (SCCM).
    • Monitored (SCOM).
    • Backed Up (SCDPM).
    • VMM host (SCVMM).
    • VMM server (SCVMM).

    One of the advantages of using SCVMM and SCOM together is the performance and resource optimisation (PRO) functionality. Stefan Stranger has a good example of PRO in a blog post from earlier this year – basically, SCVMM uses the management pack framework in SCOM to detect issues with the underlying infrastructure and suggest appropriate actions for an administrator to take – for example, moving a virtual machine workload to another physical host (as demonstrated by Dell integrating SCVMM with their hardware management tools at the Microsoft Management Summit earlier this year).
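
    Conceptually, a PRO tip is just monitoring data turned into a suggested (or automated) virtualisation action. Here’s a purely illustrative sketch of that pattern – this is not the SCVMM/SCOM API, and the names and threshold are invented:

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class HostAlert:
        host: str
        counter: str
        value: float

    def pro_tip(alert: HostAlert, threshold: float = 90.0) -> Optional[str]:
        # Illustrative pseudo-logic only: an alert raised by the monitoring
        # side (SCOM) becomes a recommended action on the virtualisation side
        # (SCVMM). The counter name and 90% threshold are invented.
        if alert.counter == "cpu_percent" and alert.value > threshold:
            return f"Recommend: migrate the busiest VM off host {alert.host}"
        return None

    print(pro_tip(HostAlert("hyperv01", "cpu_percent", 97.0)))
    ```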

    I’ll end this post with a table which shows the relative feature sets of VMware Virtual Infrastructure Enterprise and the Windows Server 2008 Hyper-V/Server Management Suite Enterprise combination:

    Feature | VMware Virtual Infrastructure Enterprise | Microsoft Windows Server 2008/Server Management Suite Enterprise
    Bare-metal hypervisor | ESX/ESXi | Hyper-V
    Centralised VM management | Virtual Center | SCVMM
    Manage ESX/ESXi and Hyper-V | – | SCVMM
    VM backup | VCB | SCDPM
    High availability/failover | Virtual Center | Windows Server Clustering
    VM migration | VMotion | Quick Migration
    Offline VM patching | Update Manager | VMM (with Offline Virtual Machine Servicing Tool)
    Guest operating system patching/configuration management | – | SCCM
    End-to-end operating system monitoring | – | SCOM
    Intelligent placement | DRS | SCVMM
    Integrated physical and virtual management | – | SMSE

    This table is based on one from Microsoft and, in fairness, there are a few features that VMware would cite that Microsoft doesn’t yet have (memory management and live migration are the usual ones). It’s true to say that VMware is also making acquisitions and developing products for additional virtualisation scenarios (and has a new version of Virtual Infrastructure on the way – VI4) but the features and functionality in this table are the ones that the majority of organisations will look for today. VMware has some great products (read my post from the recent VMware Virtualization Forum) – but if I was an IT Manager looking to virtualise my infrastructure, then I’d be thinking hard about whether I really should be spending all that money on the VMware solution, when I could use the same hardware with less expensive software from Microsoft – and manage my virtual estate using the same tools (and processes) that I use for the physical infrastructure (reducing the overall management cost). VMware may have maturity on their side but, when push comes to shove, the total cost of ownership is going to be a major consideration in any technology selection.

    Heterogeneous datacentre management from Microsoft System Center


    Back in 2005, I quoted a Microsoft executive on his view on Microsoft’s support for heterogeneous environments through its management products:

    “[it’s] not part of our DNA and I don’t think this is something that we should be doing.”

    Well, maybe things are changing in post-Gates Microsoft. I knew that System Center Virtual Machine Manager 2008 (named at last week’s Microsoft Management Summit) included support for managing VMware ESX Server and a future version should also be able to manage XenSource hosts, but what I had missed in the MMS press release was System Center Operations Manager (SCOM) 2007 Cross Platform Extensions. These allow SCOM to manage HP-UX, Red Hat Enterprise Linux (RHEL), Sun Solaris and SUSE Linux Enterprise Server through management packs with Novell, Quest and Xandros adding support for common applications like Apache, MySQL and (a real surprise) Oracle. Then, for those with existing investments in major enterprise management suites, there are SCOM connectors to allow interoperability between System Center and third-party products like HP OpenView and IBM Tivoli.

    I really think this is a brave step for Microsoft – but also the right thing to do. There are very few Microsoft-only datacentres and, whilst I am no enterprise management expert, it seems to me that corporates don’t want one solution for each platform and the big enterprise management suites are costly to implement. With System Center, people know what they are getting – a reasonably priced suite of products, with a familiar interface and a good level of functionality – maybe not everything that’s in Tivoli, Unicenter or OpenView, but enough to do the job. If the same solution that manages the Wintel systems can also manage the enterprise apps on Solaris (or another common Unix platform), then everyone’s a winner.

    Looking forward to Windows Server Virtualization


    Okay, I’m English, so I spell virtualisation with an “s” but Windows Server Virtualization is a product name, so I guess I’m going to have to get used to the “z” in the title of this post…

    Over the last year or so, much of my work has been concerned with server virtualisation technologies from both Microsoft and VMware (I haven’t really looked at the SWsoft Virtuozzo or Parallels products yet, although I am interested in some of the desktop integration that the latest version of Parallels Desktop for Mac offers). The majority of my efforts have been focused on consolidating server workloads to increase operational efficiency (hence the lack of focus on desktop products) and, even though Microsoft Virtual Server 2005 R2 is a very capable product, it is severely constrained by two main factors – a hosted architecture and a lack of management products – consequently I find myself recommending VMware Virtual Infrastructure because Microsoft doesn’t have a product that can really compete in the enterprise space.

    Yet.

    A couple of years back, I wrote about Microsoft’s intention to move from hosted virtualisation to a hypervisor-based architecture (in VMware product terms, this can be compared to the differences between VMware Server and VMware ESX Server) and Windows Server Virtualization (codenamed Viridian) is the result.

    Last week, I was alerted (by more than one Microsoft contact) to the presence of a video in which Jeff Woolsey – Lead Programme Manager on the Windows Server Virtualization team – demonstrates Windows Server Virtualization and System Center Virtual Machine Manager and, if it does everything that it promises, then I see no reason not to use the Microsoft platform in place of VMware ESX Server and Virtual Center for the majority of enterprise clients.

    I’ve never doubted Microsoft’s ability (given sufficient time) to grab a huge slice of the x86 server virtualisation market and later this year we should see a new version of Windows Server (codenamed Longhorn) arrive along with Windows Server Virtualization. Soon after that, unless VMware produce something pretty fantastic, I predict that we’ll start to see Microsoft increasing its dominance in the enterprise server virtualisation market.

    In the video I mentioned above, Jeff demonstrates that Windows Server Virtualization runs as a role on Windows Server Core (i.e. a lightweight version of the Windows Server operating system using fewer system resources), allowing for an increase in the number of running virtual machines. Because Windows Server Core uses a command line interface for local administration, most access will be achieved using remote management tools (VMware ESX Server users – does this sound familiar?). Microsoft are keen to point out that they can support an eight-core virtual machine, which they consider will be more than enough to cover the vast majority of enterprise-class workloads; however, I imagine that VMware would release a patch to allow this on ESX Server should it become necessary (they already support 4-core virtual SMP).

    Continuing to look at what Windows Server Virtualization will offer, according to Microsoft UK’s James O’Neill:

    • There will be no support for parallel ports and physical floppy disks – floppy disk images will be supported.
    • The remote management protocol will change from VMRC to RDP.
    • The virtualization layer will provide the RDP support (rather than the guest operating system) so there should be no more of a problem getting to the machine’s BIOS or accessing guest operating systems that don’t support RDP than there is today with VMRC.
    • The web console interface has been replaced with an MMC interface.
    • It will not be a chargeable product (as with Virtual Server 2005 R2 and Virtual PC 2004/2007); however, what James doesn’t point out – and what I think is likely – is that the management products (see below) will have a cost attached.
    • Windows Server Virtualization will require 64-bit processors (in common with most of the Longhorn Server wave of products).
    • It will support 64-bit guests.
    • It won’t be back-ported to Server 2003 (even 64-bit).
    • It will support today’s .VHD images.

    What I have not yet managed to ascertain is whether or not Windows Server Virtualization will allow the overcommitment of resources (as VMware ESX Server does today).

    From a management perspective, Microsoft is planning to release a new product – System Center Virtual Machine Manager (VMM) – to manage workloads across both physical and virtual resources, including a centralised console and new functionality for P2V and live migrations. VMM will organise workloads by owner, operating system or user-defined host group (e.g. development, staging and production), as well as providing direct console access to running virtual machines (very like VMware Virtual Center). For the other side of management – monitoring the health and performance of physical and virtual workloads – there will be System Center Operations Manager 2007 (a replacement for MOM).

    In my experience of implementing virtualisation in an enterprise environment it’s not the technology that presents the biggest issues – it’s the operational paradigm shift that is required to make the most of that technology. Overcoming that hurdle requires a strong management solution, and that’s where Microsoft has been putting a lot of work in recent years with the System Center range of products.

    Until now, management has been Virtual Server’s Achilles’ heel. The combination of VMM and Operations Manager will provide a complete solution for both physical and virtual workloads – and that is potentially Microsoft’s unique selling point: competing products from VMware require learning a new set of tools for managing just the virtual infrastructure, whereas Microsoft is trying to make it easy for organisations to leverage their existing investment in Windows Server administration.

    Quoting Mike Neil, Microsoft GM for Virtualisation Strategy in a recent post on where [Microsoft’s] headed with [virtualisation] (via John Howard):

    “We want to make Windows the most manageable virtualization platform by enabling customers to manage both physical and virtual environments using the same tools, knowledge and skills”

    They may just pull it off – Windows Server Virtualization plus Virtual Machine Manager and Operations Manager may not be as all-encompassing as VMware Virtual Infrastructure but it will come close and it’s probably all that many organisations are ready for at the moment.

    Removing MOM’s Active Directory management pack helper object


    A few months back I had a look at Microsoft Operations Manager (MOM) 2005. Then, a couple of weeks back, I noticed that one of my servers had the Microsoft Operations Manager 2005 Agent installed, as well as the Active Directory management pack helper object. I uninstalled the Microsoft Operations Manager 2005 agent from the Add/Remove programs applet in Control Panel, but when I went to remove the helper object I was greeted with the following error (and the MSI Installer logged event ID 11920 in the application log):

    Active Directory Management Pack Helper Object
    Service ‘MOM’ (MOM) failed to start. Verify that you have sufficient privileges to start system services.

    Retrying the operation produced the same error, so I was forced to cancel, then confirm the cancellation, before finally receiving another error message (and the MSI Installer logged event ID 11725 in the application log):

    Add or Remove Programs
    Fatal error during installation.

    The answer was found on the microsoft.public.mom newsgroup – I needed to reinstall the MOM agent before the AD management pack helper object could be removed, but there was a slight complication because I no longer had a MOM server (I deleted my virtual MOM server after finishing my testing). Manual agent installation is possible, but I needed to supply false details for the management group name and management server in order to let the installation take place, with a warning that the agent would keep retrying to contact the server (all other settings were left at their defaults).

    Once the agent installation was complete, it was a straightforward operation to remove the Active Directory management pack helper object, before uninstalling the MOM agent (successfully indicated by MSI Installer event ID 11724 in the application log).

    It’s a simple enough workaround but represents lousy design on the part of the MOM agent/management pack installers – surely any child helper object installations should be identified before a parent agent will allow itself to be uninstalled?

    Microsoft Operations Manager 2005 architecture


    Yesterday, Microsoft UK’s Eileen Brown showed a MOM architecture slide as part of a presentation she was giving on monitoring Exchange Server. I found it a good introduction to the way that MOM 2005 works so I thought I’d reproduce it here:

    MOM 2005 architecture

    Microsoft’s common engineering criteria and Windows Server product roadmap


    I’ve often heard people at Microsoft talk about the common engineering criteria – usually when stating that one of the criteria is that a management pack for Microsoft Operations Manager (MOM) 2005 must be produced with each new server product (at release). A few minutes back, I stumbled across Microsoft’s pages which describe the full common engineering criteria and the associated report on common engineering criteria progress.

    Also worth a read is the Windows Server product roadmap and the Windows service pack roadmap.

    Finally, for an opportunity to provide feedback on Windows Server and to suggest new features, there is the Windows Server feedback page (although there’s no guarantee that a suggestion will be taken forward).

    IT Forum ’05 highlights: part 1


    Microsoft UK IT Forum Highlights
    A few years back, I used to try and persuade my employer to send me to Microsoft TechEd Europe each year, on the basis that lots of 75 minute presentations on a variety of topics provided a better background for me than a few days of in-depth product training (I can build experience later as I actually use the technology). The last time I attended TechEd was back in 2001, by which time it had become more developer-focused and the IT Forum was being positioned as the infrastructure conference (replacing the Microsoft Exchange Conference). For the last couple of years, I haven’t been able to attend the IT Forum due to family commitments (first it clashed with the birth of my son and subsequently it’s been in conflict with his birthday, as it is again this year) but luckily, Microsoft UK has been re-presenting the highlights from IT Forum as free-of-charge TechNet events (spread over two days) and I’ve managed to take some time out to attend them.

    Yesterday’s event covered a variety of topics. Unfortunately there was no concept of different tracks from which I could attend the most relevant/interesting sessions, so some of it went completely over my head. One of those topics was upgrading to SQL Server 2005, so apologies to the presenter – I was the guy nodding off on the front row.

    In the next few paragraphs, I’ll highlight some of the key points from the day.

    Upgrading to SQL Server 2005
    Presented by Tony Rogerson, SQL Server MVP and UK SQL Server Community leader, this session gave useful information for those looking at upgrading from SQL Server 2000 (or earlier) to SQL Server 2005. I’ve blogged previously with a SQL Server 2005 overview, why SQL Server 2005 is such a significant new product and on the new management tools but the key points from Tony’s presentation were:

    • Upgrades (in-place upgrades) are supported, preserving user data and maintaining instance names in a largely automated fashion, as are side-by-side migrations (mostly manual, copying data from an old installation to a new one and then decommissioning the old servers).
    • SQL Server versions prior to 7.0 cannot be migrated directly and SQL Server 7.0/2000 need to be updated to the latest service pack levels before they can be migrated. For SQL Server 2000 that is SP4, which might break some functionality for SP3A users, so the upgrade needs to be carefully planned.
    • The database engine (including subcomponents like the SQL Agent, tools, etc.), analysis services, reporting services and notification services can all be upgraded, and data transformation services can be migrated to integration services.
    • All product editions can be upgraded/migrated (32/64-bit, desktop, workgroup, personal, standard, developer or enterprise editions), as can all SQL Server 7.0/2000 released languages.
    • A smooth upgrade requires a good plan, breaking tasks into:
      • Pre-upgrade tasks.
      • Upgrade execution tasks.
      • Post-upgrade tasks (day 0, day 30, day 90).
      • Backout plan.
    • Microsoft provides the SQL Server 2005 Upgrade Advisor as a free download to analyse instances of SQL Server 7.0 and SQL Server 2000 in preparation for upgrading to SQL Server 2005. This can be used repeatedly until all likely issues have been resolved and the upgrade can go ahead.
    • Migration provides more granular control over the process than an upgrade would, and the presence of old and new installations side-by-side can aid with testing and verification; however, it does require new hardware (although a major investment in a SQL Server upgrade would probably benefit from new hardware anyway) and applications will need to be directed to the new instance. Because the legacy installation remains online, there is complete flexibility to fail back should things not go to plan.
    • Upgrades will be easier and faster for small systems and require no new hardware or application reconfiguration; however the database instances will remain offline during the upgrade and it’s not best practice to upgrade all components (e.g. analysis services cubes).
    • Upgrade tips and best practices include:
      • Reduce downtime by pre-installing setup pre-requisites (Microsoft .NET Framework 2.0, SQL Native Client and setup support files) – some of these are needed for the Upgrade Advisor anyway.
      • If planning a migration using the copy database wizard, place the database in single-user mode (to stop users from modifying the data during the upgrade – see the sketch after this list) and make sure that no applications or services are trying to access the database. Also, do not use read-only mode (this will result in an error) and note that the database cannot be renamed during the operation.
      • Be aware of the reduced surface attack area of SQL Server 2005 – some services and features are disabled for new installations (secure by default) – the surface area configuration tools can be used to enable or disable features and services.
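
    As promised above, a minimal sketch of the single-user-mode step, assuming Python with the pyodbc package and hypothetical server/database names (the equivalent T-SQL can, of course, be run directly from a query window instead):

    ```python
    import pyodbc  # assumes pyodbc and a SQL Server ODBC driver are installed

    # Put the source database into single-user mode before running the copy
    # database wizard (read-only mode would cause an error, per the notes
    # above). "oldsql" and "Sales" are hypothetical names.
    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=oldsql;DATABASE=master;Trusted_Connection=yes",
        autocommit=True,
    )
    conn.execute("ALTER DATABASE [Sales] SET SINGLE_USER WITH ROLLBACK IMMEDIATE")

    # ... run the migration, then restore normal access:
    conn.execute("ALTER DATABASE [Sales] SET MULTI_USER")
    ```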

    Leveraging your Active Directory for perimeter defence
    Presented by Richard Warren, an Internet and security training specialist, this session slightly disappointed me, failing to live up to the promises that its title suggested. After spending way too much time labouring Microsoft’s usual points about a) how packet filtering alone is not enough and ISA Server adds application layer filtering and b) how ISA Server 2004 is much better and much easier to use than ISA Server 2000, Richard finally got down to some detail about how to use existing investments in AD and ISA Server to improve security (but I would have liked to have seen more real-world examples of exactly how to implement best practice). Having been quite harsh about the content, I should add that there were some interesting points in his presentation:

    • According to CERT, 95% of [computer security] breaches [were] avoidable with an alternative configuration.
    • According to Gartner Group, approximately 70% of all web attacks occur at the application layer.
    • Very few organisations are likely to deploy ISA Server as a first line of defence. Even though ISA Server 2004 is an extremely secure firewall, it is more common to position a normal layer 3 (packet filtering) firewall at the network edge and then use ISA Server behind this to provide application layer filtering on the remaining traffic.
    • Users who are frightened of IT don’t cause many problems. Users who think they understand computers cause most of the problems. Users who do know what they are doing are few and far between. (Users are a necessary evil for administrators).
    • Not all attacks are malicious and internal users must not be assumed to be “safe”.
    • ISA Server can be configured to write its logs to SQL Server for analysis.
    • Active Directory was designed for distributed security (domain logon/authentication and granting access to resources/authorisation) but it can also store and protect identities and plays a key role in Windows manageability (facilitating the management of network resources, the delegation of network security and enabling centralised policy control).
    • Using ISA Server to control access to sites (both internal and external), allows monitoring and logging of access by username. If you give users a choice of authenticated access or none at all, they’ll choose authenticated access. If transparent authentication is used with Active Directory credentials, users will never know that they needed a username and password to access a site (this requires the ISA Server to be a member of the domain or a trusted domain, such as a domain which only exists within the DMZ).
    • ISA Server’s firewall engine performs packet filtering and operates in kernel mode. The firewall service performs application layer filtering (extensible via published APIs) and operates in user mode.
    • SSL tunnelling provides a secure tunnel from a client to a server. SSL bridging involves installing the web server’s certificate on the ISA Server, terminating the client connection there and letting ISA server inspect the traffic and handle the ongoing request (e.g. with another SSL connection, or possibly using IPSec). Protocol bridging is similar, but involves ISA server accepting a connection using one protocol (e.g. HTTP) before connecting to the target server with another protocol (e.g. FTP).
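
    To make the SSL bridging idea concrete, here’s a toy sketch in Python: the proxy terminates the client’s TLS session using its own copy of the server certificate, could inspect the plaintext, and then opens a second TLS connection to the real server. The certificate file names and backend host are hypothetical, and all error handling is omitted:

    ```python
    import socket
    import ssl
    import threading

    BACKEND = ("internal-web.example.local", 443)   # hypothetical real server

    def pump(src, dst):
        """Forward bytes one way; application-layer inspection would go here."""
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)

    def serve():
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain("proxy-cert.pem", "proxy-key.pem")  # hypothetical files
        with socket.create_server(("0.0.0.0", 8443)) as listener:
            raw_client, _ = listener.accept()
            client = ctx.wrap_socket(raw_client, server_side=True)  # TLS ends here
            upstream = ssl.create_default_context().wrap_socket(
                socket.create_connection(BACKEND),
                server_hostname=BACKEND[0])          # second TLS leg to the server
            threading.Thread(target=pump, args=(upstream, client),
                             daemon=True).start()
            pump(client, upstream)                   # client -> server direction

    serve()
    ```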

    Microsoft Windows Server 2003 Release 2 (R2) technical overview
    Presented by Quality Training (Scotland)‘s Andy Malone, this session was another disappointment. Admittedly, a few months back, I was lucky to be present at an all day R2 event, again hosted by Microsoft, but presented by John Craddock and Sally Storey of Kimberry Associates, who went into this in far more detail. Whilst Andy only had around an hour (and was at pains to point out that there was lots more to tell than he had time for), the presentation looked like Microsoft’s standard R2 marketing deck, with some simple demonstrations, poorly executed, and it seemed to me that (like many of the Microsoft Certified Trainers that I’ve met) the presenter had only a passing knowledge of the subject – enough to present, but lacking real world experience.

    Key points were:

    • Windows Server 2003 R2 is a release update – approximately half way between Windows Server 2003 and the next Windows Server product (codenamed Longhorn).
    • In common with other recent Windows Server System releases, R2 is optimised for 64-bit platforms.
    • R2 is available in standard, enterprise and datacenter editions (no web edition) consisting of two CDs – the first containing Windows Server 2003 slipstreamed with SP1 and the second holding the additional R2 components. These components are focused around improvements in branch office scenarios, identity management and storage.
    • The new DFSR functionality can provide up to 50% WAN traffic reduction through improved DFS replication (using bandwidth throttling and remote differential compression, whereby only file changes are replicated – see the sketch after this list), allowing centralised data copies to be maintained (avoiding the need for local backups, although one has to wonder how restoration might work over low-speed, high-latency WAN links). Management is improved with a new MMC 3.0 DFS Management console.
    • There is a 5MB limit on the size of the DFS namespace file, which equates to approximately 5000 folders for a domain namespace and 50,000 folders for a standalone namespace. Further details can be found in Microsoft’s DFS FAQ.
    • Print management is also improved with a new MMC 3.0 Print Management console, which will auto-discover printers on a subnet and also allows deployment of printer connections using group policy (this requires the use of a utility called pushprinterconnections.exe within a login script, as well as a schema update).
    • Identity and access management is improved with Active Directory federation services (ADFS), Active Directory application mode (ADAM – previously a separate download), WS-Management and Linux/Unix identity management (incorporating Services for Unix, which was previously a separate download).
    • For many organisations, storage management is a major problem with typical storage requirements estimated to be increasing by between 60% and 100% each year. The cost of managing this storage can be 10 times the cost of the disk hardware and Microsoft has improved the storage management functionality within Windows to try and ease the burden.
    • The file server resource manager (FSRM) is a new component to integrate capacity management, policy management and quota management, with quotas now set at folder level (rather than volume) and file screening to avoid storage of certain file types on the server (although the error message if a user tries to do this just warns of a permissions issue and is more likely to confuse users and increase the burden on administrators trying to resolve any resulting issues).
    • Storage manager for SANs allows Windows administrators to manage disk resources on a SAN (although not with the granularity that the SAN administrator would expect to have – I’ve not seen this demonstrated but believe it’s only down to a logical disk level).
    • In conclusion, Windows Server 2003 R2 builds on Windows Server 2003 with new functionality, but with no major changes so as to ensure a non-disruptive upgrade with complete application compatibility, and requiring no new client access licenses (CALs).
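
    As flagged in the DFSR bullet above, here’s a naive sketch of the idea behind replicating only file changes. Real remote differential compression uses variable-size chunks and recursive signatures; this fixed-block version just shows the concept:

    ```python
    import hashlib

    BLOCK = 4096   # fixed block size, purely for illustration

    def signatures(data: bytes) -> list:
        """Hash each fixed-size block of the file."""
        return [hashlib.sha1(data[i:i + BLOCK]).hexdigest()
                for i in range(0, len(data), BLOCK)]

    def blocks_to_send(local: bytes, remote_sigs: list):
        """Return the (index, block) pairs the remote copy is missing or has stale."""
        return [(i, local[i * BLOCK:(i + 1) * BLOCK])
                for i, sig in enumerate(signatures(local))
                if i >= len(remote_sigs) or remote_sigs[i] != sig]

    # Flip one byte in a 1MB file: only one 4KB block needs to cross the WAN.
    old = bytes(1024 * 1024)
    new = bytearray(old)
    new[500_000] ^= 0xFF
    print(len(blocks_to_send(bytes(new), signatures(old))))   # -> 1
    ```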

    Management pack melee: understanding MOM 2005 management packs
    Finally, a fired-up, knowledgeable presenter! Gordon McKenna, MOM MVP, is clearly passionate about his subject and blasted through a whole load of detail on how Microsoft Operations Manager (MOM) uses management packs to monitor pretty much anything in a Windows environment (and even on other platforms, using third-party management packs). There was way too much information in his presentation to represent here, but Microsoft’s MOM 2005 for beginners website has loads of information, including technical walkthroughs. Gordon did provide some additional information, though, which is unlikely to appear on a Microsoft website (as well as some that does):

    • MOM v3 is due for release towards the end of this year (I’ve blogged previously about some of the new functionality we might see in the next version of MOM). It will include a lightweight agent (making MOM more suitable for monitoring client computers) as well as a Microsoft Office management pack. MOM v3 will also move from a server-centric paradigm to a service-centric health model in support of the dynamic systems initiative and will involve a complete re-write (if you’re going to buy MOM this year, make sure you also purchase software assurance).
    • There are a number of third-party management packs available for managing heterogeneous environments. The MOM management pack catalogue includes details.
    • The operations console notifier is a MOM 2005 resource kit utility which provides pop-up notification of new alerts (in a similar manner to Outlook 2003’s new mail notification).

    A technical overview of Microsoft Virtual Server 2005
    In the last session of the day, Microsoft UK’s James O’Neill presented a technical overview of Microsoft Virtual Server 2005. James is another knowledgeable presenter, but the presentation was an updated version of a session that John Howard ran a few months back. That didn’t stop it from being worthwhile – I’m glad I stayed to watch it as it included some useful new information:

    • Windows Server 2003 R2 Enterprise Edition changes the licensing model for virtual servers in three ways: firstly, by including 4 guest licences with every server host licence (a total of 5 copies of R2); secondly, by only requiring organisations to be licensed for the number of running virtual machines (currently, even stored virtual machine images which are not in regular use each require a Windows licence); and finally, in a move which is more of a clarification, server products which are normally licensed per-processor (e.g. SQL Server, BizTalk Server, ISA Server) only need to be licensed per virtual processor (as Virtual Server does not yet support SMP within the virtual environment).
    • The Datacenter edition of the next Windows Server version (codenamed Longhorn) will allow unlimited virtual guests to be run as part of its licence – effectively mainframe Windows.
    • Microsoft is licensing (or plans to licence) the virtual hard disk format, potentially allowing third parties to develop tools that allow .VHD files to be mounted as drives within Windows. There is a utility to do this currently, but it’s a Microsoft-internal tool (I’m hoping that it will be released soon in a resource kit).
    • As I reported previously, Microsoft is still planning a service pack for Virtual Server 2005 R2, which will go into beta this quarter and ship in the autumn of 2006, offering support for Intel virtualization technology (formerly codenamed Vanderpool) and equivalent technology from AMD (codenamed Pacifica), as well as performance improvements for non-Windows guest operating systems.

    Overall, I was a little disappointed with yesterday’s event, although part 2 (scheduled for next week) looks to be more relevant to me with sessions on Exchange 12, the Windows Server 2003 security configuration wizard, Monad, Exchange Server 2003 mobility and a Windows Vista overview. Microsoft’s TechNet UK events are normally pretty good – maybe they are just a bit stretched for presenters right now. Let’s just hope that part 2 is better than part 1.

    Starting to look at Microsoft Operations Manager


    Earlier this year, I was given a boxed copy of Microsoft Operations Manager 2005 Workgroup Edition at a Microsoft event. This was not an evaluation copy but a fully operational boxed copy (I guess the thinking at Microsoft was that once I’m hooked with a maximum of 10 devices to manage, then I’ll go out and buy a grown-up version). It’s been sitting there waiting for me to install it for some time now, and yesterday I finally got around to doing it.

    Installation was straightforward enough, as the setup program has a pre-requisites checker. Once I’d upgraded Windows Server 2003 to an application server (including ASP.NET), installed an instance of the Microsoft SQL Server 2000 Desktop Engine (MSDE) with SP3A, and set the startup type for the Background Intelligent Transfer Service, MSSQL$MSDEinstancename and SQLAgent$MSDEinstancename services to automatic (and started them), it was just a case of following the setup wizard (creating a domain user account to act as the MOM management server action account, although I later relaxed the security and made this a domain administrator as I didn’t have a suitable method of adding the account to the local administrators group on each client).

    After I installed MOM, I needed to set up my clients. Setting up a computer discovery rule was easy enough, as was installing agents remotely on my Windows Server 2003 SP1 computers (they still have the Windows Firewall disabled – something which I aim to resolve soon), but the Windows XP SP2 computers just would not play. Every time I ran the Install Agent Wizard, I got the same result:

    The MOM Server failed to perform specified operation on computer “computername.domainname“.

    Error code: -2147023174
    Error description: the RPC server is unavailable.

    Microsoft knowledge base article 885726 gave some insight (along with articles 904866, 832017, and 842242) but even though this was on my home network, I wanted to apply enterprise principles – i.e. I didn’t want to disable the Windows Firewall or install agents manually.

    I spent hours (well into the night) applying different firewall exceptions via group policy, and even disabling the Windows Firewall completely (disabling and stopping the service) but remote installation just wasn’t working.

    Strangely (before I disabled the Windows Firewall), the MOM server was trying to contact my client on TCP port 139 (2005-10-27 00:33:38 OPEN-INBOUND TCP momserveripaddress clientipaddress 1744 139 – – – – – – – – –) so I even installed File and Printer Sharing for Microsoft Networks but all to no avail.
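
    For what it’s worth, a quick way to sanity-check the relevant ports from the MOM server’s side is a simple TCP probe. The port list here is an assumption based on the knowledge base articles above (RPC endpoint mapper plus NetBIOS/SMB), not a definitive list, and the client name is hypothetical:

    ```python
    import socket

    CLIENT = "clientname.domainname"   # hypothetical target computer

    # Assumed ports for a MOM push install: 135 (RPC endpoint mapper),
    # 139 (NetBIOS session) and 445 (SMB) - check KB 885726 for the real list.
    for port in (135, 139, 445):
        s = socket.socket()
        s.settimeout(2)
        try:
            s.connect((CLIENT, port))
            print(f"TCP {port}: open")
        except OSError as exc:
            print(f"TCP {port}: blocked ({exc})")
        finally:
            s.close()
    ```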

    I tried installing MOM service pack 1, but that made no difference (except that I had to approve agent upgrades for my existing MOM clients, taking them from version 5.0.2749.0 to version 5.0.2911.0).

    Eventually, I gave up and installed the agent manually, following the instructions in the troubleshooting section of the MOM SP1 installation guide but I find it difficult to believe that it is not possible to do this remotely, provided the correct firewall exceptions are in place. If anybody has any ideas (remember it didn’t even work with the firewall disabled!) then I’d be pleased to hear them.