WPtouch: WordPress On iPhone – A complimentary theme installed as a plugin on a WordPress blog or website that will format content with an Apple-inspired, full-featured theme when your visitors are using an iPhone or iPod touch.
I’m finally in the process of switching off the Compaq Evo D510SFF PC which acted as my main server for many years until it was replaced earlier this year with some more suitable hardware (a Dell PowerEdge 840). Even though the Dell Server has been running for the last ten months, I’ve not found the time to move over a few critical services and, as I write this, the files are being transferred to my new Netgear ReadyNAS and the last two VMs are being converted for use with Hyper-V.
There were a couple of infrastructure services to transfer too – DNS and DHCP. One of the DHCP services that I wanted in my new infrastructure was to provide IP addresses to computers that are deliberately on a different network to my Active Directory (devices like my iPhone, the Cisco IP Phone on my desk, and guest computers using my Wi-Fi connection) but the DHCP server in Windows Server 2003 R2 wouldn’t serve clients until it had been authorised by Active Directory. I didn’t want the DHCP server to even see AD (there is a firewall between them), so I had to find a way to make Windows think that the server is authorised.
It turns out that this behaviour (rogue server detection) kicks in when the DHCP Server service is running on a workgroup server and sees a domain-joined DHCP server on the network (for a few days during the transition, my clients could see the legacy, domain-joined, DHCP server and the new, workgroup-only, one on the same network). The answer is to create a new registry value to disable rogue detection:
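For reference, the value in question is (as far as I can reconstruct it – verify the name and path against Microsoft’s documentation for your build before relying on it) a DWORD named DisableRogueDetection under the DHCP Server service’s Parameters key:

```
rem Run on the workgroup DHCP server, then restart the DHCP Server service.
rem Value name and path are my reconstruction - verify before use.
reg add HKLM\SYSTEM\CurrentControlSet\Services\DHCPServer\Parameters /v DisableRogueDetection /t REG_DWORD /d 1 /f

net stop dhcpserver
net start dhcpserver
```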
I can’t always answer e-mails for help through this blog (I simply don’t have the time) but, a few days ago, I received an e-mail from a reader with a question that intrigued me – at what time did Windows update the system clock when British Summer Time ended last weekend? I knew that the official end time was 02:00 (and it’s 01:00 when the clocks go forward again in the spring) but there was nothing in the logs to indicate the time when Windows applied the changes.
After a bit of research I found that this information is written to the registry when the time zone is selected at setup time, or via the Control Panel date and time applet. On my Windows Server 2008 system, reg query hklm\system\currentcontrolset\control\timezoneinformation returned:
Although I can’t work out the format of the binary data, I can see the differences. Ignoring the second 8 bytes (which are all zeros), the differences are:
00 00 03 00 05 00 01 00
00 00 0A 00 05 00 02 00
That looks to me like 03 might be March and 0A (decimal 10) might be October, whilst 01 may represent 1am, in which case 02 would be 2am.
There’s also some more information in Microsoft knowledge base article 914387, including a link to the Windows Time Zone Editor (tzedit.exe) utility that is used to create time zones (as well as to another utility for combing event logs). The Windows Time Zone Editor is written for Windows 2000 and I’ve not been able to get the time zone that I defined to load on my 64-bit copy of Windows Server 2008 in order to work out the resulting registry changes. Even so, I’m pretty sure that the 05 in the registry values for daylightstart and standardstart represents the last day (the options are 1st, 2nd, 3rd, 4th and last) and that one of the 00s is Sunday. One thing the utility does confirm is that daylight saving for Greenwich Mean Time (British Summer Time) starts on the last Sunday of March at 01:00 and ends on the last Sunday in October at 02:00 – as shown in the accompanying image.
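For what it’s worth, the documented layout for these transition values is the Windows SYSTEMTIME structure: eight little-endian 16-bit words (year, month, day-of-week, day, hour, minute, second, millisecond), where a year of 0 means a recurring rule and a day value of 5 means “the last occurrence in the month”. That layout supports the guesses above (03 = March, 0A = October, 05 = last, 00 = Sunday, 01/02 = the hour), although it suggests the hour word actually falls in the second half of the 16-byte value. A quick sketch in Python, assuming that structure:

```python
import struct

DAYS = ["Sunday", "Monday", "Tuesday", "Wednesday",
        "Thursday", "Friday", "Saturday"]

def decode_systemtime(raw: bytes) -> str:
    """Decode a 16-byte SYSTEMTIME transition rule (eight little-endian words)."""
    year, month, dow, day, hour, minute, sec, ms = struct.unpack("<8H", raw)
    occurrence = "last" if day == 5 else f"week {day}"
    return f"{occurrence} {DAYS[dow]} of month {month} at {hour:02d}:{minute:02d}"

# GMT/BST transition rules reconstructed from the registry values above
# (assumed word order: year, month, day-of-week, day, hour, then zeros):
daylight_start = struct.pack("<8H", 0, 3, 0, 5, 1, 0, 0, 0)   # BST begins
standard_start = struct.pack("<8H", 0, 10, 0, 5, 2, 0, 0, 0)  # BST ends

print(decode_systemtime(daylight_start))  # last Sunday of month 3 at 01:00
print(decode_systemtime(standard_start))  # last Sunday of month 10 at 02:00
```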
If anyone knows any more about the bytes I haven’t tracked down yet, I’d be pleased to hear your comments!
Sadly, this blog has failed to grab enough attention for me to be given any advance information about the product (I guess there are a lot more bloggers and journalists for Microsoft’s Windows client guys to cover and this blog is just too small to make the cut) and what I do have access to via the beta programme is under NDA… grrr…
I’m supposed to be taking a week off work, but the announcements coming out of Microsoft’s PDC have the potential to make a huge impact on the way that we experience our IT. So, it’s day 2 of PDC and I’ve spent the afternoon and evening watching the keynote and blogging about new developments in Windows…
Yesterday I wrote about Ray Ozzie’s PDC keynote during which the Windows Azure services platform was announced. Today, he was back on stage – this time with a team of Microsoft executives talking about the client platform, operating system and application innovations that provide the front end user experience in Microsoft’s vision of the future of personal computing. And, throughout the presentation, there was one phrase that kept on coming back:
PC, phone and web.
Over the years, PCs have changed a lot but the fundamental features have been flexibility, resilience and adaptability to changing needs. Now the PC is adapting again for the web-centred era.
Right now, the ‘net and a PC are still two worlds – we’ve barely scratched the surface of how to take the most value of the web and the personal computer combined.
PC, phone, and web.
Ozzie spoke of the most fundamental PC advantage being the fact that the operating system and applications are right next to the hardware – allowing the user experience to take advantage of multiple, high resolution, screens, voice, touch, drag and drop (to combine applications), storage (for confidentiality, mobility, and speed of access) so that users may richly create, consume, interact with, and edit information. The PC is a personal information management device.
The power of the web is its global reach – using the ‘net we can communicate with anyone, anywhere and the Internet is “every company’s front door” – a common meeting place. The unique value of the web is the ability to assemble the world’s people, organisations, services and devices – so that we can communicate, transact and share.
Like PCs, phone software is close to the hardware and it has full access to the capabilities of each device – but with the unique advantage that it’s always with the user – and it knows where they are (location) and at what time – providing spontaneity for capture and delivery of information.
Microsoft’s vision includes applications that span devices in a seamless experience – harnessing the power of all three access methods.
PC, phone and web.
“We need these platforms to work together and yet we also want access to the full power and capabilities of each”
[Ray Ozzie, Chief Software Architect, Microsoft Corporation]
I won’t cover all of the detail of the 2-and-a-half hour presentation here, but the following highlights cover the main points from the keynote.
Steven Sinofsky, Senior Vice President for Microsoft’s Windows and Windows Live Engineering Group spoke about how Windows 7 and Server 2008 R2 share the same kernel but today’s focus is on the client product:
Sinofsky brought Julie Larson-Green, Corporate Vice President, Windows Experience on stage to show off the new features in Windows 7. Windows 7 is worth a blog post (or few) of its own, but the highlights were:
User interface enhancements, including new taskbar functionality and access to the ribbon interface for developers.
Jump lists (menus on right click) from multiple locations in the user interface.
Libraries, which allow for searching across multiple computers.
Touch capabilities – for all applications through mouse driver translation, but enhanced for touch-aware applications with gestures and a touch-screen keyboard.
DirectX – harnessing the power of modern graphics hardware and providing an API for access, not just to games but also to 2D graphics, animation and fine text.
And, of course, the fundamentals – security, reliability, compatibility and performance.
Windows Update, music metadata and online help are all service-based. Windows 7 makes use of Microsoft’s services platform with Internet Explorer 8 to access the web. Using technologies such as those provided by Windows Live Essentials (an optional download with support for Windows Live or third party services via standard protocols), Microsoft plans to expand the PC experience to the Internet with software plus services.
PC, phone and web.
“We certainly got a lot of feedback about Windows Vista at RTM!”
[Steven Sinofsky, Senior Vice President, Microsoft Corporation]
Sinofsky outlined the key lessons learned from the Windows Vista experience:
Readiness of ecosystem – vendor support, etc. Vista changed a lot of things and Windows 7 uses the same kernel as Windows Vista and Server 2008 so there are no ecosystem changes.
Standards support – e.g. the need for Internet Explorer to fully support web standards and support for OpenXML documents in Windows applets.
Compatibility – Vista may be more secure but UAC has not been without its challenges.
Scenarios – end to end experience – working with partners, hardware and software to provide scenarios for technology to add value.
Today, Microsoft is releasing a pre-beta milestone build of Windows 7, milestone 3, which is not yet feature complete.
In early 2009, a feature complete beta will ship (to a broader audience) but it will still not be ready to benchmark. It will incorporate a feedback tool which will package the context of what is happening along with feedback alongside the opt-in customer experience improvement program which provides additional, anonymous, telemetry to Microsoft.
There will also be a release candidate before final product release and, officially, Microsoft has no information yet about availability but Sinofsky did say that 3 years from the general availability of Windows Vista will be around about the right time.
Next up was Scott Guthrie, Corporate Vice President for Microsoft’s .NET Developer Division who explained that:
Windows 7 will support .NET or Win32 client development with new tools including new APIs, updated foundation class library and Visual Studio 2010.
Microsoft .NET Framework (.NET FX) 3.5 SP1 is built in to Windows 7, including many performance enhancements and improved 3D graphics.
A new Windows Presentation Framework (WPF) toolkit for the .NET FX 3.5 SP1 was released today for all versions of Windows.
.NET FX 4 will be the next version of the framework with WPF improvements and improved fundamentals, including the ability to load multiple common language runtime versions inside the same application.
Visual Studio 2010 is built on WPF – more than just graphics but improvements to the development environment too and an early CTP will be released to PDC attendees this week.
In a demonstration, Tesco and Conchango showed a WPF client application for tesco.com aiming to save us money (every little helps) but to spend more of it with Tesco! This application features a Tesco at home gadget with a to-do list, delivery and special offer information, and access to a “corkboard”. The corkboard is the hub of family life, with meal planning, calendar integration, the ability to add ingredients to the basket, recipes (including adjusting quantities) and calorie counts. In addition, the application includes a 3D product wall to find an item among 30,000 products, look at the detail and organise products into lists, and the demonstration culminated with Conchango’s Paul Dawson scanning a product barcode to add it to the shopping list.
Windows 7 also includes Internet Explorer 8 and ASP.NET improvements for web developers. In addition, Microsoft claims that Silverlight is now on 1 in 4 machines connected to the Internet, allowing for .NET applications to run inside the browser.
Microsoft also announced the Silverlight Toolkit – a free-of-charge set of additional controls bringing features from WPF to Silverlight 2 – and Visual Studio 2010 will include a Silverlight designer.
David Treadwell, Corporate Vice President, Live Platform Services spoke about how the Live Services component within Windows Azure creates a bridge to connect applications, across devices:
PC, phone and web.
The core services are focused around identity (e.g. Live ID as an OpenID provider), directory (e.g. the Microsoft services connector and federation gateway), communications and presence (e.g. the ability to enhance websites with IM functionality) and search and geospatial capabilities.
These services may be easily integrated using standards-based protocols – not just on a Microsoft .NET platform but from any application stack.
Microsoft has 460 million Live Services users who account for 11% of total Internet minutes and the supporting infrastructure includes 100,000s of servers worldwide.
We still have islands of computing resources and Live Mesh bridges these islands with a core synchronisation concept. Mesh is just the tip of the iceberg though: it is now a key component of Live Services, allowing apps and websites to connect users, devices and applications and to provide data synchronisation.
The Live Service Framework provides access to Live Services, including a Live operating environment and programming model.
Ori Amiga, Group Program Manager, demonstrated using the Live Framework to extend an application to find data on multiple devices, with contact integration for sharing. Changes to an object and its metadata were synchronised and reflected on another machine without any user action, and a mobile device was used to add data to the mesh, which synchronised with other devices and with shared contacts.
Anthony Rhodes, Head of Online Media for BBC iPlayer (which, at its peak, accounts for 10% of the UK’s entire Internet bandwidth) spoke of how iPlayer is moving from an Internet catchup (broadcast 1.0) service to a model where the Internet replaces television (broadcast 2.0) using Live Mesh with a local Silverlight application. Inventing a new word (“meshified”), Rhodes explained how users can share content between one another and across devices (e.g. watch a program on the way to work, resuming playing from where it left off on the computer).
In the final segment, before Ray Ozzie returned to the stage, Takeshi Numoto, General Manager for the Office Client spoke of how Microsoft Office should be about working the way that users want to:
Numoto announced Office web applications for Word, Excel, OneNote and PowerPoint as part of Office 14 and introduced the Office Live Workspace, built on Live Services to allow collaboration on documents.
In a demonstration, a document was edited without locks or read only access – each version of the document was synchronised and included presence for collaborators to reach out using e-mail, instant messaging or a phone call. Office web applications work in Internet Explorer, Firefox or Safari and are enhanced with Silverlight. Changes are reflected in each collaborator’s view but data may also be published to websites (e.g. a Windows Live Spaces blog) using REST APIs so that as the data changes, so does the published document, extending office documents onto the web.
Office Web apps are just a part of Office 14 and more details will be released as Office 14 is developed.
Numoto summarised his segment by highlighting that the future of productivity is diversity in the way that people work – bringing people and data together in a great collaboration experience which spans…
PC, phone and web.
In effect, software plus services extends Office into connected productivity. In a direct reference to Google Apps, Microsoft’s aspirations are about more than just docs and spreadsheets in a browser accessed over the web; the pieces combine to create an integrated solution which provides more value – with creation on the PC, sharing and collaboration on the web and placing information within arm’s reach on the phone. Seamless connected productivity – an Office across platform boundaries – an office without walls.
I don’t know about you, but I’m getting confused with all the Windows blogs coming out of Microsoft – last week I wrote about two new Windows 7 blogs (one for developers and another for IT pros) and those who are watching the Windows Vista Team blog may have noticed that it moved to a new site today.
I wasn’t planning any major PDC coverage on the blog this week (there will be plenty of that elsewhere) but I did catch the PDC keynote today. It was the first time I’ve seen Ray Ozzie present and I was impressed – none of the Ballmer madness, or the Gates geekiness. Instead, over two hours, backed up with key executives from throughout Microsoft, Ozzie gave a calm and inspiring presentation in which we finally found out some of the detail behind where Microsoft is heading – and how software plus services is going to transform Windows.
Today’s keynote was focused on the back-end – the platform which will be needed to run our datacentres in a world of cloud computing and key points that I picked up on were:
Most enterprise computing architectures have been designed for inward-facing solutions whilst the reach and scope is expanding as part of the “the externalisation of IT”. Regardless of the industry, the web has become a key demand generation mechanism – “every organisation’s front door” – and companies now need to serve external users.
Software development and operations have become intertwined – developers and IT professionals need to jointly learn how to design, build and deploy systems.
Organisations over-engineer infrastructure to ensure that there is sufficient capacity (computing, storage, network, power), with multiple datacentres for continuity – and all the complexity that this introduces.
The world of the web needs a different approach to designing a platform. Microsoft has many systems that serve millions of users worldwide – and has used the expertise gained from this experience to shape its cloud computing strategy, packaging its own lessons from managing the externalisation of IT:
Tier 1 is experience: the PC on the desk or the phone in your pocket
Tier 2 is enterprise: back end infrastructure hosting systems – with the scale of the enterprise.
Tier 3 is externally facing: the web tier of computing – with the scale of the web – and is named Windows Azure, a new service-based operating environment for the cloud.
Azure is Windows so it will remain familiar and developer-friendly but it also needs to be different. Rather than being rooted in a scale-up model, it embraces new model-based methods for a world of horizontal scale.
It is a service – not software – and is being released as a CTP today, with initial features that are just a fraction of where it will be going. It is designed for iteration and continuous improvement and, as the system scales out, Microsoft will bring more and more of its own services onto Azure.
The platform includes Windows Live Services, .NET Services, SQL Services, SharePoint Services and Dynamics CRM Service.
Amitabh Srivastava, Corporate Vice President for cloud infrastructure services, explained that:
The original Windows NT architect, Dave Cutler, is the kernel man behind Windows Azure. Kernels don’t demonstrate well but a good kernel allows others to build killer apps.
Windows Azure is an operating system for the cloud – it manages entire global datacentre infrastructure – and provides a layer of abstraction to ease the programming burden.
A fabric controller maintains the health of the service. When a service is changed, the administrator specifies the desired end state and the fabric manages services, not just servers. Windows Azure is based on a service model, with roles and groups, channels and endpoints, interfaces, and configuration settings – all stored as XML for manipulation with any tool.
When deploying to Windows Azure, there are two things for a developer to provide:
The code for a service.
A service model defining the architecture, to guide the fabric controller in automatically managing the lifecycle of the application.
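To make the service model idea concrete, the CTP-era SDK expressed it as an XML service definition file. The sketch below is illustrative only (the names and exact schema are my own reconstruction, not taken from the keynote): a single web role with one HTTP endpoint and a configuration setting managed by the fabric.

```xml
<!-- Hypothetical service definition sketch - element names approximate -->
<ServiceDefinition name="HelloCloud">
  <WebRole name="WebFrontEnd">
    <InputEndpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
    <ConfigurationSettings>
      <Setting name="GreetingText" />
    </ConfigurationSettings>
  </WebRole>
</ServiceDefinition>
```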
Windows Azure provides 24×7 availability, with all components built to be highly available under varying loads with no user intervention. This allows a highly available service to be provided using the Azure subsystem, orchestrated by the fabric, and developers can concentrate on the business application logic.
Existing tools transfer to the cloud and Windows Azure works with managed and native code. Steve Marx demonstrated new cloud templates in Visual Studio using standard ASP.NET development skills to create a “hello cloud” application. The cloud may be simulated in an offline scenario so there is no need to deploy an application to the cloud in order to test its functionality.
Publishing involves repackaging the application for deployment and using the Windows Azure Developer Portal to create a hosted service with a friendly DNS name, supplying the package and configuration files.
Windows Azure is an open platform with a command line interface, REST protocols and XML file formats, as well as managed code support – making it easy to integrate with other platforms.
In summary, Windows Azure is an operating system for the cloud, providing scalable hosting, automated service management, and a familiar developer experience for enterprises and hobbyists alike.
Bob Muglia, Senior Vice President for server and tools, spoke of a next generation, services platform looking back at the various models used over the years:
Monolithic – 1970s mainframes.
Client server – 1980s PC revolution.
Web – a new generation of Internet and intranet applications developed in the 1990s.
SOA – the web services used today, communicating over standard protocols (web services or REST).
Services – going forward, building on web and SOA but with improved scalability.
He went on to discuss:
A new product (codenamed Geneva) which provides a link between Active Directory and cloud services.
System Center Atlanta – a portal to provide administrators with access to information about their systems in the cloud – connecting on-premise SCOM to Azure databases using a service bus.
Knowledge and skills transfer between on-premise enterprise computing and cloud-based architectures and of how Microsoft is working with partners to take Azure developments and incorporate them into Windows Server, SQL Server etc., so the industry can provide its own Azure services.
A next generation modelling platform (codenamed Oslo) which enables consistency between IT and developer processes (built on previous dynamic IT developments) using a new language called M.
Muglia summarised by pointing out that, at a previous PDC in 1992, Windows NT was introduced and it now has a huge presence. As services become more broadly used, Microsoft expects Azure to have the same sort of impact. Dave Thompson, Corporate Vice President for Microsoft Online, spoke of how:
Customers with strong IT staff and discipline find it straightforward to deploy software but many others see IT as a frustrating burden – essential but not core to their business.
Microsoft Online provides enterprise class software as a subscription service, hosted by Microsoft and sold with partners.
In the future all Microsoft enterprise software will optionally be delivered as an online service.
Software plus services provides the power of choice. Generally, enterprises don’t want all cloud services, or all on-premise computing but a hybrid must be seamless and easy for administrators – federated identity is one challenge and extensibility is another.
With Windows Azure, IT administrators manage Active Directory as they do now and the Microsoft services connector links into the cloud, to the Microsoft federation gateway. Users use the federation gateway but do not know if the service they access is on-premise or in the cloud.
Extensibility is facilitated with the integration of online services with on-premise servers, sharing and accessing shared data using a variety of flexible presentation methods. Windows Azure components in business applications allow services to be extended as required.
Ray Ozzie returned to the stage to wrap up Microsoft’s view of the software plus services world. He was very clear in explaining once more that Windows Azure is a community technology preview and that there will be no charges for its use during preview period. As the service moves closer to commercial release, Microsoft will unlock access to more and more capabilities and the business model at launch will be based on a combination of resource consumption and service level.
I really do hope that Windows Azure does not go the way of previous efforts to provide online services for enterprises (Microsoft Passport was supposed to be the solution for web services authentication) but I have a feeling it will not. Google, Amazon and others have proved the demand for cloud computing but Microsoft has a credible hybrid model, with a mixture of on-premise and services-led software access.
In case you hadn’t noticed, it’s Microsoft’s conference season – PDC this week, WinHEC next, TechEd EMEA the two weeks after that… lots of announcements – and I’m missing them all!
Luckily, last week I got the chance to catch up with Ward Ralston (a Group Technical Product Manager in Microsoft’s Windows Server Product Group) and he gave me the rundown on what to expect from Windows Server 2008 R2.
For those who are not familiar with Microsoft’s release cycles for server operating systems, ever since Windows Server 2003, the company has aimed to release a major update every 4-5 years with an interim second release (R2) in between. Windows Server 2003 and Windows Server 2003 R2 share the same basic code but R2 includes SP1 and new functionality. Similarly, I would expect Windows Server 2008 R2 to include SP2 and it certainly has some goodies for us.
One of the reasons for an interim release is to take advantage of new hardware advances and changes in the overall IT market and one significant point to note is that Windows Server 2008 R2 will be 64-bit only. That’s right – no more 32-bit server operating system – and that is A Good Thing. We all have 64-bit hardware (and have had for some time) but many IT administrators don’t realise it, and install 32-bit operating systems even though driver support is no longer an issue (at least for servers) and most 32-bit applications will run quite happily on a 64-bit operating system.
The main themes for the Windows Server 2008 R2 release are: improved hardware, driver and application support; taking advantage of ever-increasing numbers of logical processor cores and new power management features; improvements around virtualisation, power management and server management; new technologies to lay the foundation for the next version of Windows; and a unified release focus – with the Windows 7 client and Windows Server 2008 R2 providing engineering efficiencies to work “better together”.
There are many new features in Windows Server 2008 R2 and, first of all, is the area of most interest to me – virtualisation. Windows Server 2008 R2 includes the second release of Hyper-V with new features including:
Live Migration to allow virtual machine workloads to fail over between cluster nodes with no discernible break in service. I still argue that this is not a feature that organisations need (cf. want) for their server infrastructure but as the dynamic datacentre and virtual desktop infrastructures (VDIs) become more commonplace, it makes sense to support this functionality with Hyper-V (besides the fact that competitors can already do it!).
A new clustered shared volume file system (codenamed Centipede) which sits on top of NTFS and allows multiple cluster nodes to access the same storage.
Support for 32 logical processors (cores) on the host computer (twice the original limit with Hyper-V), paving the way for support of 8-core CPUs and improved consolidation ratios.
Hot-addition and removal of storage (allowing VHDs and pass-through disks on a SCSI controller to be added to a virtual machine without a reboot).
Second Level Address Translation (SLAT) – moving past basic Intel VT and AMD-V support to take advantage of new processor features (Intel Extended Page Tables and AMD Nested Page Tables), further reducing the hypervisor overhead.
Boot from VHD – using a kernel-level filter to take a virtual hard disk and boot from it on hardware – even without hardware support for virtualisation.
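As an aside, this capability surfaced as native VHD boot, configured through Boot Configuration Data (BCD) entries. A rough sketch of the approach (the VHD path here is hypothetical and the exact syntax is worth checking against the BCDEdit documentation):

```
rem Clone the current boot entry, then point the copy at a VHD.
rem {guid} is the identifier printed by the copy command; the path is hypothetical.
bcdedit /copy {current} /d "Windows from VHD"
bcdedit /set {guid} device vhd=[C:]\vhd\win7.vhd
bcdedit /set {guid} osdevice vhd=[C:]\vhd\win7.vhd
```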
Microsoft also spoke to me about a dynamic memory capability (just like the balloon model that competitors offer). I asked why the company had been so vocal in downplaying competitive implementations of this technology yet was now implementing something similar and Ward Ralston explained to me that this is not the right solution for everyone but may help to handle memory usage spikes in a VDI environment. Since then, I’ve been advised that dynamic memory will not be in the beta release of Windows Server 2008 R2 and Microsoft is evaluating options for inclusion (or otherwise) at release candidate stage. These apparently conflicting statements, within just a few days of one another, should not be interpreted as indecisiveness on the part of Microsoft – we’re not even at beta stage yet and features/functionality may change considerably before release.
Looking at some of the other improvements that we can expect in Windows Server 2008 R2:
On the management front: there is a greater emphasis on the command line with improved scripting capabilities with PowerShell 2 and over 200 new cmdlets for server roles as well as power, blade and chassis management – working with vendors to deliver hardware which is compatible with WS-Management – and new command line tools for migration of Active Directory, DNS, DHCP, file and print servers; Server Manager will support remote connections, with a performance counter view and best practices analyzer (similar to the ones which we have seen shipped for server products such as Exchange Server for a few years now); and a new migration portal will expose step-by-step documentation for migration of roles and operating system settings from Windows Server 2003 and 2008 servers to Windows Server 2008 R2.
Power management was improved in Windows Server 2008 and R2 is intended to take this further with features such as core parking to reduce multi-core processor power consumption (only using the power required to drive a workload) as well as centralised control of power policies (allowing servers to throttle down during quiet times, using DMTF-compliant remote management interfaces).
Active Directory Domain Services is improved with: a new management console (with PowerShell integration) to replace the disparate tools that have existed since the early NT 5.0 betas; a new AD recycle bin to aid with recovering deleted objects; improved support for offline domain joins (similar to the pre-staging support used in Windows Server 2008 for RODCs); improved management of user accounts and identity services (managed service accounts); and improved authentication assurance in Active Directory Federation Services.
IIS continues to improve with: server core support for ASP.NET; an integrated PowerShell provider (more than 50 new cmdlets); integrated FTP and WebDAV support (previously provided as extensions); new IIS Manager modules (e.g. to support new FTP, WebDAV, request filtering and ASP.NET functionality); configuration logging and tracing (building on IIS 7.0’s feature delegation functionality by providing the ability to centrally log and audit changes made by site managers and web developers); and extended protection and security (channel-binding tokens to prevent man-in-the-middle attacks, hardened accounts to prevent application spoofing, and improved management for custom service accounts).
Scalability and reliability improvements with: improved multi-processor support, reduced Hyper-V overhead and improved storage performance; greater componentisation – server core installations will support more roles and will also support ASP.NET within IIS as Microsoft.NET Framework support will be added (which also allows PowerShell to run on server core installations); DHCP failover, with the ability to pair DHCP servers as primary and secondary servers (based on an IETF draft for the DHCP Failover protocol); and DNS Security, using DNSSec to validate name resolution and zone transfers using PKI to secure DNS records (preventing the interception of DNS queries and return of illegitimate responses from an untrusted DNS server – a real issue with huge potential impact across multiple platforms that was recently highlighted by security researcher Dan Kaminsky).
Finally, whilst there has always been a good, better, best story for integrating the latest client and server releases with Microsoft products, Microsoft is really pushing “better together with Windows 7” with the Windows Server 2008 R2 marketing. New features like DirectAccess and BranchCache are intended to take existing connectivity technologies and couple them in a less complex manner, connecting routed VPNs over firewall-friendly ports with end-to-end IPSec whilst improving branch office performance by caching HTTP and SMB traffic. Read-only DFS improves branch office security (in the same way that read-only domain controllers did for Windows Server 2008). Then there’s more efficient client power management, BitLocker encryption on removable drives and the new DHCP Failover and DNSSec functionality mentioned previously – I’m sure as we learn more about Windows 7 the list will continue to grow.
So, when do we get to use all this Windows Server 2008 R2 goodness? Well, Microsoft is not yet ready to release a beta and, based on previous versions of Windows Server, I would expect to see at least two betas and a couple of CTPs before the release candidates – but the product team is currently not committing to a date – other than to say “early 2010” (which, incidentally, will be 2 years after Windows Server 2008 shipped). They’re also keen to point out that, although Windows Server 2008 R2 is being jointly developed with the Windows 7 client operating system, there are no guarantees that the two will release together – maybe they will, maybe they won’t – read into that what you like, but some are predicting a late-2009 release for Windows 7 and I would expect the server product to follow a few months after that. No-one needs to get a new server operating system out in time for the holiday season but they do want it to be rock solid.
Of course, at this early stage in product development, there could still be a number of changes before release. Even so, with these new features and functionality, Windows Server 2008 R2 is certainly not just an insignificant minor release.