Windows Server 2008 R2 release candidate: what’s new? (part 1)

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Last year, I wrote a post about some of the things to look forward to in Windows Server 2008 R2 and, a week or so later, I was able to follow it up with the news that Terminal Services was getting a big improvement as it became Remote Desktop Services (RDS). Six months have gone by, we’ve had the beta, and now the release candidate is here… and that release candidate has some new features – mostly relating to performance and scalability:

  • Looking first at the improvements to Hyper-V (in addition to those in last week’s post on the R2 wave of virtualisation products):
    • There are networking improvements with VM Chimney/TCP Offload capabilities whereby network operations are redirected to the physical NIC (where the NIC supports this), reducing the CPU burden and improving performance. The original version of Hyper-V supported chimney operations in the parent, but virtual machines could not take advantage of the functionality. This helps Hyper-V to scale as 10Gbps Ethernet becomes more common (a Hyper-V host can already saturate a Gigabit Ethernet connection if required) but it’s worth noting that not all applications can benefit from this as it’s more suitable for large file transfers (file servers, etc.) rather than web servers.
    • Another new Hyper-V networking feature is NIC direct memory access (NIC DMA), which shortens the overall path from a physical NIC queue to a virtual machine, resulting in further performance improvements. Because each NIC queue is assigned to a specific virtual NIC, there’s still no sharing of memory (so no impact on security isolation), but direct access to virtual machine memory does avoid copies in the VSP and route lookups in the virtual switch. However, this feature is disabled by default (the only real benefit is found with 10Gbps Ethernet and only a few NICs currently have the capability).
    • The long-awaited live migration functionality is definitely in (it was also in pre-release versions of Hyper-V but was pulled before release). Windows Server 2008 R2’s cluster shared volumes are instrumental in making this feature work well and, even though I don’t believe it’s entirely necessary, VMware have had the functionality for several years now and Microsoft needs to be able to say “me too”.
    • Sadly, another “me too” feature (dynamic memory) has definitely been dropped from the R2 release. I asked Microsoft’s Jeff Woolsey, Principal Group Program Manager for Hyper-V, what the problem was and he responded that memory overcommitment results in a significant performance hit if the memory is fully utilised and that even VMware (whose ESX hypervisor does have this functionality) advises against its use in production environments. I can see that it’s not a huge factor in server consolidation exercises, but for VDI scenarios (using the new RDS functionality), it could have made a significant difference in consolidation ratios.
  • Away from Hyper-V there are further performance and scalability improvements in the operating system, with support for up to 256 logical CPUs, improved scheduling on NUMA architectures, and support for solid state disks. As well as the power management improvements I mentioned in my original post last October, the operating system uses less memory, networking improvements deliver faster file transfers on the LAN, and new multi-threaded capabilities in robocopy.exe (using the /mt switch) can provide up to an 800% improvement in WAN file transfers. Putting these improvements into practice, Microsoft told me that one OLTP benchmark for SQL Server showed a 70% improvement by moving from 64 to 128 processors and a file server throughput test showed a 32% improvement just by upgrading the operating system from Windows Server 2008 to Windows Server 2008 R2. Indeed, Microsoft is keen to show off these improvements at TechEd next month (together with System Center products being used to manage and cap power usage) and will also announce a new power logo as an additional qualification for the Windows Server logo programme. Some of the power improvements will be back-ported to Windows Server 2008 SP2, although that operating system still won’t quite match up to R2.
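As a simple illustration of the multi-threaded copy (the server names and paths below are made up; /MT accepts a value between 1 and 128 and defaults to 8 threads):

```shell
rem Multi-threaded copy of a directory tree between servers (illustrative)
rem /E copies subdirectories (including empty ones); /MT:32 uses 32 threads
rem /LOG redirects output to a file, which also helps performance with /MT
robocopy \\oldserver\data D:\data /E /MT:32 /LOG:C:\logs\datacopy.log
```

Note that the /MT switch is only available in the Windows 7/Windows Server 2008 R2 version of robocopy.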

None of these are big features but they have the potential to make some significant differences in the efficiency of an organisation’s Windows Server estate – an important consideration as economic and environmental pressures affect the way in which we run our IT systems. This isn’t the whole story though as Microsoft still has a few more surprises in this release candidate. With the RC code available to TechNet and MSDN subscribers today, I’m not sure how Microsoft is planning on keeping them quiet but, for now, my lips are sealed so stay tuned for part 2…

Windows 7 and Windows Server 2008 R2 release candidate availability

There’s been a lot of chatter on the ‘net about Windows 7 release dates and new features but a lot of it is based on one or two leaks that then get reported (and sometimes misreported) across a variety of news sites and blogs.

After various reports that we could see a Windows 7 release candidate (RC) earlier in April, and various leaked builds, today’s the day when the Windows 7 and Windows Server 2008 R2 RCs will officially be made available to MSDN and TechNet subscribers (the client release candidate was announced last week and the official announcement around the Windows Server 2008 R2 release candidate is due today).

For those who are not TechNet or MSDN subscribers, the RC will be available to the public on/around 5 May.

Whilst the Windows 7 client was already feature complete at the beta, the server version, Windows Server 2008 R2, includes some new functionality – some of which I’ll detail in a separate blog post and some of which will not be announced until TechEd on 11 May 2009.

If you want to know more about the Windows 7 release candidate, then Ed Bott has a Windows 7 release candidate FAQ which is a good place to start. One thing you won’t find in there though is a release date for Windows 7, as Bott quotes one Microsoft executive:

“Those who know, won’t say. Those who say, don’t know.”

As for the future of Windows, Mary Jo Foley reported last week that work is underway on “Windows 8” and is suggesting it could be with us as early as 2011/2. If Microsoft continues the 2-year major/minor cycles for the server version and co-develops the Windows client and server releases again, that would fit but, for now, let’s concentrate on Windows 7!

Finally, Microsoft has a new website launching tomorrow (but which has been available for a few days now) aimed at IT professionals in the Windows space. If you find the Engineering Windows 7 blog a little wordy (sometimes I wish they would stick to the Twitter rule of 140 characters!), Talking About Windows is a video blog which provides insight on Windows 7 from the Microsoft engineers who helped build the product, combined with real-world commentary from IT professionals.

Windows Vista and Server 2008 SP2 goes RTM… but you can’t get it yet

Not to be confused with Windows Server 2008 R2, Windows Vista and Server 2008 service pack 2 (SP2) was released to manufacturing yesterday, the same day that the blocker tool for Windows Vista SP1 was removed (Windows Server 2008 shipped with SP1 included).

Full details of the service pack may be found in Microsoft knowledge base article 948465 and there’s also a notable changes page on the Windows Client TechCenter. In addition, Microsoft knowledge base article 969707 gives details of some of the applications that might have problems after installing the service pack.

[Update 30 April 2009: There’s no download link yet – the official line is that public availability is expected later this quarter. TechNet and MSDN subscribers can now download SP2 and I’d expect to see a public download link at the Windows Client TechCenter soon.]

Windows 7 “XP Mode”

Last week was a frustrating one… you see, earlier this month Paul Thurrott gave a hint about an exciting Windows 7 secret. I put 2 and 2 together and it seems that I came up with 4. The trouble was that I was given the details from an official source at around the same time – and that information was under NDA so I couldn’t write about it here!

It’s times like this that I’m glad I’m not running a news site and waiting for a “scoop”, but all three of the leading tech journalists covering Windows (i.e. Paul Thurrott, Ed Bott and Mary Jo Foley) have written articles in the last few days about Windows 7 XP Mode and Windows Virtual PC, and I want to pull things together here.

Basically, Paul Thurrott and Rafael Rivera are reporting that there will be a new version of Virtual PC, available as a download for Windows 7, including a licensed copy of Windows XP SP3 to run those applications that don’t behave well on the Vista/Windows 7 codebase. More details will follow (it won’t actually be “in the box” with Windows 7) but Ed Bott has commented that it looks an awful lot like MED-V.

Of course, the technology is already there – as well as drawing comparisons with MED-V, Ed Bott points out that you can do something similar with VirtualBox in seamless mode. The key detail with Windows XP Mode is the licensing situation: full licensing details have yet to be announced but the only Microsoft blog post I’ve seen on the subject says:

“We will be soon releasing the beta of Windows XP Mode and Windows Virtual PC for Windows 7 Professional and Windows 7 Ultimate”

That reference to Professional and Ultimate would also indicate that it will run on Enterprise (virtually identical to Ultimate), but not Starter, Home Basic or Home Premium. As Microsoft’s main concern is allowing businesses to run legacy applications as they are weaned off XP, that seems fair enough but, then again, MED-V is only available to volume license customers today and Mary Jo Foley suggests that could be the same for XP Mode – I guess we’ll just have to wait and see.

So, will this work? I hope so. Windows Vista (after SP1) was never as bad as its perception in the marketplace indicated but if ever you needed an example that perception is reality, then Vista was it! Strangely, Windows Server 2008 (the server edition of Vista SP1) has been well received as the solid, reliable operating system that it is, without the negative press. Windows 7 is a step forward in many ways and, as XP is now into its extended support phase, many organisations will be looking for something to move to but the application compatibility issues caused by Windows Vista and Windows 7’s improved security model will still cause a few headaches – that’s what this functionality is intended to overcome, although there will still be some testing required as to how well those old XP apps perform in a virtualised environment.

More technical details will follow soon, but either Paul Thurrott and Rafael Rivera are operating on a different NDA to me (which they may well be) or they feel pretty confident that Microsoft will still give them access to information as they continue to spill the beans on this particular feature…

Be careful what you wish for!

One of the complaints I sometimes hear is that Microsoft doesn’t listen to its customers. I guess I can understand that – occasionally I feel that way about companies like Apple and VMware – I’m just fortunate enough to have good links into Microsoft when I need to get some information or provide some feedback. Sometimes criticism of Microsoft is valid but there are also times that it’s unfounded, or based on misinformation (some people just like to knock anything that Microsoft does… and just by writing that, I’ll probably get labelled as a fanboy).

Well, here in the UK, the Microsoft TechNet team is looking for new ways to engage with the community and to solicit feedback on Microsoft’s products and technologies. The idea is that IT pros can submit anonymous feedback on the UK TechNet website; Microsoft will periodically (monthly or quarterly, based on traffic) analyse the comments to identify any systemic issues, then respond with summarised feedback from the survey, together with Microsoft and MVP-identified resources (blog posts, articles, books, user groups, etc.) that may help to address those concerns.

Whether this takes off remains to be seen – it’s still a pilot – but it’s a positive step. I wouldn’t expect to see sweeping changes made to products as a result (hey, I’ve been banging the “we need USB support for guests in Hyper-V” drum for long enough without any luck and I know I’m not the only one!) but it might just help you to identify that missing piece of information that is key to the success (or otherwise) of a project.

Just be careful what you wish for… you might get it!

Connecting to an iSCSI target using the Microsoft iSCSI Initiator

I’ve spent a good chunk of today trying to prepare a clustered virtual machine demonstration using the Microsoft iSCSI Initiator and Target.

I’ve done this before but only on training courses and I was more than a little bit rusty when it came to configuring iSCSI. It’s actually quite straightforward and I found Jose Barreto’s blog post on using the Microsoft iSCSI software target with Hyper-V a very useful guide (even though the Hyper-V part was irrelevant to me, the iSCSI configuration is useful).

The basic steps are:

  • Install the iSCSI Target (server) and Initiator (client) software (the Initiator is already supplied with Windows Vista and Server 2008).
  • Set up a separate network for iSCSI traffic (this step is optional – I didn’t do this in my demo environment – but it is recommended).
  • On the client(s):
    • Load the iSCSI Initiator, and answer yes to the questions about starting the iSCSI Service automatically and about allowing iSCSI traffic through the firewall.
    • Examine the iSCSI Initiator properties and make a note of the initiator name on the General page (it should be something like iqn.1991-05.com.microsoft:clientdnsname).
    • On the Discovery page, add a target portal, using the DNS name or IP address of the iSCSI Target and the appropriate port number (default is 3260).
  • On the iSCSI server:
    • Create a new target, supplying a target name and the identifiers of all the clients that will require access. This is where the IQNs from the initiators will be required (you can also use DNS name, IP or MAC address but IQN is the normal configuration method).
    • Add one or more LUN(s) to the target (the Microsoft implementation uses the virtual hard disk format for this, so the method is to create a virtual disk within the iSCSI Target console).
    • Make a note of the IQN for the target on the Target properties and ensure that it is enabled (checkbox on the General page).
  • Back on the client(s):
    • Move to the Target properties page and refresh to see the details of the new target.
    • Click the Log on button and select the checkbox to automatically restore the connection when the computer starts.
    • Bring the disk online and initialise it in Disk Management, before creating one or more volume(s) as required.

After completing these steps, the iSCSI storage should be available for access as though it were a local disk.
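For those who prefer the command line, the client-side steps can also be performed with iscsicli.exe, which is installed with the initiator. A rough sketch (the portal address and target IQN below are illustrative):

```shell
rem Add the target portal (the iSCSI Target's address; the default port is 3260)
iscsicli QAddTargetPortal 192.168.1.10

rem List the target IQNs discovered via that portal
iscsicli ListTargets

rem Log on to the required target
iscsicli QLoginTarget iqn.1991-05.com.microsoft:storageserver-demo-target
```

After logging on, the disk appears in Disk Management just as it does with the GUI method.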

It’s worth noting that the Microsoft iSCSI Target is not easy to come by (unless you have access to a Windows Storage Server). It is possible to get hold of an evaluation copy of Storage Server though and Jose explains how to install this in another blog post. Alternatively, you can use a third party iSCSI software target (it must support persistent reservations) or, even better, use a hardware solution.

How long was that walk? And how many calories did I burn in the process?

Like many people these days, I live a pretty sedentary lifestyle. As my age edges towards 40 (and I said the same thing when I was closer to 30 too!), I need to do something about my weight and overall fitness levels. Not eating half of the food on sale in the Marks and Spencer food hall each lunchtime would be a good start but exercise needs to fit in there too.

I do work from home a lot these days and, as I live in the countryside, I try to get out for a walk most days (it doesn’t always work out) but I’m back on the wagon this week (those who know me well also know how often I fall off this particular wagon), using Weight Loss Resources to track my calorie intake and exercise levels, as well as to follow the trend over time (hopefully downwards… towards my goal weight).

Getting some idea of how far I’ve walked is okay if I walk along roads (Google Maps is pretty useful for that part), but what if I go cross country, or take a shortcut between two residential streets? As it happens, there are some websites that use the Google mapping API to allow the plotting of routes either by road or as the crow flies and, with a few clicks of the mouse I can tell exactly how far I walked this evening. The site I found most useful was the GMaps Pedometer, which tells me how far I walked (in miles or kilometres), how many calories I burned in the process, what the elevation was, and even lets me export the map points in GPX format (GeoDistance was similar, but less fully featured and it didn’t like it when I deliberately retraced my steps).
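Under the hood, these sites essentially sum the great-circle distance between each pair of points plotted on the map. As a rough sketch (the co-ordinates below are made up), the haversine formula gives the distance for a single leg of a route:

```shell
# Haversine: straight-line distance between two lat/long points, in km
awk 'BEGIN {
  pi = 3.141592653589793; r = 6371            # mean Earth radius in km
  lat1 = 52.0406; lon1 = -0.7594              # start of the leg (illustrative)
  lat2 = 52.0620; lon2 = -0.7800              # end of the leg (illustrative)
  dlat = (lat2 - lat1) * pi / 180
  dlon = (lon2 - lon1) * pi / 180
  a = sin(dlat/2)^2 + cos(lat1*pi/180) * cos(lat2*pi/180) * sin(dlon/2)^2
  printf "%.2f km\n", 2 * r * atan2(sqrt(a), sqrt(1-a))
}'
```

Sum that calculation over every leg of the plotted route and you have the total distance walked; calorie estimates are then typically derived from distance, pace and body weight.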

Of course, I could just wear a pedometer (sadly it seems that the GPS in my iPhone 3G is not accurate enough to trace where I’ve been) but these websites are very useful to know about… now, if only I could find one that uses Ordnance Survey’s OpenSpace API…

Windows 7 Starter Edition: let’s put it into perspective

There seem to be a number of sites linking to a prominent “news” site that claims Windows 7 will be “crippled” on netbooks but… WTF? Are they serious, or just posting link bait?

Back in February, Microsoft announced the various editions that Windows 7 will be available in, including Starter Edition, which will only be offered pre-installed by OEMs and is recommended for price-sensitive customers with small notebook PCs.

Basically, that sounds like a low-cost version for netbooks – and key features were listed as:

  • Broad application and device compatibility with up to 3 concurrent applications.
  • Safe, reliable, and supported.
  • Ability to join a Home Group.
  • Improved taskbar and JumpLists.

Now someone has stirred things up and headline-grabbing tech-“journalists” (I use the term lightly… these are not the Mary Jo Foleys, Ed Botts, or Paul Thurrotts who actually look at the technology when researching stories, but consumer-focused writers with a few press releases and 500 words to churn out for an editor who wants nothing more than a good headline) are saying how this will kill Windows 7 sales and open the netbook market to Linux. Yawn. Have I suddenly fallen foul of a cross-site scripting exploit and ended up reading Slashdot, or The Register? Nope. It seems I am still reading Computerworld – a site that seems to think words like Ed Bott or ZDNet turn my comment into spam!

It’s the three application limit that seems to have people up in arms but, according to Paul Thurrott in episode 103 of the Windows Weekly podcast and Ed Bott’s recent post on living with the limits of Windows 7 Starter Edition, the three application limit is not triggered by things like Explorer windows, Control Panel applets, system utilities or gadgets – it’s three applications, not three windows!

And, as I wrote when I bought one a few months back, netbooks are not for content creation but for ultra-mobile content consumption. You’re not going to be doing much on a 10″ screen with a tiny keyboard! Not unless you want to end up with a bad repetitive strain injury.

Mary-Jo Foley reminds us that Home Premium is the default consumer version of Windows 7 – not Starter Edition. Who says that netbook OEMs will not provide Home Premium for those who want it?

Meanwhile, Ed Bott made a very good point when he wrote “Is this a netbook or a notebook? If the answer is netbook, you might be pleasantly surprised at what this low-powered OS can actually accomplish” but he also notes that, if he tried to use it as a conventional notebook, he “would probably be incredibly frustrated with the limitations of Starter Edition.” And Laptop magazine wisely commented that any comment has limited value until we know the price difference between a netbook with Windows 7 Starter Edition and the same netbook with Windows 7 Home Premium, a view which Mary-Jo Foley also puts forward in her post.

To me, it’s simple: if you try to use a netbook running Starter Edition as a conventional notebook, you’ll probably be frustrated; if you use it as a netbook – for lightweight, mobile content consumption – it should serve you well.

If I were a betting man, I’d wager that most netbook users fall into the latter category.

VMware launches vSphere

Earlier today, VMware launched their latest virtualisation platform: vSphere. vSphere is what was once known as Virtual Infrastructure (VI) 4. For those who are unfamiliar with the name, the idea is that “virtual infrastructure” is more a description of what VMware’s products do than a product name, and the company is trying to put forward the message that, after more than 10 years of virtualisation (originally just running multiple operating systems on a workstation, then on a hypervisor, then moving to a virtual infrastructure), this fourth generation of products will transform the datacentre into a private “cloud” – in what they refer to as an evolutionary step but a revolutionary approach.

Most of the launch was presented by VMware President and CEO, Paul Maritz, who welcomed a succession of leaders from Cisco, Intel, and HP to the stage in what sometimes felt like a bizarre “who’s who in The Valley” networking event, but had to be ousted from the stage by his own CTO, Stephen Herrod, in order to keep the presentation on schedule! Later on, he was joined by Michael Dell before VMware Chairman, Joseph M. Tucci closed down the event.

For a man whose career included a 14-year spell at Microsoft, where he was regarded as the number 3 behind Bill Gates and Steve Ballmer, it seemed odd to me how Maritz referred to the Redmond giant several times during his presentation but never by name – always as a remark about relative capabilities. That would seem to indicate that, far from being a market leader comfortable with its portfolio, VMware is actually starting to regard Microsoft (and presumably Citrix too) as credible competition – not to be disregarded, but certainly not to be named in the launch of a new version of their premier product!

Maritz also seemed to take a dig at some other vendors, as he cited IBM as claiming to have invented the hypervisor [my colleagues from ICL may disagree] but not having realised the potential, and referring to some clouds as “the ultimate Californian hotels” where you can check in but not check out as they are highly proprietary computing systems. I can only imagine that he’s referring to offerings from Amazon and Google here, as Microsoft’s Windows Azure is built on the same infrastructure that runs in on-premise datacentres – Windows Server and the .NET Framework, extended for the cloud – in just the same way that vSphere is VMware’s cloud operating system – be that internal, external or a hybrid and elastic cloud which spans on-premise and service-based computing paradigms.

It’s all about the cloud

So, ignoring the politics, what is vSphere about? VMware view vSphere as a better way to build computing platforms, from small and medium businesses (SMBs) to the cloud. Maritz explained that “the cloud” is a useful shorthand for the important attributes of this platform, built from industry standard components (referring to cloud offerings from Amazon, Google, and “other” vendors – Microsoft then!), offering scalable, on-demand, flexible, well-managed, lights-out configurations. Whilst today’s datacentres are seen by VMware as pillars of complexity (albeit secure and well-understood complexity), VMware see the need for something evolutionary, something that remains secure and open – and they want to provide the bridge between datacentre and cloud: severing complex links, jacking up the software to separate it from the hardware, and sliding in a new level of software (vSphere), whereby the applications, middleware and operating system see an aggregated pool of hardware as a single giant computing resource. Not just single machines but an evolutionary roadmap from today’s datacentres. A platform. An ecosystem. One which offers compute, storage, network, security, availability and management capabilities, extendable by partners.

If you take away the marketing rhetoric, VMware’s vision of the future is not dissimilar to Microsoft’s. Both see devices becoming less significant as the focus shifts towards the end user. Both have a vision for cloud-centric services, backed up with on-premise computing where business requirements demand it. And both seem to believe the same analysts that say 70% of IT budgets today are spent on things that do not differentiate businesses from their competition.

Of course, VMware claims to be further ahead than their competition. That’s no surprise – but both VMware and Microsoft plan to bring their cloud offerings to fruition within the next six months (whilst VMware have announced product availability for vSphere, they haven’t said when their vCloud service provider partners will be ready; although Windows Azure’s availability will be an iterative approach and the initial offering will not include all of the eventual capabilities). And, whilst vSphere has some cool new features that further differentiate it from the Microsoft and Citrix virtualisation offerings, that particular technology gap is closing too.

Not just for the enterprise

Whilst VMware aim to revolutionise the “plumbing”, they also claim that the advanced features make their solutions applicable to the low end of the market, announcing an Essentials product to provide “always on IT in a box”, using a small vSphere configuration with just a few servers, priced from $166 per CPU (or $995 for three servers).

Clients – not desktops

For the last year or so, VMware have been pushing VDI as an approach and, in some environments, it seems to be gaining traction. Moving away from desktops and focusing on people rather than devices, VDI has become VMware View, part of the vClient initiative which takes the “desktop” into the cloud.

Some great new features

If Maritz’s clumsy Eagles references weren’t bad enough, Stephen Herrod’s section included a truly awful video with a gold disc delivered in Olympic relay style and “additional security” for the demo that featured “the presidential Blackberry”. It was truly cringe-worthy but Herrod did at least show off some of the technology as he talked through the efficiency, control, and choice marketing message:

  • Efficiency:
    • The ability to handle:
      • 2x the number of virtual processors per virtual machine (up from 4 to 8).
      • 2.5x more virtual NICs per virtual machine (up from 4 to 10).
      • 4x more memory per virtual machine (up from 64GB to 255GB).
      • 3x increase in network throughput (up from 9Gbps to 30Gbps).
      • 3x increase in the maximum recorded IOPS (up to over 300,000).
    • The ability to create vSphere clusters to build a giant computer with up to:
      • 32 hosts.
      • 2048 cores.
      • 1280 VMs.
      • 3 million IOPS.
      • 32TB RAM.
      • 16PB storage.
    • vStorage thin provisioning – saving up to 50% storage through data de-duplication.
    • Distributed power management – resulting in 50% power savings during VMmark testing and allowing servers to be turned on/off without affecting SLAs. Just moving from VI3 to vSphere 4 should be expected to result in a 20% saving.
  • Control:
    • Host profiles make the giant computer easy to extend and scale with desired configuration management functionality.
    • Fault tolerance for zero downtime and zero data loss on failover. A shadow VM is created as a replica running on a second host, re-executing every piece of IO to keep the two VMs in lockstep. If one fails, there is seamless cutover and another VM is spawned so that it continues to be protected.
    • VMsafe APIs provide new always-on security offerings including vShield zones to maintain compliance without diverting non-compliant machines to a different network but zoning them within vSphere so that they can continue to run efficiently within the shared infrastructure whilst security compliance issues are addressed.
  • Choice:
    • An extensive hardware compatibility list.
    • 4x the number of operating systems supported as “the leading competitor”.
    • Dynamic provisioning.
    • Storage VMotion – the ability to move a VM between storage arrays in the same way as VMotion moves VM between hosts.

Packaging and pricing

It took 1000 VMware engineers, in 14 offices across the globe, three million engineering hours, to add 150 new features in the development of VMware vSphere 4.

VMware claim that vSphere is “The best platform for building cloud infrastructures”. But that’s exactly it – a platform for building the infrastructure. Something has to run on top of that infrastructure too! Nevertheless VMware does look to have a great new product set and features like vStorage thin provisioning, VMSafe APIs, Storage VMotion and Fault Tolerance are big steps forward. On the other hand, vSphere is still very expensive – at a time when IT budgets are being squeezed.

VMware vSphere 4 will be available in a number of product editions (Essentials, Essentials Plus, Standard, Advanced, Enterprise and Enterprise Plus) with per-CPU pricing starting at $166 and rising to $3495, not including the cost of vCenter for management of the infrastructure ($1495 to $4995 per instance) and a mandatory support subscription.

A comparison chart for the various product features is also available.

General availability of vSphere 4 is expected during the second quarter of 2009.

Enabling Adobe Lightroom 2 integration with Photoshop CS3 and later

A couple of weeks back, my friend Jeremy Hicks was demonstrating Adobe Photoshop Lightroom to me. Whilst I’m still not convinced that Lightroom is the answer to all my digital image editing requirements, it is great for managing my digital workflow – and it has the advantage of being integrated with my pixel editor of choice: Adobe Photoshop CS3.

Unfortunately I found that, whilst Lightroom’s Photo menu included Edit In options for Adobe Photoshop CS3, some of the options that would be useful when merging multiple exposures – Merge to Panorama, Merge to HDR and Open as Layers – were all greyed out (unavailable). After a bit of Internet research, I found that I needed to be running an updated version of Photoshop CS3 (v10.0.1) to enable the integration with Lightroom. Some people have also suggested that Lightroom needs to be at least v2.1 (I’m running v2.3).

After the upgrade, the options to (in the words of Frederick Van Johnson, a former Senior Marketing Manager for Professional Photography at Adobe) “leverage the power of Photoshop CS3 to do some of the more complex and niche ‘heavy-lifting’ imaging tasks, while still providing seamless access to the powerful organizational features in Lightroom 2” were available.

Not surprisingly, Lightroom v2.3 (and presumably earlier versions too) is perfectly happy to work with later versions of Photoshop, such as CS4 (v11).