Monthly Archives: April 2009

Uncategorized

VMware launches vSphere

Earlier today, VMware launched their latest virtualisation platform: vSphere. vSphere is what was once known as Virtual Infrastructure (VI) 4 and, for those who are unfamiliar with the name change, the idea is that “virtual infrastructure” is more of a description of what VMware’s products do than a product name. The company is trying to put forward the message that, after more than 10 years of virtualisation (originally just running multiple operating systems on a workstation, then on a hypervisor, then moving to a virtual infrastructure), this fourth generation of products will transform the datacentre into a private “cloud” – in what they refer to as an evolutionary step but a revolutionary approach.
VMware vSphere

Most of the launch was presented by VMware President and CEO, Paul Maritz, who welcomed a succession of leaders from Cisco, Intel, and HP to the stage in what sometimes felt like a bizarre “who’s who in The Valley” networking event, but had to be ousted from the stage by his own CTO, Stephen Herrod, in order to keep the presentation on schedule! Later on, he was joined by Michael Dell, before VMware Chairman, Joseph M. Tucci, closed the event.

VMware President and CEO, Paul Maritz, speaking at the vSphere 4 launch

For a man whose career included a 14-year spell at Microsoft, where he was regarded as the number 3 behind Bill Gates and Steve Ballmer, it seemed odd to me how Maritz referred to the Redmond giant several times during his presentation but never by name – always as a remark about relative capabilities. That seemed to indicate that, far from being a market leader that is comfortable with its portfolio, VMware is actually starting to regard Microsoft (and presumably Citrix too) as credible competition – not to be disregarded, but certainly not to be named in the launch of a new version of their premier product!

Maritz also seemed to take a dig at some other vendors: he cited IBM as claiming to have invented the hypervisor [my colleagues from ICL may disagree] but not having realised its potential, and referred to some clouds as “the ultimate Californian hotels”, where you can check in but not check out, because they are highly proprietary computing systems. I can only imagine that he was referring to offerings from Amazon and Google here, as Microsoft’s Windows Azure is built on the same infrastructure that runs in on-premise datacentres – Windows Server and the .NET Framework, extended for the cloud – in just the same way that vSphere is VMware’s cloud operating system, be that an internal cloud, an external cloud, or a hybrid and elastic cloud which spans on-premise and service-based computing paradigms.

It’s all about the cloud

So, ignoring the politics, what is vSphere about? VMware view vSphere as a better way to build computing platforms, from small and medium businesses (SMBs) to the cloud. Maritz explained that “the cloud” is a useful shorthand for the important attributes of this platform, built from industry-standard components (referring to cloud offerings from Amazon, Google, and “other” vendors – Microsoft then!), offering scalable, on-demand, flexible, well-managed, lights-out configurations. Whilst today’s datacentres are seen by VMware as pillars of complexity (albeit secure and well-understood complexity), VMware see the need for something evolutionary – something that remains secure and open. They want to provide the bridge between datacentre and cloud: severing complex links, jacking up the software to separate it from the hardware, and sliding in a new level of software (vSphere), whereby the applications, middleware and operating system see an aggregated pool of hardware as a single giant computing resource. Not just single machines, but an evolutionary roadmap from today’s datacentres. A platform. An ecosystem. One which offers compute, storage, network, security, availability, and management capabilities, extendable by partners.

If you take away the marketing rhetoric, VMware’s vision of the future is not dissimilar to Microsoft’s. Both see devices becoming less significant as the focus shifts towards the end user. Both have a vision for cloud-centric services, backed up with on-premise computing where business requirements demand it. And both seem to believe the same analysts that say 70% of IT budgets today are spent on things that do not differentiate businesses from their competition.

Of course, VMware claims to be further ahead than their competition. That’s no surprise – but both VMware and Microsoft plan to bring their cloud offerings to fruition within the next six months (whilst VMware have announced product availability for vSphere, they haven’t said when their vCloud service provider partners will be ready; although Windows Azure’s availability will be an iterative approach and the initial offering will not include all of the eventual capabilities). And, whilst vSphere has some cool new features that further differentiate it from the Microsoft and Citrix virtualisation offerings, that particular technology gap is closing too.

Not just for the enterprise

Whilst VMware aim to revolutionise the “plumbing”, they also claim that the advanced features make their solutions applicable to the low end of the market, announcing an Essentials product to provide “always on IT in a box”, using a small vSphere configuration with just a few servers, priced from $166 per CPU (or $995 for three servers).

Clients – not desktops

For the last year or so, VMware have been pushing VDI as an approach and, in some environments, it seems to be gaining traction. Moving away from desktops and focusing on people rather than devices, VDI has become VMware View, part of the vClient initiative which takes the “desktop” into the cloud.

Some great new features

If Maritz’s clumsy Eagles references weren’t bad enough, Stephen Herrod’s section included a truly awful video with a gold disc delivered in Olympic relay style and “additional security” for the demo that featured “the presidential Blackberry”. It was truly cringe-worthy but Herrod did at least show off some of the technology as he talked through the efficiency, control, and choice marketing message:

  • Efficiency:
    • The ability to handle:
      • 2x the number of virtual processors per virtual machine (up from 4 to 8).
      • 2.5x more virtual NICs per virtual machine (up from 4 to 10).
      • 4x more memory per virtual machine (up from 64 GB to 255 GB).
      • 3x increase in network throughput (up from 9 Gb/s to 30 Gb/s).
      • 3x increase in the maximum recorded IOPS (up to over 300,000).
    • The ability to create vSphere clusters to build a giant computer with up to:
      • 32 hosts.
      • 2048 cores.
      • 1280 VMs.
      • 3 million IOPS.
      • 32 TB RAM.
      • 16 PB storage.
    • vStorage thin provisioning – saving up to 50% of storage by allocating disk capacity on demand rather than up front.
    • Distributed power management – resulting in 50% power savings during VMmark testing and allowing servers to be turned on/off without affecting SLAs. Just moving from VI3 to vSphere 4 should be expected to result in a 20% saving.
  • Control:
    • Host profiles make the giant computer easy to extend and scale with desired configuration management functionality.
    • Fault tolerance for zero downtime and zero data loss on failover. A shadow VM is created as a replica running on a second host, re-executing every piece of IO to keep the two VMs in lockstep. If one fails, there is seamless cutover and another VM is spawned so that it continues to be protected.
    • VMsafe APIs provide new always-on security offerings, including vShield Zones, which maintain compliance without diverting non-compliant machines to a different network: instead, machines are zoned within vSphere so that they can continue to run efficiently within the shared infrastructure whilst security compliance issues are addressed.
  • Choice:
    • An extensive hardware compatibility list.
    • 4x the number of supported operating systems compared with “the leading competitor”.
    • Dynamic provisioning.
    • Storage VMotion – the ability to move a VM between storage arrays, in the same way that VMotion moves VMs between hosts.
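The fault tolerance mechanism described in the list above is easier to picture with a toy model. This is purely an illustrative sketch under my own assumptions (nothing like VMware’s actual implementation): the primary VM logs each non-deterministic input, and a shadow VM on a second host replays the same log, so the two stay in lockstep and the shadow can take over seamlessly.

```python
# Illustrative sketch of the lockstep idea (not VMware's implementation):
# the primary logs every non-deterministic input and the shadow replays
# the same log on a second host, tracking the primary's state exactly.
class ToyVM:
    def __init__(self):
        self.state = 0

    def execute(self, event):
        self.state += event        # stand-in for executing an input/IO event

primary, shadow = ToyVM(), ToyVM()
replay_log = []
for io_event in [3, 5, 7]:         # inputs arriving at the primary
    primary.execute(io_event)
    replay_log.append(io_event)    # streamed to the shadow host
for io_event in replay_log:
    shadow.execute(io_event)       # deterministic re-execution in lockstep
```

Because the shadow’s state always matches the primary’s, a failure of the primary host allows an instant cutover with no data loss.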

Packaging and pricing

It took 1,000 VMware engineers, in 14 offices across the globe, three million engineering hours to add 150 new features in the development of VMware vSphere 4.

VMware claim that vSphere is “The best platform for building cloud infrastructures”. But that’s exactly it – a platform for building the infrastructure. Something has to run on top of that infrastructure too! Nevertheless VMware does look to have a great new product set and features like vStorage thin provisioning, VMSafe APIs, Storage VMotion and Fault Tolerance are big steps forward. On the other hand, vSphere is still very expensive – at a time when IT budgets are being squeezed.

VMware vSphere 4 will be available in a number of product editions (Essentials, Essentials Plus, Standard, Advanced, Enterprise and Enterprise Plus) with per-CPU pricing starting at $166 and rising to $3495, not including the cost of vCenter for management of the infrastructure ($1495 to $4995 per instance) and a mandatory support subscription.

A comparison chart for the various product features is also available.

General availability of vSphere 4 is expected during the second quarter of 2009.

Uncategorized

Enabling Adobe Lightroom 2 integration with Photoshop CS3 and later

A couple of weeks back, my friend Jeremy Hicks was demonstrating Adobe Photoshop Lightroom to me. Whilst I’m still not convinced that Lightroom is the answer to all my digital image editing requirements, it is great for managing my digital workflow – and it has the advantage of being integrated with my pixel editor of choice: Adobe Photoshop CS3.

Unfortunately I found that, whilst Lightroom’s Photo menu included Edit In options for Adobe Photoshop CS3, some of the options that would be useful when merging multiple exposures – Merge to Panorama, Merge to HDR and Open as Layers – were all greyed out (unavailable). After a bit of Internet research, I found that I needed to be running an updated version of Photoshop CS3 (v10.0.1) to enable the integration with Lightroom. Some people have also suggested that Lightroom needs to be at least v2.1 (I’m running v2.3).

After the upgrade, the options to (in the words of Frederick Van Johnson, a former Senior Marketing Manager for Professional Photography at Adobe) “leverage the power of Photoshop CS3 to do some of the more complex and niche ‘heavy-lifting’ imaging tasks, while still providing seamless access to the powerful organizational features in Lightroom 2″ were available.

Not surprisingly, Lightroom v2.3 (and presumably earlier versions too) is perfectly happy to work with later versions of Photoshop, such as CS4 (v11).

Technology

Microsoft Virtualization: the R2 wave

The fourth Microsoft Virtualisation User Group (MVUG) meeting took place last night and Microsoft’s Matt McSpirit presented a session on the R2 wave of virtualisation products. I’ve written previously about some of the things to expect in Windows Server 2008 R2 but Matt’s presentation was specifically related to virtualisation and there are some cool things to look forward to.

Hyper-V in Windows Server 2008 R2

At last night’s event, Matt asked the UK User Group what they saw as the main limitations in the original Hyper-V release and the four main ones were:

  • USB device support
  • Dynamic memory management (ballooning)
  • Live Migration
  • 1 VM per storage LUN

Hyper-V R2 does not address all of these (regardless of feedback, the product group is still unconvinced about the need for USB device support… and dynamic memory was pulled from the beta – it’s unclear whether it will make it back in before release) but live migration is in and Windows finally gets a clustered file system in the 2008 R2 release.

So, starting out with clustering – a few points to note:

  • For the easiest support path, look for cluster solutions on the Windows Server Catalog that have been validated by Microsoft’s Failover Cluster Configuration Program (FCCP).
  • FCCP solutions are recommended by Microsoft but are not strictly required for support, as long as all the components (i.e. server and SAN) are certified for Windows Server 2008. A failover clustering validation report will still be required though – FCCP provides another level of confidence.
  • When looking at cluster storage, fibre channel (FC) and iSCSI are the dominant SAN technologies. With 10Gbps Ethernet coming onstream, iSCSI looked ready to race ahead and has the advantage of using standard Ethernet hardware (which is why Dell bought EqualLogic and HP bought LeftHand Networks) but then Fibre Channel over Ethernet came onstream, which is potentially even faster (as outlined in a recent RunAs Radio podcast).

With a failover cluster, Hyper-V has always been able to offer high availability for unplanned outages – just as VMware do with their HA product (although Windows Server 2008 Enterprise or Datacenter Editions were required – Standard Edition does not include failover clustering).

For planned outages, quick migration offered the ability to pause a virtual machine and move it to another Hyper-V host but there was one significant downside of this. Because Microsoft didn’t have a clustered file system, each storage LUN could only be owned by one cluster node at a time (a “shared nothing” model). If several VMs were on the same LUN, all of them needed to be managed as a group so that they could be paused, the connectivity failed over, and then restarted, which slowed down transfer times and limited flexibility. The recommendation was for 1 LUN per VM and this doesn’t scale well with tens, hundreds, or thousands of virtual machines although it does offer one advantage as there is no contention for disk access. Third party clustered file system solutions are available for Windows (e.g. Sanbolic Melio FS) but, as Rakesh Malhotra explains on his blog, these products have their limitations too.

Windows Server 2008 R2 Hyper-V can now provide live migration for planned failovers – so Microsoft finally has an alternative to VMware VMotion (at no additional cost). This is made possible because the new cluster shared volumes (CSV) feature, with IO fault tolerance (dynamic IO), overcomes the limitations of the shared nothing model and allows up to 256 TB per LUN, running on NTFS with no need for third-party products. The VM is still stored on a shared storage volume and, at the time of failover, memory is scanned for dirty pages whilst the VM is still running on the source cluster node. Using an iterative process of scanning memory for dirty pages and transferring them to the target node (over a dedicated network link), the memory contents are copied until so few pages remain that the last few may be sent and control passed to the target node in a fraction of a second, with no discernible downtime (including ARP table updates to maintain network connectivity).
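The iterative pre-copy process described above can be sketched in a few lines of Python. This is a toy model under my own assumptions (the `dirty_fn` callback and `threshold` parameter are hypothetical illustrations, not Hyper-V’s real interfaces):

```python
def live_migrate(memory, dirty_fn, threshold=4, max_rounds=10):
    """Toy model of iterative pre-copy live migration: keep re-sending
    pages dirtied since the last round until few enough remain for a
    near-instant stop-and-copy handover."""
    target = {}
    dirty = set(memory)                  # first round: transfer every page
    for _ in range(max_rounds):
        for page in dirty:
            target[page] = memory[page]  # copy over the migration link
        dirty = dirty_fn()               # pages the running VM wrote meanwhile
        if len(dirty) <= threshold:
            break
    # brief pause: send the final few pages, then switch execution
    for page in dirty:
        target[page] = memory[page]
    return target

# hypothetical usage: two rounds of dirtying, then the VM settles down
memory = {page: f"data{page}" for page in range(100)}
dirty_rounds = iter([{1, 2, 3}, set()])
copied = live_migrate(memory, lambda: next(dirty_rounds))
```

The key property is that the VM keeps running on the source node throughout the copy rounds; only the final handful of pages is transferred during the pause.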

Allowing multiple cluster nodes to access a shared LUN is as simple as marking the LUN as a CSV in the Failover Clustering MMC snap-in. Each node has a consistent namespace for LUNs, so as many VMs as required may be stored on a CSV (although all nodes must use the same letter for the system drive – e.g. C:). Each CSV appears as an NTFS mount point, e.g. C:\ClusterStorage\Volume1, and even though the volume is only mounted on one node, distributed file access is co-ordinated through that owning node so that VMs on any node can perform direct IO. Dynamic IO ensures that, if the SAN (or Ethernet) connection fails, IO is re-routed accordingly and, if the owning node fails, volume ownership is redirected accordingly. CSV is based on two assumptions (that data read/write requests far outnumber metadata access/modification requests; and that concurrent multi-node cached access to files is not needed for files such as VHDs) and is optimised for Hyper-V.

At a technical level, CSVs:

  • Are implemented as a file system mini-filter driver, pinning files to prevent block allocation movement and tracking the logical-to-physical mapping information on a per-file basis, using this to perform direct reads/writes.
  • Enable all nodes to perform high performance direct reads/writes to all clustered storage and read/write IO performance to a volume is the same from any node.
  • Use SMB v2 connections for all namespace and file metadata operations (e.g. to create, open, delete or extend a file).
  • Need:
    • No special hardware requirements.
    • No special application requirements.
    • No file type restrictions.
    • No directory structure or depth limitations.
    • No special agents or additional installations.
    • No proprietary file system (using the well established NTFS).
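The metadata/data split in that list can be illustrated with a toy Python model. This is purely illustrative (the real thing is a kernel-mode mini-filter driver, and the class and method names here are my own invention): namespace operations are routed to the coordinator node over SMB, whilst block reads and writes go directly from each node to the shared LUN.

```python
# Toy model of the CSV split: metadata via the coordinator, data direct.
class CsvNode:
    def __init__(self, name, coordinator=None):
        self.name = name
        self.coordinator = coordinator or self  # coordinator owns the volume

    def create_file(self, volume, filename):
        # namespace/metadata operation: routed to the coordinator (SMB v2)
        volume.setdefault(filename, bytearray())
        return self.coordinator.name            # who handled the metadata op

    def write(self, volume, filename, data):
        # data operation: direct IO from this node to the clustered storage
        volume[filename].extend(data)
        return self.name                        # who performed the IO

lun = {}                        # stands in for C:\ClusterStorage\Volume1
node1 = CsvNode("node1")        # the coordinator node
node2 = CsvNode("node2", coordinator=node1)
node2.create_file(lun, "vm1.vhd")   # handled by node1
node2.write(lun, "vm1.vhd", b"blocks")  # written directly by node2
```

This is why the design works well for VHDs: the file is created once (one metadata operation) and then sees a long stream of direct block reads and writes.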

Live migration and clustered storage are major improvements but other new features for Hyper-V R2 include:

  • 32 logical processor (core) support, up from 16 at RTM and 24 with a hotfix (to support 6-core CPUs), so that Hyper-V will now support up to four 8-core CPUs (and I would expect this limit to increase as multi-core CPUs continue to develop).
  • Core parking to allow more intelligent use of processor cores – putting them into a low power suspend state if the workload allows (configurable via group policy).
  • The ability to hot add/remove storage, so that additional VHDs or pass-through disks may be assigned to running VMs if the guest OS supports the Hyper-V SCSI controller (which should cover most recent operating systems, but not Windows XP 32-bit or Windows 2000).
  • Second Level Address Translation (SLAT) to make use of new virtualisation technologies from Intel (Intel VT extended page tables) and AMD (AMD-V nested paging) – more details on these technologies can be found in Johan De Gelas’s hardware virtualisation article at AnandTech.
  • Boot from VHD – allowing virtual hard disks to be deployed to virtual or physical machines.
  • Network improvements (jumbo frames to allow larger Ethernet frames and TCP offload for on-NIC TCP/IP processing).

Hyper-V Server

So that covers the Hyper-V role in Windows Server 2008 R2, but what about its baby brother – Hyper-V Server 2008 R2? The good news is that Hyper-V Server 2008 R2 will have the same capabilities as Hyper-V in Windows Server 2008 R2 Enterprise Edition (previously it was based on Standard Edition), allowing access to up to 1 TB of memory, 32 logical cores, hot addition/removal of storage, and failover clustering (with cluster shared volumes and live migration). It’s also free, and requires no dedicated management product, although it does need to be managed using the RSAT tools for Windows Server 2008 R2 or Windows 7 (Microsoft’s advice is never to manage an up-level operating system from a down-level client).

With all that for free, why would you buy Windows Server 2008 R2 as a virtualisation host? The answer is that Hyper-V Server does not include licences for guest operating systems, as Windows Server 2008 Standard, Enterprise and Datacenter Editions do; it is intended for running non-Windows workloads in a heterogeneous datacentre standardised on Microsoft virtualisation technologies.

Management

The final piece of the puzzle is management – in the shape of System Center Virtual Machine Manager (SCVMM) 2008 R2.

There are a couple of caveats to note: the SCVMM 2008 R2 features mentioned are in the beta – more can be expected at final release; and, based on previous experience when Hyper-V RTMed, there may be some incompatibilities between the beta of SCVMM and the release candidate of Windows Server Hyper-V R2 (expected to ship soon).

SCVMM 2008 R2 is not a free upgrade – but most customers will have purchased it as part of the Server Management Suite Enterprise (SMSE) and so will benefit from the two years of software assurance included within the SMSE pricing model.

Wrap-up

That’s about it for the R2 wave of Microsoft Virtualization – for the datacentre at least – but there’s a lot of improvements in the upcoming release. Sure, there are things that are missing (memory ballooning may not be a good idea for server consolidation, but it will be needed for any kind of scalability with VDI – and using RDP as a workaround for USB device support doesn’t always cut it) and I’m sure there will be a lot of noise about how VMware can do more with vSphere but, as I’ve said previously, VMware costs more too – and I’d rather have most of the functionality at a much lower price point (unless one or more of those extra features will make a significant difference to the business case). Of course there are other factors too – like maturity in the market – but Hyper-V is not far off its first anniversary and, other than a couple of networking issues on guests (which were fixed), I’ve not heard anyone complaining about it.

I’ll write more about Windows 7 and Windows Server 2008 R2 virtualisation options (i.e. client and server) as soon as I can but, based on a page which briefly appeared on the Microsoft website, the release candidate is expected to ship next month and, after reading Paul Thurrott’s post about a forthcoming Windows 7 announcement, I have a theory (and that’s all it is right now) as to what a couple of the Windows 7 surprises may be…

Uncategorized

New features in Microsoft Exchange 2010

Earlier today, the Microsoft PR machine started the public build-up to the release of a new version of Microsoft’s messaging product: Microsoft Exchange 2010 (formerly known as Exchange 14). Exchange 2010 is the first Microsoft product built both as a software product (Exchange Server) and as a service offering (Microsoft Exchange Online) – allowing for hybrid on-premise and cloud-based software-plus-services.

Microsoft’s marketing of this product is broken into three areas and I’ll stick with these as I highlight some of the new features and improvements in Exchange 2010:

  • Protection and compliance.
  • Anywhere access.
  • Flexibility and reliability.

Protection and compliance

Exchange Server 2007 brought new protection and compliance features including Exchange Hosted Services for virus and spam protection, message journalling, managed folders and mobile device security through Outlook Web Access (OWA). Exchange 2010 takes a step forward with new e-mail archiving capabilities, more powerful retention policies, automated rights management and a multi-mailbox search user interface.

Looking specifically at e-mail archival, Exchange 2010 allows current and historical mailbox data to be managed along with personal folders (.PSTs). PSTs can be dragged and dropped into the archive, retention policies can be applied (at both folder and item level – by an individual at the desktop or managed centrally using transport rules) and folders can be set to archive automatically. No longer do personal archives need to be spread around the enterprise on file shares, local hard disks, or in third-party archival products – personal archives are stored on the server, addressing compliance and backup issues, without users needing to learn a new product.

There’s also a new legal hold feature which effectively marks existing mailbox data as read only but still allows a user to access their mailbox with any attempted modifications audited. Meanwhile, the role-based access control functionality in Exchange 2010 allows the creation of a compliance officer role with delegated access to a multi-mailbox search user interface, allowing human resources (HR) and legal access to data for e-discovery purposes, without IT administrator involvement – all within a familiar Outlook and OWA user interface.

In an increasingly connected society, organisations are looking to protect their intellectual property but the problem with many rights management solutions (including Windows Rights Management Services with Outlook 2007) is that they rely on users to mark information accordingly. Exchange 2010 includes automatic content-based protection with transport rules so that the hub transport server can apply RMS policies to e-mail and voicemail based on attributes, including scanning and indexing attachments. As well as the existing “do not forward” template there is a new “Internet confidential” template, to encrypt e-mail over the wire but still allow local saving and printing when it reaches the recipient.
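The content-based protection idea can be sketched as a simple rule table: scan the message text and, on the first match, return the RMS template to apply. The patterns and template names below are illustrative placeholders of my own, not the actual Exchange transport rule predicates:

```python
import re

# Hypothetical content rules: pattern -> RMS template to apply.
# A real hub transport server would also scan and index attachments.
RULES = [
    (re.compile(r"\bdo not forward\b", re.I), "Do Not Forward"),
    (re.compile(r"\bconfidential\b", re.I), "Internet Confidential"),
]

def apply_rms_policy(message_text):
    """Return the name of the RMS template a transport rule would apply,
    or None if no rule matches (toy illustration only)."""
    for pattern, template in RULES:
        if pattern.search(message_text):
            return template
    return None
```

The point of doing this on the hub transport server, rather than in the client, is that protection no longer depends on users remembering to mark information accordingly.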

Accidental leaks (for example, sending e-mail to the wrong person because Outlook’s nickname cache has identified the wrong recipient) can be mitigated with a new feature in Exchange 2010 known as MailTips, which will warn that e-mail is about to be sent to an external recipient.
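A minimal sketch of that external-recipient check follows, with a hypothetical internal domain list (the real MailTips service is rather more sophisticated and also covers out-of-office, large distributions, and so on):

```python
# Hypothetical organisation domain(s) - an assumption for illustration.
INTERNAL_DOMAINS = {"contoso.com"}

def mail_tips(recipients):
    """Toy version of the external-recipient MailTip: warn for any
    address whose domain is outside the organisation."""
    tips = []
    for addr in recipients:
        domain = addr.rsplit("@", 1)[-1].lower()
        if domain not in INTERNAL_DOMAINS:
            tips.append(f"{addr} is outside your organisation")
    return tips
```

The warning appears before the message is sent, which is exactly the point: catching the nickname-cache mistake while it is still cheap to fix.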

Security remains important to the Exchange Server infrastructure and Microsoft Forefront Security for Exchange Server offers multiple anti-virus scan engines as well as tight integration with the hub transport, mailbox and client access server roles. In addition there is the option of a hosted filtering service in the cloud via Forefront Online Security for Exchange.

Anywhere Access

Exchange Server 2007 improved access from multiple devices (PC, web and phone) with a single Inbox for e-mail and voicemail, as well as an improved calendar experience. Exchange 2010 is intended to provide simplified Inbox navigation, enhanced voicemail with text preview and the ability to share calendar information across organisational boundaries.

Most users feel overloaded with e-mail and Exchange 2010’s conversation view is intended to consolidate individual e-mail items into conversations, regardless of the folders that messages are in (similar to the approach that Google takes in Gmail). Filtering based on attributes, without re-sorting e-mail, allows for easier management – moving entire conversation threads, or even marking them to ignore the conversation going forward (e.g. in a mail storm caused by over-use of the Reply All feature).
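In spirit, the conversation view groups messages by a conversation identifier rather than by folder. A toy Python version (the field names are my own invention, not Exchange’s schema):

```python
from collections import defaultdict

def conversation_view(messages):
    """Group messages into conversation threads regardless of which
    folder each message lives in, then order each thread by arrival."""
    threads = defaultdict(list)
    for msg in messages:
        threads[msg["conversation_id"]].append(msg)
    for thread in threads.values():
        thread.sort(key=lambda m: m["received"])
    return dict(threads)

# hypothetical mailbox: one thread spans Sent Items and the Inbox
mailbox = [
    {"conversation_id": "budget", "folder": "Sent Items", "received": 1},
    {"conversation_id": "budget", "folder": "Inbox", "received": 2},
    {"conversation_id": "lunch", "folder": "Inbox", "received": 3},
]
threads = conversation_view(mailbox)
```

Acting on the thread as a unit is what makes “move the whole conversation” or “ignore this conversation” a single operation.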

MailTips can also be used to reduce unnecessary and undeliverable e-mail – flagging that a user does not have permission to send to a particular group, warning that they are sending information to a large distribution, that a recipient is out of the office, or that a contact group is moderated and message delivery may be delayed.

Building on the universal Inbox in Exchange Server 2007, Exchange 2010’s unified messaging functionality includes text transcription for voicemail – providing a preview in the message body. In addition, Outlook and OWA will also allow context-sensitive actions to be taken from the voicemail preview for faster e-mail triage – e.g. a phone number becomes an actionable link (right-click to call), where there is integration with other unified communications products.

In Exchange 2010, individual users can create customised voicemail menus, using a personalised auto-attendant to route calls accordingly and ensure that messages never go unanswered (just as we can manage e-mail with inbox rules).

From a client support perspective, Exchange 2010 is intended to support users on a variety of devices, whether desktop, web or mobile:

  • On the desktop, Outlook (2003 or later) and Entourage continue to be supported for Windows and Mac users respectively.
  • For the web, in Exchange 2010, OWA now offers full support for the major non-Microsoft browsers (Firefox and Safari).
  • Meanwhile, Exchange ActiveSync is becoming a de facto standard for mobile e-mail access, with support from a broad range of partners – including Windows Mobile, Nokia, and even the Apple iPhone.

Windows Mobile (6.5) users gain additional functionality from Exchange 2010 with auto-completion of e-mail addresses, using a server-side cache, along with conversation view and voicemail preview.

Not only does the universal inbox in Exchange 2010 include e-mail, voicemail and SMS text messages but now it integrates with OCS to display presence information and allow the initiation of instant message conversations from within OWA.

Exchange 2010 also allows calendars to be shared with individuals outside the organisation, which is often critical to working with partners. Access is controlled by policy, managed centrally or defined by individual users through Outlook or OWA.

Flexibility and reliability

Exchange Server 2007 brought: improved installation and deployment with new Exchange Server roles; high availability improvements with various forms of continuous replication; and management improvements with a simplified management console and new PowerShell support for task automation. Exchange 2010 builds on this to allow organisations to use both on-premise and hosted services, with a single high availability and disaster recovery platform, together with role-based administration and end-user self service functionality.

In what will be a massive shift for many organisations, Microsoft is encouraging Exchange 2010 customers to store mailbox data on inexpensive local disks and to replicate databases between servers rather than using SAN-based replication. The idea is that on-site (CCR) and off-site (SCR) replication technologies are combined into a single database availability group (DAG) framework, handling all clustering activities internally so there is no need to manage failover clustering separately with Windows Server. Up to 16 copies of each database may be provided and Exchange will switch between database copies automatically as required to maintain availability. In addition, clustered mailbox servers can also host other Exchange Server roles (client access or hub transport), so that full redundancy of Exchange services and data is available with just two servers.
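The automatic switching between database copies amounts to picking the best surviving copy. A toy sketch, assuming a simple health flag and a copy queue length as the measure of replication lag (these fields are illustrative, not Exchange’s actual selection criteria):

```python
def activate_best_copy(copies):
    """Toy model of DAG failover: activate the healthy passive copy
    with the shortest copy queue (i.e. the least replication lag)."""
    healthy = [c for c in copies if c["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy database copy available")
    return min(healthy, key=lambda c: c["copy_queue_length"])

# hypothetical DAG with three copies of one database; EX1 has failed
copies = [
    {"server": "EX1", "healthy": False, "copy_queue_length": 0},
    {"server": "EX2", "healthy": True, "copy_queue_length": 12},
    {"server": "EX3", "healthy": True, "copy_queue_length": 3},
]
new_active = activate_best_copy(copies)
```

With up to 16 copies per database, there is almost always a near-current copy available to activate, which is what lets the design tolerate disk, server, or datacentre failures.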

The advantage of this approach is simplified recoverability from a variety of failures – at disk, server, or datacentre level. It also allows for the limiting of end user disruption during mailbox moves and routine maintenance (important with larger mailboxes and longer move times) – users can remain online and connected whilst their mailbox is being moved.

Administration is simplified with the ability to delegate specific tasks to specific users based on a role-based access control system – for example compliance officer (for e-discovery), telephony specialist, human resources (e.g. update contact details), or service desk. This delegation also extends to end users and common tasks relating to distribution group management, message tracking and changes to contact information can be delegated to users through the new Exchange Control Panel (ECP) in Outlook and OWA, reducing support costs.

Storage options are also enhanced in Exchange 2010 as, whilst Exchange Server 2003 only supported SAN-based clusters for high availability and Exchange Server 2007 added direct attached (SAS) storage for clusters, Exchange 2010 includes support for direct attached (SATA) and JBOD (RAID-less) storage. Microsoft says this is possible due to a 70% reduction in input-output operations per second (IOPS) compared with Exchange Server 2007 (which itself was 70% down on 2003), meaning that more disks now reach the minimum performance requirements for Exchange Server. Because IO patterns are optimised to reduce bursts of disk writes and Exchange Server 2010 is more resilient to minor faults (it can automatically repair corrupted database pages using one of the database copies stored for HA purposes), desktop-class disks can be used. In addition, when at least three replicated database copies are in use, RAID can also be dispensed with (although I can’t see many organisations taking up this option, as RAID is a standard server feature offering minimal server downtime and is not exactly expensive either).
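Taking the two quoted 70% reductions at face value, they compound, which is worth spelling out:

```python
# Compounding the two quoted reductions in per-mailbox IOPS.
relative_iops_2003 = 1.0
relative_iops_2007 = relative_iops_2003 * (1 - 0.70)   # 70% down on 2003
relative_iops_2010 = relative_iops_2007 * (1 - 0.70)   # a further 70% down
# Exchange 2010 needs roughly 9% of the per-mailbox IOPS of Exchange 2003.
```

That order-of-magnitude drop is why slower SATA and desktop-class disks, which would never have met Exchange Server 2003’s IO requirements, become viable.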

According to Microsoft, Gartner has reported that 20% of all e-mail mailboxes will move to the cloud by 2012. In reality, there will be a mix between on-site and cloud-based services and Exchange 2010 is designed to allow on-premise, hosted, or hybrid deployment scenarios.

Conclusions and roadmap

For me, it seems that Exchange 2010 is not a major upgrade – just as 2003 was an incremental change built on 2000, 2010 builds on 2007 – but, nevertheless, the improvements are significant. In a few weeks' time, it seems that the "dogfood" Exchange Server 2007 system I use for work will be switched off and I will revert to a corporate solution based on Exchange Server 2003. If I had just a few of the features in Exchange 2010, then my day would be more productive and the e-mail overload with which I and many colleagues struggle could be addressed (Microsoft claims that 25% of an information worker's day is spent processing e-mail – and that would seem to match my personal experience). Exchange Server is now about far more than just e-mail – it's the messaging infrastructure at the heart of many enterprises' collaborative efforts and Exchange 2010 is shaping up to be a major step forward for end-user productivity.

So, when can we get this? Well, Exchange 2010 was announced earlier today (although the Exchange Server team leaked its own secret last night). The final release of Exchange Server 2010 is expected in the second half of 2009 (and it will only run on 64-bit versions of Windows Server 2008 and later) and Exchange Online Services will move to Exchange 2010 in due course. In the meantime, the beta is available for download now.

Technology

Interact 2009

I spent yesterday at Microsoft's Interact 2009 event, which was a fantastic opportunity to meet with representatives from the Exchange Server and Office Communications Server groups at Microsoft, as well as to network with MVPs, key customers and other people that Microsoft considers influential in the Unified Communications (UC) space. Through 7 hours of workshops, a variety of topics were covered (some live, some via video links), providing feedback to Microsoft on product direction as well as receiving guidance on implementing the technologies.

Those who know me of old (long before the days of blogging) will remember the youthful consultant who used to know a fair amount about Active Directory and Exchange Server. These days I'm more of a generalist (with less hair, slowly turning grey) but I still enjoy going back to my messaging roots and Interact allowed me to bring myself up to speed around the upcoming release of Exchange Server and the current release of Office Communications Server (OCS).

Today is the day when Microsoft will officially announce Exchange Server 2010 (formerly known just by its version number – Exchange 14), along with general availability of the beta and, time-permitting, I hope to write a few posts over the coming weeks with a general UC (Exchange and OCS) focus, starting out with an overview of the new features in Exchange Server 2010.

Finally, for this post, I thought I'd share some pictures from yesterday evening's event in Reading (which, along with the other pictures in this post, were supplied by Microsoft UK courtesy of Eileen Brown). I don't know what was planned for Redmond and Boston but, over here, one of the meeting rooms in building 3 on the Microsoft UK Campus was converted to a "traditional English pub". We had a bar serving warm beer (in the form of bottles of London Pride), which caused some confusion for at least one senior 'softie from "Corp" (there was chilled lager available too, as well as wine and a selection of soft drinks!), as well as a simulated fireplace, a darts board, various items of pub paraphernalia, picnic tables on a "terrace" outside and also some modern accompaniments – such as Xbox 360 kiosks, air hockey and table football – with a 1950s jukebox thrown in for good measure!

Uncategorized

The sun sets on Windows XP, Office 2003 and Windows Server 2003 SP1 – Vista SP1 and XP SP3 will soon be unblocked – Office 2007 SP2 to ship at the end of April – Office "14" is given a name (and will be available in both 32- and 64-bit versions)

Just in case you hadn’t noticed, today is the day that Windows XP and Office 2003 end their mainstream support phase and move onto extended support – it’s security hotfixes only from now on, unless you are prepared to pay. Security updates for Windows XP will continue to be issued via Windows Update until 8 April 2014 (ditto for Office 2003).

Whilst XP has enjoyed a longer period of mainstream support than would otherwise have been the case (due to the time it took for Microsoft to ship its successor – Windows Vista), many organisations have held back on upgrades due to the negative press that Vista has received (which may have been partially warranted at release but is not exactly valid in 2009). Regardless of the perception of Windows Vista, Windows Vista R2 [ahem] Windows 7 is receiving plenty of praise and a release candidate is widely expected within the next few weeks. Those considering an eventual move to Windows 7 could prepare themselves by application testing on the beta – or even by using Windows Vista as a limited pilot in preparation (the two operating systems are remarkably similar, for all but those applications that need to work at a very deep level – e.g. anti-virus and VPN software).

Meanwhile, reaction to Office 2007's "ribbon" user interface has been mixed; regardless of the many improvements in Office applications with the 2007 release, many users are still using the same basic word processing features they had in earlier versions (dare I say as far back as Word for Windows 2.0) and so organisations need further persuasion to upgrade – for some, the business case is only justified through integration with Office server products (such as Office SharePoint Server 2007 and Office Communications Server 2007 R2). That said, my personal experience of reverting to Office 2003 for a few weeks whilst my laptop was being repaired was not a happy one… it's amazing how those "little tweaks" become embedded in your way of working after a while.

As for Windows Server, those organisations still running Windows Server 2003 (including R2) with service pack 1 lose their support today – Windows Server 2003 with service pack 2 will continue to receive mainstream support until 13 July 2010 with extended support ending on 14 July 2015.

On the subject of service packs, now is probably a good time to remind people that the service pack blocking tools for Windows Vista SP1 and Windows XP SP3 will expire on 28 April 2009 and 19 May 2009 respectively, after which time the updates will be automatically delivered via Windows Update. As for Office updates, Office 2007 service pack 2 is due for release on 28 April 2009 (including support for ODF, PDF and XPS document formats).

Looking ahead to the next release of Microsoft Office, various websites are reporting that Office codenamed "14" has been named as… no surprise here… Office 2010. APC's David Flynn is citing delays that prevent a tandem launch with Windows 7 but I have no recollection of any announcements by Microsoft for either a joint launch or a 2009 release – and they have not even committed publicly to releasing Windows 7 this year (it's all pure speculation by journalists, bloggers and other pundits)… how can something slip if it's not even been formally announced? Meanwhile Ars Technica is concentrating on the availability of both 32-bit and 64-bit versions of Office 2010.

The halcyon days of wholesale PC refreshes every few years may now be just a distant memory but these changes signal the need for IT departments to seriously consider their options. With Office 2010 expected to include web applications based on a monthly subscription charge, increasingly feasible online services, and desktop virtualisation becoming increasingly viable (in various forms – be that full VDI, a managed virtual desktop running on a traditional PC, or a hybrid solution using something like MED-V), these are interesting times for those given the task of providing a corporate desktop.

Uncategorized

Looking for a power supply for your laptop?

A few weeks back, I saw James O’Neill demonstrating Windows Server 2008 R2, using multiple notebook computers for his demo.

Anyone who regularly travels with several laptops will know that they soon become heavy and one way to save weight is to reduce the number of power supplies used. Unfortunately, there seems to be very little standardisation in notebook computer power supplies, so this is not always possible (even with two models from the same OEM).

James has found a workaround though – he has a switchable universal laptop power adapter and a global travel cable set, which meant he could get away with a single power supply (and some judicious juggling between PCs) to reduce the overall amount of equipment to be transported.

On a related note, I recently had to buy a second power adapter for the Dell Inspiron 1525 that we gave to my parents-in-law late last year. I couldn't find the part on the Dell website but an "online chat" led me to a parts department, who would charge me £34.50 for one. By shopping around, I was able to get a genuine Dell power supply (PA12) from a marketplace seller on Amazon (also available on eBay – Essex Laptops), delivered next day for around £16 including shipping and a power cable (which, admittedly, was not a Dell part). Definitely worth shopping around!

Uncategorized

A guide to creating digital scans from 35mm film with a Nikon Coolscan 4000 ED

I started writing this post in March 2006, so why did it take more than three years to complete? Well, it was partly a lack of time – scanning is a time-consuming process and it’s only in the last few weeks that I think I’ve finally cracked the image-scanning process. In the meantime I’ve considered outsourcing the job of scanning my old negatives and slides but just can’t bring myself to use an overseas service for something so precious (anyhow, the service I’ve heard good things about – Scan Café – is not available in the UK). Anyway, here goes, my long-overdue guide to creating digital scans from 35mm film.

Nikon Super Coolscan 4000 EDA few years ago, I was advised that, rather than switch to digital photography, I should switch to transparency film and buy a decent scanner. I did exactly that, but it took me some time to save up for the scanner, by which time I had a mountain of film to scan. A year or so later, 6 megapixel digital SLRs had become affordable and I switched to digital capture, keeping my film body as a backup although it’s hardly seen the light of day because the digital format provides me with so much freedom.

I’d still like to scan those slides (especially as many of them are from my honeymoon) but it’s a time-consuming process and, up until now, the odd frame that I have scanned has left me unimpressed with the quality. Even though 4000PPI is a lot to ask of any device, I couldn’t believe that a Nikon Super Coolscan 4000 ED (which cost me just short of £1000 at the time of purchase) would offer poor quality scans and, after contacting the Nikon European Customer Support desk, I found that the scanner is actually excellent – it’s just that the user (i.e. me) didn’t really know what he was doing!

After Nikon had set me in the right direction, I googled a bit, read the NikonScan user manual, and learned lots. This post is a quick summary of how to get the best from a (Nikon) film scanner with Nikon’s NikonScan 4 software. Most of this information is lifted from the user manual but a short-ish article on the web should be quicker to read than a 150-page PDF.

In common with many other film scanners, the 4000 ED offers Digital ICE3 technology. Actually, this is three technologies from Applied Science Fiction that will increase scanning times significantly; however, they can also improve the quality of the resulting images:

  • Image correction and enhancement (ICE) is about removing dust and scratches – even clean negatives tend to show up dust when scanned at such high resolution, so I always use ICE on its fine setting (even though this has a slight effect on the overall sharpness of the image).
  • Restoration of Colour (ROC) is used to restore colour on old/faded images (I may try it on the roll of film I dropped in icy water whilst helihiking on a glacier!).
  • Grain Equalisation and Management (GEM) is used to remove the signs of grain on an image.

For most of the applications that I use, the default settings are fine and my initial scans used the defaults. That was the mistake I made with my image scanning and is the reason I’m writing this post! From my support call with Nikon, I learned that I don’t necessarily need to use all of the ICE3 technologies together – in fact, it was the use of ROC and GEM at their default levels (5 and 3 respectively) that was causing my scans to look so bad. I also chose to turn off the Unsharp Mask (it can always be applied later with a pixel editing application such as Adobe Photoshop if required) – similarly I would ignore changes to the curves, LCH and colour balance (although Nikon’s advice to reduce the blue channel by 0.5EV on my sample problem image made a huge difference).

Something else that’s worth noting is to zoom in to the required level before generating a preview scan, as zooming in after performing the preview will appear pixelated (and, without zooming, it is difficult to see the effects of the digital processing). It’s also worth ensuring that automatic exposure is selected within the software preferences. Note that a warning symbol is displayed if changes are made to the settings after the preview scan is performed.

Multisampling is another option that can increase scan times dramatically but which should also increase quality as the scanner makes multiple passes over the film. The idea is that it reduces electronic noise (the real values should average out, whereas the noise will be more random), rendering a more accurate image with smoother tonal changes. Personally, I’ve had great results using Noise Ninja as a Photoshop plugin instead – it’s faster and it works on my digital images too!

The boundary offset is a setting used with strip film to line up individual frames within the scanner.

The NikonScan application is able to scan images in either 8- or 14-bit colour, although 14-bit images are actually converted to 16-bit for editing purposes. I wrote about this in a separate post but basically each channel is recorded separately for each pixel, with 256 levels per channel on an 8-bit scan and 16384 levels per channel on a 14-bit scan (which actually requires 2 bytes per channel for each pixel), so for an RGB (red, green, blue) scan:

  • 1 byte (i.e. 8 bits) x 3 channels (red, green and blue) x 5959 x 3946 (pixel count) = 70542642 bytes (67.3MB).
  • 2 bytes (i.e. 16 bits) x 3 channels (red, green and blue) x 5959 x 3946 (pixel count) = 141085284 bytes (134.5MB).

For CMYK (cyan, magenta, yellow, black), this would increase to 4 channels, so file sizes will be one third larger. Of course, these figures assume an uncompressed file and do not take into account any overheads of the file type – Nikon Scan supports RAW (NEF) (read-only), JPEG (JFIF), JPEG (EXIF-compliant), TIFF, BMP (Windows) or PICT (Macintosh).
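The arithmetic above can be sketched as a quick calculation (a minimal Python sketch; the pixel dimensions are those of a 4000PPI scan from a 35mm frame on this scanner):

```python
# File sizes for an uncompressed scan, ignoring any file-format overheads.
WIDTH, HEIGHT = 5959, 3946  # pixels in a 4000PPI scan of a 24x36mm frame

def scan_size_bytes(bytes_per_channel, channels):
    """Raw image size: bytes per channel x channels x pixel count."""
    return bytes_per_channel * channels * WIDTH * HEIGHT

rgb_8bit = scan_size_bytes(1, 3)   # 8-bit RGB
rgb_14bit = scan_size_bytes(2, 3)  # 14-bit RGB (stored as 16 bits per channel)
cmyk_8bit = scan_size_bytes(1, 4)  # 8-bit CMYK: one third larger than RGB

print(rgb_8bit, round(rgb_8bit / 2**20, 1))    # 70542642 bytes, 67.3 (MB)
print(rgb_14bit, round(rgb_14bit / 2**20, 1))  # 141085284 bytes, 134.5 (MB)
```

The same function reproduces the 67.3MB and 134.5MB figures reported by NikonScan, confirming that the "overhead" is simply three full colour channels per pixel.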

When considering the required resolution for a scan, it’s worth knowing that pixels per inch (PPI) is not equal to dots per inch (DPI). In fact, the scan quality will depend on the output device:

  • Commercial (dye sublimation) printers use continuous halftone, measured in lines per inch (LPI). For this, the artwork PPI needs to be set at twice the LPI of the output device.
  • Inkjet printers use something called simulated halftone – 240DPI should be adequate quality for most prints. A 4000PPI scan from a 35mm negative (actually 24x36mm) on my scanner is 5959 x 3946 pixels. Printing at 240DPI will allow print sizes of around 24.8″ x 16.4″.
  • Windows PCs display monitor graphics at 96PPI; Macs display at 72PPI.
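The inkjet print-size figure above comes from simple division of pixel count by printer resolution, which can be sketched as:

```python
# Maximum print size at a given printer resolution: pixels / DPI = inches.
def print_size_inches(width_px, height_px, dpi):
    return (width_px / dpi, height_px / dpi)

# 4000PPI scan of a 35mm frame, printed on an inkjet at 240DPI
w, h = print_size_inches(5959, 3946, 240)
print(round(w, 1), round(h, 1))  # 24.8 16.4 (inches)
```

The same division shows why a 6 megapixel camera image (3008 x 2000 pixels) tops out at around 12.5" x 8.3" at the same 240DPI.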

It’s also important to understand the way in which a computer represents the colour in an image. This is all controlled with colour profiles, which I still find a little confusing but which, thankfully, NikonScan handles for me using the Nikon colour management system (CMS). Some points that are worth noting though:

  • Windows PCs use a gamma value of 2.2; Macs use 1.8.
  • Gamut is a term used to describe the range of colours that are displayed – a narrow gamut will be vivid with saturated colours, whereas a wide gamut may appear low contrast and flat.
  • Each monitor is generally provided with a monitor profile (or one can be created in software).
  • An RGB profile also needs to be set (e.g. Color Match RGB on a Mac, or Adobe RGB on Windows).
  • The CMYK profile can be left at the factory default setting if the output will be printed on a variety of printers, or a custom CMYK profile may be provided with a specific printer. Any other imaging applications used (e.g. Photoshop) should also be set to match the colour profile as it is not passed between applications.
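To illustrate why the platform gamma values in the list above matter, the same stored pixel value decodes to a different linear light intensity under each gamma – which is why an image prepared on one platform can look too dark or too washed out on the other. This is a simplified sketch (it uses a pure power law and ignores the piecewise sRGB curve):

```python
# Decode a stored 8-bit value to linear light under a display gamma.
def decode(value_8bit, gamma):
    return (value_8bit / 255) ** gamma

mid_grey = 128
windows = decode(mid_grey, 2.2)  # Windows (gamma 2.2): darker midtones
mac = decode(mid_grey, 1.8)      # classic Mac (gamma 1.8): lighter midtones
print(windows < mac)             # True - the same file renders darker on Windows
```

Colour management exists precisely to compensate for this kind of device-to-device difference.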

Further reading

The following links provide additional information that may help to produce good scans:

Uncategorized

How much data is really captured in a digital image?

A few weeks back, I dusted off my Nikon 4000 ED film scanner and scanned some film for some competition entries. I was pretty impressed with the results (once I’d worked out the best scan settings to use) but confused by the file sizes.

According to Digital Photography Review, my Nikon D70 has 6.0 million effective pixels from a total of 6.3 million sensor sites. At the largest setting, each image is 2000×3008 pixels (around 6016000 bytes or 5.7MB). There’s not an exact match between pixel count and image size (my raw files vary slightly in size but are each around 5.3MB) but we can work on a rule of thumb where each pixel accounts for around 1 byte of uncompressed image data.

With the scanner though, things were different: the scan size for a 35mm frame was 5959 x 3946 pixels (around 23514214 bytes or 22.4MB), but the scan sizes reported by my scanning software were 67.3MB for an 8-bit scan and 134.5MB for a 14-bit scan. I could see that a 14-bit scan would actually use 16 bits (2 bytes) per channel but why were the file sizes three times the size they would be for a digital camera sensor?

After a lengthy discussion with Nikon’s European Customer Support team, I found that, whereas the Bayer mask on the digital camera limits each pixel to one colour (red, green or blue) – and software may be used to interpolate more values if required – the scanner actually captures three colour values (red, green and blue) for each pixel (instead of measuring the light falling on photo sensor sites and using a mask for the various colours, it shines red, green and blue light through the film to measure the resulting values for each colour in turn).

On that basis 5959 x 3946 pixels x 3 channels = 70542642 bytes or 67.3MB in 8-bit mode (twice that in 14-bit mode) and the scanning software values suddenly make sense.
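The difference between the two capture methods can be sketched as a quick comparison (the Bayer figure assumes one 8-bit sample per photosite, as in the rule of thumb above):

```python
# Bytes of raw data: camera (Bayer mask, one colour sample per photosite)
# versus film scanner (three colour samples per pixel).
def camera_bytes(width, height):
    # One 8-bit sample per pixel; full colour is interpolated afterwards.
    return width * height

def scanner_bytes(width, height, bytes_per_channel=1):
    # Red, green and blue are each measured directly for every pixel.
    return width * height * 3 * bytes_per_channel

print(camera_bytes(3008, 2000))      # D70: 6016000 bytes (~5.7MB)
print(scanner_bytes(5959, 3946))     # 4000 ED, 8-bit: 70542642 bytes (67.3MB)
print(scanner_bytes(5959, 3946, 2))  # 14-bit (stored as 16-bit): twice that
```

The factor of three between the pixel count and the reported file size falls straight out of the three directly-measured channels.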

Motoring Technology

More on integrating an Apple iPhone 3G with Audi’s telephony and audio systems

A few days ago, it was my birthday. Whilst 37 is not a particularly significant age to celebrate (I prefer to think of it as the 16th anniversary of my 21st birthday), I did get a little present at the start of the month (hopefully it wasn’t an April fool’s joke) when my new company car was delivered. Bye bye Saab (I liked you at first but you soon showed yourself to be a Vauxhall Vectra in disguise… with aftersales service to match…) – this time I’ve gone down the German route and plumped for an Audi A4 Avant S-Line. I have to say that, even though it’s still early days, this could shape up to be one of the best cars I’ve ever driven (especially with the extra toys I’ve added to the spec) – mind you, I’ve always liked German cars and have bought a few Volkswagens over the years.

Don’t worry – I’m not going to start writing car reviews – but I did write something a few months ago about integrating an Apple iPhone 3G with Audi’s telephony and audio systems and I wanted to write a follow-up, now that I’ve had some opportunity to spend a bit more time with a suitably equipped car.

First up, telephony integration. This is simple, as long as the car has the Mobile Telephone Preparation Low option. No cradle is required as the mobile phone preparation provides Bluetooth connectivity. As I wrote in my earlier post, just pair the iPhone with the car using the code 1234 within 30 seconds of opening the car and inserting the key (i.e. activating the car’s systems). The handsfree device will be something like Audi UHV 0000, although the number will vary and, once paired, calls will ring the iPhone and the car simultaneously. The Bluetooth logo and signal strength are displayed on the Audi Multi Media Interface (MMI) display:

Audi telephone connection (MMI)Audi telephone connection (Driver Display)

My iPhone 3G is running software version 2.2.1 and I seem to have no difficulties accessing the phone’s number lists and directory (although voice activation/control is not available – the phonebook that this refers to is the voice tag system, not the directory accessed on the phone over Bluetooth):

Audi accessing iPhone phonebook (Driver Display)

One thing to note – the car can only act as a handsfree for one phone at a time (although it can pair with up to 4 devices). When I’m “on the clock”, I turn off the Bluetooth on my iPhone so that the Nokia 6021 I use for work can access the car systems.

If you’re still having trouble, Audi provides a Bluetooth FAQ as well as a PDF with details of supported handsets (which is now over a year old and so does not include the iPhone 3G, although it appears to work).

Because Apple has not provided Advanced Audio Distribution Profile (A2DP) functionality on the current iPhone 3G or the first-generation iPhone, to integrate my iPhone with the music system so that I can access the phone’s playlists, etc., I needed to specify the Audi Music Interface option and buy an AMI iPod cable for £29. I think there is a minimum requirement on the sound system for this too (mine is the Audi Concert system).

The AMI is in the glovebox (close enough for a Bluetooth signal for the phone to carry on working) and the cable will charge my phone at the same time. The only problem is that the iPhone complains that the AMI is not a supported accessory and wants to go into airplane mode. If I tell it not to, the AMI will usually find the iPhone and let me navigate the playlists, etc. but I have found it seems to work better if I start the iPod application on the iPhone before connecting:

This accessory is not made to work with iPhoneAccessory connected
Audi AMI access to iPhone playlists (MMI)Audi AMI access to iPhone playlists (Driver Display)

The good news is that the forthcoming iPhone 3.0 software is expected to include A2DP (and it should work with the iPhone 3G – but not the original iPhone), after which I should be able to stop using the cable (although I may just leave an old iPod semi-permanently connected to the car at that point).

[Update 12 December 2011: Even though iOS is now at v5.0.1, I've been unable to use A2DP. This worked in another Audi I drove recently so I assume the car needs a software update too.  This information from an AudiForums thread might be useful too:

"First, the difference between AMI and MMI, which threw me off, so hopefully someone else will find this helpful. This is for my 2011 A4... I don't know what other years/models it may apply to.

  • MMI (Multi-Media Interface) is just the screen/knob system that controls the radio/sat/cd/settings/etc.
  • AMI (Audi Music Interface) is the link between the MMI system and your iPod or other MP3 device. It is a port in the glove box that you can attach different cables to for different music devices."]