Force-ejecting a stuck CD or DVD on a Mac

Yesterday, I was trying to eject a CD from my MacBook but it wasn’t playing ball. There didn’t appear to be a hole to force-eject the disc, the media eject key wasn’t doing anything and there was no icon to drag to the trash so, according to Apple’s advice, I needed to reboot the computer.

Er… no… this is 2011 – reboots should be a last resort (even my Windows PC only gets rebooted once a month, to apply updates). Thankfully, MacRumors has a much more extensive list of solutions for force-ejecting stuck CDs/DVDs.

drutil tray eject did the trick for me, although why Apple can’t direct people to the command line in their own advice is anybody’s guess…
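For anyone else in the same situation, this is roughly what the Terminal session looks like (drutil ships with OS X and, on a machine with a single optical drive, no extra options should be needed):

# confirm that the system can see the optical drive
drutil list

# ask the drive to open its tray and eject the disc
drutil tray eject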

The hardest thing about choosing my new car? The colour!

A few weeks ago, I received the paperwork to replace my company car. My car scheme gives me the option of taking an allowance or leasing a car via the company and I’ve always chosen the latter option – there may be other ways to save some money but it’s probably not that far off the mark, it’s not my problem when things go wrong and it doesn’t leave me committed to payments on a car if I lose my job (not that I plan to… but you never know in the current economic climate).

I really like the Audi A4 Avant S-Line I drive at the moment so I considered getting another one – until I found that the same car with the same specification was going to cost me considerably more money (partly due to price increases and partly, I think, due to Lloyds TSB Autolease being sold to Lex). I also liked the idea of a Q5, but would have had to drop from the S-Line to an SE in order to stay within my budget, effectively placing Audi out of reach for me. So I looked at BMW and even test drove a 318d Sport Plus Touring (which is very tax-efficient due to its 120g/km CO2 emissions). Unfortunately, for all its many qualities, the 3 Series failed to inspire and its orange dashboard made me feel like I was being transported back to the 1980s.

With all of the favourites out of the running, I started thinking about other options and it seemed that the choices to suit my lifestyle came down to a mid-powered diesel estate car (station wagon for US and Australian readers), an MPV (I think the Americans call these minivans) or an SUV.

A few years ago, one of my friends suggested that men who drove MPVs had given up on life. Clearly he gave up before me (last time I saw him he had sold his Porsche 911 and the family car was a Ford S-Max) but that has stuck in my mind, particularly as I approach my 40th anniversary on this planet.  After a succession of estate cars (Passat, Saab 9-3, A4), it seemed like my time had come but when Mrs W. and I tested a Touran (with Audi off the table, Volkswagen was next in line) she didn’t really like it. Result – no MPV for me! We also test drove a Passat but, even though it’s a great car, the days of having to fill its cavernous boot with pushchairs and assorted baby/toddler paraphernalia are, thankfully, behind us and it would be  just a little too long with a 4-bike cycle carrier on the back.  Then I saw the Tiguan.

I thought the Tiguan, built on the Golf’s PQ35 platform, would be too small for my family but the high driving position means you sit up, rather than back, which means more leg room in the rear. Combined with rear seats that slide forward to give a choice between leg room and boot space (for camping trips, holidays, etc.), it seemed like it could work well for us (at 470 litres, the boot is larger than a Golf’s 350 and only marginally smaller than my A4 Avant’s 480, but a more practical shape).

Leasing the Tiguan will involve “topping up” my monthly allowance and so I looked at the Skoda Yeti as a less expensive alternative, except that it was missing some options that I find useful (like iPod integration). I also considered the Audi Q3 but there are none in the UK yet so I would have been ordering “blind” and the brochure indicates a body shape that leaves too little boot space. I was pretty sure that the Tiguan was the right choice and a couple of weeks ago I had one on a 72 hour test. We all loved it so I decided to order one.

Unfortunately it wasn’t quite that simple. Most of the options were straightforward (this is the configuration I went for; sadly there is no R-Line Tiguan at the moment) but I was stumped on the engine choice and the colour.

Engine first and, contrary to popular belief, SUVs do not have to be gas-guzzling monsters. I was tempted to go for the 2.0 TDI 140PS model with BlueMotion technology, but my Audi A4 has a 170PS  variant so I’d be looking at quite a drop in power (20% lower output and 10% less torque), combined with a 50% heavier car. If all my driving was on motorways that wouldn’t be too much of an issue but I live in the sticks and being able to overtake safely on rural roads is an important consideration.  I got in touch with Volkswagen and they told me that a local dealer had a 140PS version that I could test so I arranged to drive it, only to check the sticker next to the spare wheel and find that it was actually a 125kW version (i.e. 170PS).

A friend told me about an £89 “economy tuning chip box” that can be fitted to take the power from 140PS to 165PS and I have to admit I was tempted, but I didn’t really want to make unauthorised modifications to my company car (I figured that could get me into hot water). So, with no opportunity to drive the low-power diesel, I decided to play it safe and take the tax hit on the 170PS version – vowing to walk/cycle a little more often instead of driving… (had it been my own car, I would have gone for the 140PS and the box of tricks).

That left the colour. I didn’t want to pay for metallic or pearlescent paint but there are only two solid paint options on the Tiguan in the UK. Mrs W doesn’t like “Candy White” so that left “Deep Ocean Blue”,  for which Volkswagen didn’t have a swatch.  Brochures and websites are no good for colour matching (even the pictures in the brochures are computer generated these days) so I spent hours on the ‘net one evening last week searching for Volkswagen Deep Ocean Blue cars…

I found that Deep Ocean Blue has a colour code of LA5H but I couldn’t find any examples (except the same colour code, called Blue Lagoon, on a 2001 Jetta). After about 4 hours of searching I found a Deep Ocean Blue Touran for sale at an Audi dealership in Germany… and was not convinced.  With minutes to go before the end of the month, and fearing a manufacturer price increase, I decided to pay for metallic paint (Night Blue) and placed my order anyway.

There’s a 5-6 month wait for it to be built (Volkswagen seems to have particular delays on 2.0 TDI engines right now) but I’m looking forward to taking delivery in the spring… in fact, it should arrive just about in time for my birthday…

[Update 20 November 2011: I finally found a swatch for “Deep Ocean Blue” and it’s not the colour in the links above… it’s not too bad actually (the website colour is not far off) – probably best thought of as 1970s British Rail blue…]

Computer Weekly Social Media Awards 2011

This time last year I was pestering blog readers and Twitter followers to vote for markwilson.it in the Computer Weekly IT Blog awards and I was surprised (and absolutely stoked) to win the Individual IT Professional (Male) category.  This time around I haven’t entered as an individual but I do have a favour to ask…

As part of my day job last year, I launched Fujitsu’s blog platform for the UK and Ireland. Although I handed the platform over to our marketing teams following incubation, the CTO Blog is still my baby and I edit most of the content (although I do try to ensure it’s written by others).

One year on, I’m pleased to say that our CTO Blog has been shortlisted for what is now known as the Computer Weekly Social Media Awards and I’d like to ask for your support again:

  1. If you don’t currently read the blog, please check it out.
  2. And if you like what you see, please vote for us.

It’d be pretty cool to win an award again, and a great finish to the year for me at work…

Office 365 password resets… and disabling password expiry

My Office 365 account password expired today and, somewhere in the midst of the password reset I managed to lock myself out.  As I only have one mailbox on the account (i.e. I am the administrator), that’s a bit of a problem…

I tried creating a service request to reset my password but I’m not sure it worked – I had no call-back and when I checked later in the Administrator control panel, there were no requests listed; however Dhaval Brahmbhatt (@DhavalBrahmbhat) gave me some UK phone numbers to try (0203 450 6455 or 0800 032 6417).

Using phone support I was able to log a password reset request, once the Technical Support Engineer had confirmed my details.  Because there was no phone number shown on my records, he had to email me so that I could respond with the details. Bearing in mind that I was locked out of my account, this could have been a problem but thankfully Outlook was still connected to Office 365 from my Mac.

After 26 minutes on the phone (at great expense to Microsoft, I guess), I finally had a temporary password to reset my account and then log in as normal.

Goodness knows how I’d have managed if I hadn’t been able to receive an email on the account. Although the contact preferences on my Office 365 profile showed a phone number, there was no number in the information for my mailbox… so, lesson number 1: make sure you have a phone number in your mailbox properties (lesson 2 might be to have password resets sent to an alternative mailbox, but that seems daft as it’s also where other announcements will end up…).

I’ve decided that I’ll reset my password when I feel like it, rather than when the system says so. Making this change involves some PowerShell:

  • First up, install the Office 365 cmdlets (intended for enterprises, not all of them will work on small business accounts). There are two components to install: the Microsoft Online Services Sign-In Assistant; and the Microsoft Online Services Module for Windows PowerShell.
  • Next, connect PowerShell to Office 365, either by opening the Microsoft Online Services Module for PowerShell or by opening a normal PowerShell session and typing Import-Module MSOnline.
  • Authenticate the session by typing Connect-MsolService (you’ll be prompted for your administrator credentials).

(An older method from Office 365 beta can be found in my previous post on changing the primary email address for Office 365 users – I haven’t tested it recently, but I see no reason why the standard Exchange cmdlets wouldn’t still work on Office 365)

  • Finally, disable password expiration with the following command (replacing MicrosoftOnlineServicesID with the appropriate username):
    Set-MsolUser -UserPrincipalName MicrosoftOnlineServicesID -PasswordNeverExpires $true
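Putting it all together, the whole session looks something like this (the user principal name below is a placeholder – substitute your own Microsoft Online Services ID – and the final command simply confirms that the change took effect):

# load the Office 365 cmdlets and sign in (a credentials prompt will appear)
Import-Module MSOnline
Connect-MsolService

# stop the password from expiring for a single user (placeholder UPN)
Set-MsolUser -UserPrincipalName user@example.onmicrosoft.com -PasswordNeverExpires $true

# check the setting
Get-MsolUser -UserPrincipalName user@example.onmicrosoft.com | Select-Object DisplayName,PasswordNeverExpires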

Adding extra social sharing services to WordPress with JetPack (ShareDaddy)

Last night, as part of the rebuild of this site, I reinstated the social sharing links for each post. On the old site they were implemented as bespoke code using each social network’s recommended approach (e.g. Twitter’s or Facebook’s official button code) but presentation became problematic, with each button having a slightly different format and needing some CSS trickery to line up properly.

I looked into a variety of plugins but they all had issues – either with formatting or functionality – until I stumbled across a reference to WordPress.com’s social sharing capabilities.  If only I could have that functionality on a self-hosted (WordPress.org) site…

…As it happens, I can – WordPress.com’s social sharing is based on the ShareDaddy plugin, which is part of a collection called JetPack. ShareDaddy is also available as a freestanding plugin but, now that I have JetPack installed, I’m finding some of the other functionality it gives me useful (besides, it’s not possible to activate the standalone ShareDaddy while JetPack is installed).

I need to make some changes (like working out how to hack the code and turn off the count next to my Tweet/Like/+1 buttons – it’s embarrassing when the number is small!) but I’m happy enough with the result for now.  One thing I did need to do though was to add some services that are not yet in the JetPack version of the plugin – one of the major advantages of ShareDaddy is how simple this is, as sketched below.
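For anyone wanting to do the same, custom services are added from the Settings/Sharing page in WordPress administration – each one just needs a name, a sharing URL and an icon, with tokens standing in for the post details. A hypothetical example (the service name and URLs below are made up):

Service name: Example
Sharing URL: http://example.com/share?url=%post_url%&title=%post_title%
Icon URL: http://example.com/images/icon16.png

When a reader clicks the link, %post_url% and %post_title% are replaced with the post’s address and title.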

The 2011 Computer Weekly Social Media Awards

Just over a year ago, we launched the Fujitsu UK and Ireland CTO Blog – written by our Chief Innovation and Technology Officer, David Smith. It’s always been our intention to draw on a combination of external go-to-market and internal IT capability knowledge to produce content that translates IT trends into potential business value but one thing I’m particularly proud of is that we do not use external writers – what you read here is written by David or by one of his team.

Now I’m pleased to say that we’ve been shortlisted for the 2011 Computer Weekly Social Media Awards so I’d like to ask for your support.  If you appreciate what we’re doing, please vote for us (either by clicking on the link or by scanning the QR code on this post).

Whilst I’m sure that there will be many people supporting us, I’m equally convinced that there are some things we could do better. With that in mind, if you have feedback that might help us provide better insights and add more value through this blog, then please do leave a comment – we really would like to know what you think.

Thank you for all of your support.

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog.]

Souping up SyncToy

I used to back up my work PC to a set of Virtual Hard Disk (.VHD) files until one day I needed to recover from a failure, and I found that the hard drive encryption software we use prevented me from running a restore. That forced me to find another solution and one of my ReadyNAS devices (sadly not the one that recently suffered two disk failures on the same RAID 1 volume, taking with it a big chunk of my data) is now dedicated to backing up my work PC, with a regular file copy taking place.

I have a drive mapped to a share on the NAS and the command line version of Microsoft’s SyncToy tool (synctoycmd.exe) is set to run as a scheduled task every evening at 10pm. Then, at 11pm, the NAS powers down until 8am the next day. The idea is that, as long as my PC is connected to my home network, it backs up all of the important files, at a time by which I should have stopped working.

Unfortunately I’m not convinced that it’s working as it should be – just because the Windows 7 Task Scheduler tells me that the task completed doesn’t mean that SyncToy ran successfully (incidentally, if you are having problems with SyncToy on Windows 7, this thread might help).  I was googling for a solution and came across eXDee’s batch files (sometimes the old ways are the best) to check for network connectivity, presence of the appropriate volume and then run synctoycmd.exe, recording a log file on the way. Bingo.

So, here are my versions (only minor updates from eXDee’s originals), called each night from Task Scheduler; a simple check of the lastsync.log file should tell me whether the backup worked or not.

Incidentally, don’t be fooled (as I was) by the synctoycmd.exe output that says it saved time by not copying any files. That’s the output from the preview run and there is a long period after this during which there are no status updates whilst the actual file copies take place.

synctoy.bat

This is the control file, to be called from Task Scheduler or run manually from the command line:
@echo off
title SyncToy run in progress…
echo Attempting file sync. Please wait…
REM Run the main script and capture its output in a log file
call sync.bat >lastsync.log
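(For reference, the nightly run can also be registered from the command line – something like this, assuming the scripts live in C:\Scripts:)

schtasks /create /tn "Nightly SyncToy backup" /tr "C:\Scripts\synctoy.bat" /sc daily /st 22:00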

sync.bat

This is the file that checks for the presence of my NAS and for a mapped drive before it backs up my data. You’ll need to substitute your own IP address. I’m particularly impressed by eXDee’s code to look for a TTL rather than a ping success/failure (smart move – ping can report success even when the reply is a “destination unreachable” message from another host). Note that the script doesn’t map a drive if the connection is not there, although that is a possible enhancement:
@echo off
echo SyncToy Log starting at
time /T
date /T
echo ##############################################
echo Checking connection to NAS…
echo ##############################################
REM Look for "TTL=" in the ping output rather than relying on the exit
REM code, which can indicate success even for unreachable replies
PING -n 2 -w 10 192.168.1.14 | find "TTL=" && goto NAS
goto PINGFAIL

:NAS
echo ##############################################
echo NAS is online. Checking for share…
REM The NAS share is mapped as drive F: on this PC
if exist "F:\Synced with Company PC\" goto SYNC
goto NASFAIL

:SYNC
echo ##############################################
echo Drive is mapped. Begin syncing files…
echo ##############################################
cd "C:\Program Files\SyncToy 2.1\"
REM -R runs all active folder pairs
SyncToyCmd.exe -R
if %ERRORLEVEL% == 0 goto SUCCESS
goto SYNCFAIL

:PINGFAIL
echo ##############################################
echo NAS not found. Exiting
goto END

:NASFAIL
echo ##############################################
echo Share not found. Exiting
goto END

:SUCCESS
echo ##############################################
echo Synctoy finished successfully. Exiting
goto END

:SYNCFAIL
echo ##############################################
echo Synctoy Failed. Exiting
goto END

:END
echo ##############################################
echo Synctoy Log ending at
time /T
date /T

lastsync.log

An example of a run (the failures were down to file access, rather than any issue with the scripts):

SyncToy Log starting at
21:00
08/11/2011
##############################################
Checking connection to NAS…
##############################################
Reply from 192.168.1.14: bytes=32 time=3ms TTL=64
Reply from 192.168.1.14: bytes=32 time=39ms TTL=64
##############################################
NAS is online. Checking for share…
##############################################
Drive is mapped. Begin syncing files…
##############################################
Preview of Work Folder Backup (C:\Users\markw\Documents\Work\, F:\Synced with company PC\Work\) in time 00:03:08:253.
SyncToy action was ‘Echo’
Found 2 actions to perform.
Found 47,158 files that did not require action.
Analyzed 250.5 files per second.
Avoided copying 135,013,767,205 bytes in 47,158 files.
Saved approximately 03:00:27:00 by not copying any files.

SyncToy run of Work Folder Backup (C:\Users\markw\Documents\Work\, F:\Synced with company PC\Work\) completed at 08/11/2011 21:03:27.
SyncToy action was ‘Echo’.
SyncToy options were:
Active for run all
All files included
No files excluded
Do not check file contents
Include read-only files
Include hidden files
Include system files
Backup older files (send to Recycle Bin)
All subfolders included
SyncToy run took 00:00:00:610.
Copied 5,932,607,488 bytes in 2 files in 00:00:00:610.
Bytes per second 9,725,586,045.9, files per second 3.3.
Avoided copying 135,013,767,205 bytes in 47,158 files that did not require action.
Saved approximately 00:00:13:882 by not copying all files.
Warning: 4 failures occured.
You can retry by selecting “Run” again or select “Preview” to see
the operations that remain to be performed.

The Sync operation completed successfully on folder pair ‘Work Folder Backup’ but some files were skipped. Please look at the logs for more details.
##############################################
Synctoy Failed. Exiting
##############################################
Synctoy Log ending at
21:03
08/11/2011

More on NoSQL, Hadoop and Microsoft’s entry to the world of big data

Yesterday, my article on Microsoft’s forays into the world of big data went up on Cloud Pro. It’s been fun learning a bit about the subject (far more than is in that article – because big data is a big theme in my work at the moment) and I wanted to share some more info that didn’t fit into my allotted 1000 words.

Microsoft Fellow Dr David DeWitt gave an excellent keynote on Day 3 of the SQL PASS 2011 summit last month and it’s a great overview of how Hadoop works. Of course, he has a bias towards RDBMS systems but the video is well worth watching for its introduction to NoSQL, the differences between key/value stores and Hadoop-type systems, and the description of the Hadoop components and how they fit together (skip the first 18 minutes and, if the stream doesn’t work, try the download – the deck is available too). Grant Fritchey and Jen McCown have written some great notes to go with Dr DeWitt’s keynote too.  For more about when you might use Hadoop, Jeremiah Peschka has a good post.

Microsoft’s SQOOP implementation is not the first – Cloudera have been integrating SQL and Hadoop for a couple of years now. Meanwhile, Buck Woody has a great overview of Microsoft’s efforts in the big data space.

I also mentioned Microsoft StreamInsight (formerly code-named “Austin”) in the post (the Complex Event Processing capability inside SQL Server 2008 R2) and Microsoft’s StreamInsight Team has posted what they call “the basics” of event processing. It seems to require coding, but is probably useful to anyone who is getting started with this stuff. For those of us who are a little less code-oriented, Andrew Fryer’s overview of StreamInsight (together with a more general post on CEP) is worth a read, together with Simon Munro’s post on where StreamInsight fits in.

Shortly after I sent my article to Cloud Pro’s Editor, I saw Mike Walsh’s “Microsoft Loves Your Big Data” post. I like this because it cuts through the press announcements and talks about what is really going on: interoperability; and becoming a player themselves. Critically:

“They aren’t copying, or borrowing or trying to redo… they are embracing”

And that is what I really think makes a refreshing change.

Handy to know about: fuel cover emergency release on an Audi A4

A couple of days ago, my wife called to say that the low fuel warning light on my car had come on as she set out to take the kids swimming (a 25 mile round trip). “No worries”, I said, “you’ve got enough to get home – I’ll fill it up later”. Fast forward to today, when I drove to the filling station only to find that the cover on the fuel filler cap (controlled by the central locking) wouldn’t open.  Thankfully, I was close to home, so I went back (fuel range now showing as 5 miles!) and called the lease company’s breakdown service, who said I might have to wait up to 90 minutes for a technician. Not great, but acceptable – and at least I was home.

A few minutes later I got a call from Volkswagen/Audi Assistance and 15 minutes after that the technician was on site (the RAC provide the Volkswagen/Audi Assistance service – but with dedicated technicians, so a different queue).

I explained the problem and he tried (and failed) to open the fuel cover the same way that I did… then he popped open the boot, removed a cover and pulled on a wire – which promptly opened the offending fuel door.  Result! If only I’d known about it at the petrol station an hour earlier. (For reference, the car is a 2009 Audi A4 Avant – the B8 model – but I wouldn’t be surprised if the A5 has a similar mechanism.)

So full marks to VW/Audi Assistance – both for the rapid response and for following me to the filling station in case I ran out of diesel on the way.

And, for anyone else with a fuel cover that’s linked to the central locking on the car, it might be worth checking if there is an emergency release…

SQL Server and Hadoop – unlikely bedfellows but a powerful combination

Big Data is hard to avoid – what does Microsoft’s embrace of Hadoop mean for IT Managers?

There are two words that seem particularly difficult to avoid at the moment: big data. Infrastructure guys instinctively shy away from data but such is its prevalence that big data is much more than just the latest IT buzzword – it’s becoming a major theme in our industry right now.

But what does “big data” actually mean? It’s one of those phrases that, like “cloud computing” before it, is being “adopted” by vendors to mean whatever they want it to.

The McKinsey Global Institute describes big data as “the next frontier for innovation, competition and productivity” but, put simply, it’s about analysing masses of unstructured (or semi-structured) data which, until recently, was considered too expensive to do anything with.

That data comes from a variety of sources including sensors, social networks and digital media and it includes text, audio, video, click-streams, log files and more. Cynics who scoff at the description of “big” data (what’s next, “huge” data?) miss the point that it’s not just about the volume of the data (typically many petabytes) but also the variety and frequency of that data. Some even refer to it as “nano data” because what we’re actually looking at is massive sets of very small data.

Processing big data typically involves distributed computer systems and one project that has come to the fore is Apache Hadoop – a framework for development of open-source software for reliable, scalable distributed computing.

Over the last few weeks though, there have been some significant announcements from established IT players, not all of whom are known for embracing open source technology. This indicates a growing acceptance of big data solutions in general, and specifically of solutions that include both open- and closed-source elements.

When Microsoft released a SQL Server-Hadoop (SQOOP) Connector, there were questions about what this would mean for CIOs and IT Managers who may previously have viewed technologies like Hadoop as a little esoteric.

The key to understanding what this means is understanding the two main types of data: structured and unstructured. Structured data tends to be stored in a relational database management system (RDBMS) – for example Microsoft SQL Server, IBM DB2, Oracle 11g or MySQL.

By structuring the data with a schema, tables, keys and all manner of relationships, it’s possible to run queries (with a language like SQL) to analyse the data, and techniques have developed over the years to optimise those queries. By contrast, unstructured data has no schema (at least not a formal one) and may be as simple as a set of files.  Structured data offers maturity, stability and efficiency but unstructured data offers flexibility.

Secondly, there needs to be an understanding of the term “NoSQL”.  Commonly misinterpreted as an instruction (no to SQL), it really means not only SQL – i.e. there are some types of data that are not worth storing in an RDBMS.  Rather than following the database model of extract, transform and load (ETL), with a NoSQL system the data arrives and the application knows how to interpret the data, providing a faster time to insight from data acquisition.

Just as there are two main types of data, there are two main types of NoSQL system: key/value stores (like MongoDB or Windows Azure Table Storage) can be thought of as NoSQL OLTP; Hadoop is more like NoSQL data warehousing and is particularly suited to storing and analysing massive data sets.

One of the key elements in understanding Hadoop is how the various Hadoop components work together. There’s a degree of complexity, so perhaps it’s best summarised by saying that the Hadoop stack consists of a highly distributed, fault-tolerant file system (HDFS) and the MapReduce framework for writing and executing distributed, fault-tolerant algorithms. Built on top of that are query languages (like Hive and Pig) and then we have the layer where Microsoft’s SQOOP connector sits, connecting the two worlds of structured and unstructured data.
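To make that concrete, a Hive query reads almost like SQL – this entirely hypothetical example finds the ten most-visited pages in a day of click-stream data, and Hive translates it into MapReduce jobs behind the scenes:

-- top ten pages by hits for one day of web logs
SELECT page, COUNT(*) AS hits
FROM clickstream
WHERE log_date = '2011-11-01'
GROUP BY page
ORDER BY hits DESC
LIMIT 10;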

The trouble is that SQOOP is just a bridge – and not a particularly efficient one either: working on SQL data in the unstructured world involves subdividing the SQL database so that MapReduce can work on it correctly.
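To give a feel for what that bridge looks like in practice, here’s a sketch of a Sqoop import that pulls a SQL Server table into HDFS (the server, database and table names are invented, and the --num-mappers option controls how many parallel tasks the table is split across):

sqoop import --connect "jdbc:sqlserver://dbserver:1433;databaseName=sales" --username hadoop -P --table orders --target-dir /data/sales/orders --num-mappers 4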

Because most enterprises have both structured and unstructured data, we really need tools that allow us to analyse and manage data in both environments – ideally without having to go back and forth. That’s why there are so many vendors jumping on the big data bandwagon – but a SQOOP connector is not the only work Microsoft is doing in the big data space.

In our increasingly cloudy world, infrastructure and platforms are rapidly becoming commoditised. We need to focus on the software that allows us to derive business value from data. Consider that Microsoft is only one vendor, then think about what Oracle, IBM, Fujitsu and others are doing. If you weren’t convinced before, maybe HP’s Autonomy purchase is starting to make sense now?

Looking specifically at Microsoft’s developments in the big data world, it therefore makes sense to see the company get closer to Hadoop. The world has spoken and the de facto solution for analysing large data sets seems to be HDFS/MapReduce/Hive (or similar).

Maybe Hadoop’s success comes down to HDFS and MapReduce being based on work from Google, whilst Hive and Pig are supported by Facebook and Yahoo respectively (i.e. they all come from established Internet businesses).  But, by embracing Hadoop (together with porting its tools to competitive platforms), Microsoft is better placed to support the entire enterprise with both its structured and unstructured data needs.

[This post was originally written as an article for Cloud Pro.]