Photographing the night sky

Last Saturday saw a “supermoon” – where the Earth’s moon is both full and at perigee-syzygy, so it appears larger than usual. I went out to try to shoot some images of the moon over the river, but it was too high in the sky for the effect I was really after and it slipped behind some clouds. Later on, the sky cleared and I got out my longest lens to take a picture of the moon (albeit without anything to put its size into context) and this was the result:

Supermoon 2

I was pretty pleased with this – especially when I compared it with one taken from the International Space Station! – although some of the other shots in the press showed an aeroplane silhouetted against the moon, or the moon appearing more “yellow” as it rose. It’s just a straight shot from my DSLR with a 500mm lens, tripod-mounted, at a mid-range aperture (f/11) and a slow-ish 1/125 sec shutter speed, with ISO set to 200 and exposure compensation set at -5 EV (to stop the moon from “bleaching out”). Other than a crop and saving it as a JPG, that’s about all it’s had done to it.

But taking photos of the night sky is not usually so straightforward and, previously, my best results had been using a similar setup, but shooting in daylight.  The image below was taken a couple of summers ago, whilst on holiday in France:

La Lune

For this shot, I used an entirely different technique – it was actually taken with a clear blue sky, in the evening, converted to black and white and then the black clipping was increased to make sure the blacks really were black.

Anyway, back to the supermoon – nice though it is, anyone can take pictures of the moon – but what about planets in our solar system? Without a telescope?

Saturn

Joe Baguley tipped me off that Saturn was just down and to the left of the moon at that time and sent me a link to his shot, in which Saturn is just 12 pixels wide but its rings are clearly visible. My response: “Wow!”. I went back outside with my gear and tried to replicate it but it seems that, on this occasion, his Canon 5D Mk2’s pixel density beat my Nikon D700’s – 12 pixels wide at 400mm equates to 15 pixels at 500mm, but with 2,400,000 pixels per cm² on the 5D vs. 1,400,000 on the D700. That meant I was looking at something just 9px wide, and that sky was very, very dark… maybe I need to go out and buy a telescope (for my sons, of course!).

(Thanks to Benjamin Ellis, who inspired me to go and take pictures of the moon on Saturday evening, to Joe Baguley for permission to use his picture of Saturn, and to Andy Sinclair, who suggested I should write this post. The two images of the moon used in this post are © 2009-2011 Mark Wilson, all rights reserved. The image of Saturn is © 2011 Joe Baguley, all rights reserved. All three images are therefore excluded from the Creative Commons license used for the rest of this site.)

Adapt, evolve, innovate – or face extinction

I’ve written before (some might say too often) about the impact of tablet computers (and smartphones) on enterprise IT. This morning, Andy Mulholland, Global CTO at Capgemini, wrote a blog post that grabbed my attention, when he posited that tablets and smartphones are the disruptive change lever that is required to drive a new business technology wave.

In the post, he highlighted the incredible increase in smartphone and tablet sales (also the subject of an article in The Economist which looks at how Dell and HP are reinventing themselves in an age of mobile devices, cloud computing and “verticalisation”), that Forrester sees 2011 as the year of the tablet (further driving IT consumerisation), and that this current phase of disruption is not dissimilar to the disruption brought about by the PC in the 1980s.

Andy goes on to cite resistance to user-driven adoption of [devices such as] tablets and XaaS [something-as-a-service], but it seems to me that it’s not CIOs who are blocking either tablets/smartphones or XaaS.

CIOs may have legitimate concerns about security, business case, or unproven technology – i.e. where is the benefit? And for which end-user roles? – but many CIOs have the imagination to transform the business; they just have other programmes that are taking priority.

With regard to tablets, I don’t believe it’s the threat to traditional client-server IT that’s the issue – rather that the current tranche of tablet devices is not yet suitable to replace PCs. As for XaaS (effectively cloud computing), somewhat ironically, it’s some of the IT service providers who have the most to lose from the shift to the cloud: firstly, there’s the issue of “robbing Peter to pay Paul” – eroding existing markets to participate in this brave new world of cloud computing; secondly, it forces a move from a model that provides a guaranteed revenue stream to an on-demand model, one that involves prediction – and uncertainty.

Ultimately it’s about evolution – as an industry we all have to evolve (and innovate), to avoid becoming irrelevant, especially as other revenue streams trend towards commoditisation.

Meanwhile, both customers and IT service providers need to work together on innovative approaches that allow us to adapt and use technologies (of which tablets and XaaS are just examples) to disrupt the status quo and drive through business change.

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog.]

Connecting on-premise applications with the Windows Azure platform (Windows Server User Group)

When Microsoft announced Windows Azure, one of my questions was “what does that mean for IT Pros?”. There’s loads of information to help developers write applications for the cloud, but what about those of us who do infrastructure: servers, networks, and other such things?

In truth, everything becomes commoditised in time and, as Quest’s Joe Baguley pointed out on Twitter a few days ago, infrastructure as a service (IaaS) will become commoditised as platform as a service (PaaS) solutions take over, and there will come a time when we care about which hypervisor we’re running on about as much as we care about network drivers today. That is to say, someone might care but, for most of us, we’ll be consuming commodity services and we won’t need to know about the underlying infrastructure.

So, what will there be for server admins to do? Well, that takes me back to Windows Azure (which is a PaaS solution). For some time now, I’ve been keen to learn about integrating on- and off-premise systems – for example, getting application components that are running on Windows Server working with other parts of the application in Windows Azure. To do this, Microsoft has created Windows Azure Connect – a new Windows Azure service that enables customers to set up secure, IP-level network connectivity between their Windows Azure compute services and existing, on-premise resources. This allows Windows Azure applications to leverage and integrate with existing infrastructure investments in order to ease adoption of Azure in the enterprise – and I’m really pleased that, after nearly a year of trying to set something up, the Windows Server User Group (WSUG) is running a Live Meeting on this topic (thanks to a lot of help from Phil Winstanley, ex-MVP and now native at Microsoft).

Our speaker will be Allan Naim, an Azure Architect Evangelist at Microsoft. Allan has more than 15 years’ experience designing and building distributed middleware applications, including both custom and off-the-shelf Enterprise Application Integration architectures and, on the evening of 22 March 2011 (starting at 19:00 GMT), he’ll spend an hour taking us through Windows Azure Connect.

Combined with the event that Mark Parris has organised for 6 April 2011, where one of the topics is Active Directory Federation Services (AD FS), these two WSUG sessions should give Windows Server administrators a great opportunity to learn about integrating Windows Server and Windows Azure.

Register for the Azure Connect Live Meeting now. Why not register for the AD RMS and AD FS in-person event too?

[A version of this post also appears on the Windows Server User Group blog]

A few things I learned when I “lost” my mobile phone

A few weeks ago, I lost my mobile phone. Well, not so much “lost” (I was pretty sure I’d left it in the office) but I realised that I had “misplaced” it. I’m on a SIM-only 30-day contract and my handset is an-aging-Nokia-thing-running-some-awful-Symbian-operating-system, so I wasn’t that concerned, but the 72 hours it would take me to be reunited with it was too long a risk if someone had taken it as their own, so I called the mobile operator (O2) and put a block on the SIM.

In sixteen years of mobile phone ownership, this was my first experience of this process, and I learned a few things along the way – hence this blog post.

O2 sent me a new SIM (to use in a spare handset, or in mine, should I find it again) but there were no details in the envelope that told me where/how to activate the SIM. It turns out that I could do that on the My New SIM section of the O2 website.

As it happens, my phone was handed in at work, and I got it back in a few days. I can’t have two SIMs active at the same time, but I could keep one of them as a “spare” for future use.

I spoke to an O2 representative, who lifted the bar on my original SIM. O2 advised me that this could take up to 24 hours although, in practice, it was a much shorter time (about 30 minutes) but my calls were still on permanent divert to voicemail. What they hadn’t told me was that they had also barred the last handset that my SIM had been used in (based on the IMEI) and that could take up to 72 hours to lift. Again, it didn’t take that long in practice and, after a few hours, and a couple of phone resets (to force the network to recognise it), my full mobile service was restored.

Why paper.li is just plain wrong

When I first saw paper.li last year, I thought it was an interesting concept. Kind of like the Flipboard app on my iPad – nowhere near as attractive, but universally available – picking the most popular updates from my Twitter and Facebook “friends” and presenting them to me in a newspaper format. I quickly grew tired of the format though, along with the increasing number of tweets telling me that “The Daily <insert name of person> is out” – I can see the value for an individual but tweeting about it just seems a bit spammy. (Sorry if you’re one of the people that does this – if you think there is some real value, I’d be pleased to hear your view.)

More worrying though is the way that paper.li seems to misrepresent my views and opinions when it “retweets” me…

I work for a Japanese company and spent a lot of Friday and the weekend thinking of colleagues whose friends and family might be affected by the recent events in Japan. For that reason, I was appalled to see a ZD Net article last Friday questioning whether the iPad 2 would be hit by supply problems as a consequence.

I can see why the writer/publisher put this out (perhaps it is a legitimate concern for some) but really, in the big scheme of things, does a shortage of NAND memory matter that much, given the scale of the human disaster in Japan?  Any iPad 2 supply chain issues strike me as a “first world problem” and, even though the earthquake/tsunami did strike on iPad 2 launch day (presumably why this was newsworthy to ZD Net), couldn’t the publisher have held back, if only for reasons of taste and decency? I tweeted:

RT @ZDNet: Will the earthquake in Japan ding Apple’s iPad 2 rollout? http://zd.net/ibvmgp ^MW FFS get a grip. Bigger issues at stake here!

(If you’re not familiar with the FFS acronym, don’t worry, I was just expressing my frustration.)

I think that tweet is pretty clear: I’m retweeting (RT) ZD Net’s tweet about their article, with a comment – in the socially-acceptable manner for the Twitter community (the “new-style” RT built into Twitter misses the ability to add a comment, and the potential added value that brings).

Unfortunately, when I saw paper.li’s version, it was completely out of context:

Paper.li appearing to credit me with a ZDNet article about iPad 2 delays following the Japanese earthquake/tsunami (and with which I disagree!)

It simply grabs the title and first few lines from the link and credits the person who retweeted it (me) as the source. Not only does this appear to credit me as the author of the article – something I would be uncomfortable with even if I did approve of the content – but, in this case, I fundamentally disagree with the article and would certainly not want to be associated with it.

Paper.li does include the ability to stop mentions, but that misses the point – by all means mention my tweets but they should really make it clear who the original source of an article is and, where that’s not possible, include the whole tweet to ensure that it remains in context.
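To illustrate the point, here’s a rough sketch of my own (not anything paper.li actually does – the function name and the @markwilsonit handle are just examples): an old-style retweet already carries enough information to credit the right source, because the “RT @username:” prefix names the account being quoted.

    import re

    def credit_retweet(tweet_text, retweeter):
        # For an old-style "RT @user: ..." tweet, credit the quoted account;
        # otherwise fall back to the person who posted the tweet.
        match = re.match(r"RT @(\w+):\s*(.*)", tweet_text)
        if match:
            return match.group(1)   # the original source, e.g. "ZDNet"
        return retweeter

    tweet = ("RT @ZDNet: Will the earthquake in Japan ding Apple's iPad 2 rollout? "
             "http://zd.net/ibvmgp ^MW FFS get a grip. Bigger issues at stake here!")

    print(credit_retweet(tweet, "markwilsonit"))   # prints "ZDNet", not the retweeter

Even that trivial check would have credited ZD Net with the article, rather than me.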

And it seems I’m not the only one to see issues with the way in which paper.li uses the Twitter API, disregarding the social networking element of Twitter. Then there’s the fact that some people thank others for mentioning them in their paper.li edition (which, of course, is entirely automated).

Indeed, I’d go as far as to say that the way paper.li handles retweets is sloppy and demonstrates a lack of understanding of how platforms like Twitter really work – they are (or should be) about conversation, not broadcast – and that’s why the newspaper format is really not a good fit.

Rambling thoughts: Windows 7 service pack 1, full drive encryption, mounting virtual hard disks, and a PC rebuild

Every now and again, a whole heap of stuff “happens” to me that I think would make a good blog post, if only I had the time to do a little more research and pull it all together. This time, I haven’t done the extra research and I don’t have an answer, but I’ll publish my thoughts anyway. Maybe, someone else can fill in the gaps, if they think it might help.

It all started out in 2009, when one of my colleagues left the company and I inherited his (slightly newer than mine) notebook PC, complete with a 250GB hard disk (I was pushing the limits of my 120GB disk at the time). Transferring my data was easy – I just used Symantec Ghost (or something similar) to image my 120GB drive onto the 250GB disk, but someone had created the original system with the Windows 7 system reserved partition at the end of the disk, leaving me with the following layout:

  • C: System
  • D: Data
  • System Reserved
  • Free Space

I could probably have moved the reserved partition and expanded D: but, in the interests of time, I created a new partition (E: Extra) and used that instead.

Fast-forward a couple of years and I wanted to install Windows 7 service pack 1 on my machine. Unfortunately the installer needed 8GB of free space on C: and I only had 7GB, even after housekeeping. My intention was to shrink D:, move it, expand C:, and maybe even move the reserved partition, merging D: and E: to come up with a sensible layout.

There was just one problem – in the meantime, my organisation had started to use a full disk encryption product (not Microsoft BitLocker, because deployment commenced whilst most of the organisation was still on Windows XP), so I couldn’t use third-party disk partition editors (like GParted), as booting from a Live CD left the disk locked.

One possible answer lay in a complete system image, which, helpfully, creates some virtual hard disks for my multitude of drives. Then, I thought, I could mount the VHD copy of D:, remove the letter from the physical drive D: and reboot, to use the virtual disk instead (before removing the original D: and expanding C:). Still with me? Even if you’re not, it didn’t work…
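For what it’s worth, attaching one of the backup VHDs is the easy part. The sketch below shows one way it could be scripted (my own rough example rather than what I actually ran at the time – the path is hypothetical, diskpart needs an elevated prompt, and the “Attach VHD” option in Disk Management does the same job):

    import os
    import subprocess
    import tempfile

    # Hypothetical path - adjust to wherever Windows Backup put the .vhd files
    vhd_path = r"E:\WindowsImageBackup\MyPC\Backup\DataDrive.vhd"

    # diskpart takes its commands from a script file, so write one out...
    commands = 'select vdisk file="{0}"\nattach vdisk\n'.format(vhd_path)
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(commands)
        script = f.name

    try:
        # ...and run it ("detach vdisk" in a similar script unmounts it again)
        subprocess.run(["diskpart", "/s", script], check=True)
    finally:
        os.remove(script)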

It seems there are two problems with mounting VHDs:

“No worries”, I thought – I’d just reassign drive letter D: from the virtual hard disk back to the original physical partition. That worked, but I still couldn’t load any user profiles – only the administrator could log on, and they were given a new profile based on system defaults. Oh dear.

I couldn’t find any obvious advice on viewing/restoring whatever identifiers Windows was looking for in order to find the correct partition for my domain user account, so I decided to restore the partition from backup. Except that the full disk encryption software seemed to prevent it – not just when booted from a recovery disc or from a boot-time selection to repair my computer, but also when attempting the restore from within Windows Backup.

In the end, the simplest solution was to have my machine rebuilt onto the latest corporate build, and then to restore my data by mounting the VHDs in my backup set (which are no longer identical to my physical disk partitions and so do not cause problems). Perhaps it really is time for me to stop being a geek, and to concentrate on using my PC as a business tool…

Restoring the link between Twitter and Facebook status updates

I make no secret of the fact that I detest Facebook.  I despise its user interface (which I find confusing and difficult to navigate).  I find Facebook’s approach to security and privacy concerning. And I’m much happier with a single web of sites than one that consists of several huge islands – even if one of those islands currently has around 600 million users.

Unfortunately, normal (i.e. non-geek/media) people, including many of my friends and family, insist on using Facebook, so I have an account there.  Actually, until earlier this week, I had two (one personal, one professional) – then I learned that was in violation of Facebook’s terms of service and, if detected, Facebook reserves the right to terminate all of your accounts, so I deleted one of them.

Because I dislike Facebook so intensely, I’m not a great citizen there – I only log in from time to time, which means I don’t really participate in the network properly. My Facebook status updates are populated from my personal Twitter account and I receive e-mail updates on any comments. But recently, I noticed that the Twitter app on Facebook was not updating my Facebook status, and several of my friends were having the same problem.

I tried denying the app access to my Twitter account and then allowing it again, but that didn’t seem to work. Then I found a Facebook discussion thread, where Derek Lau wrote:

“Go to http://apps.facebook.com/twitter/
Uncheck the box that says ‘Allow Twitter to post updates to: Facebook Profile’
Refresh the page to make sure that the box really is unchecked.
When the box is unchecked, send a tweet. This will obviously not get posted to FB.
Go back and re-check the box. Refresh the page to make sure it is really checked.
Now send another tweet.”

Twitter app on Facebook

Sweet. Now my Twitter updates are populating Facebook again, and I can go back to ignoring it for a while longer…