This content is 14 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.
A list of items I’ve come across recently that I found potentially useful, interesting, or just plain funny:
I’ve been trying to back up my notebook PC in the form of a Windows 7 System Image (which will, helpfully, create some VHDs for me) but kept on coming up against the following error:
Create a system image
The backup failed.
The operation failed due to a device error encountered with either the source or the destination. If the source or the destination volume is on a disk, run CHKDSK /R on the source or destination volume and then retry the operation. (0x8078012D)
Additional Information:
The request could not be performed because of an I/O device error. (0x8007045D)
I’m pretty sure that my disks are OK but it struck me this might be a side effect of using a third-party full disk encryption product (BeCrypt DiskProtect) so I checked to see if colleagues were able to back up their systems (they were).
Unfortunately, it takes a couple of hours to reproduce the error, so I didn’t take the usual, logical, step by step approach to resolving this one. It was either:
0x8078012D – Bad sector, fixed with chkdsk /r (which takes an age to run on an encrypted volume)
or
0x8007045D – Manually starting the Volume Shadow Copy service.
Either way, these two changes let me complete the backup successfully – and this post may help someone else in the same situation one day…
One of my current activities involves sharing the contents of a SharePoint calendar, which is hosted on an Intranet site, with external contacts. An extranet portal would be one possible approach but it’s probably over-engineering the solution and a simple calendar export, updated on a regular basis would also suit the requirement.
SharePoint allows RSS export from a Calendar but the events are exported in the order in which they were added to the calendar, rather than in chronological order. I thought it would be far more useful to export them in iCalendar format and it turns out that’s possible too – with the addition of an open source webpart called iCal Exporter (which my colleague Andrew Richardson tracked down). You can also interrogate the SharePoint object model directly but that’s beyond my limited coding abilities.
Installing the webpart is pretty straightforward:
Unzip the compiled version of the iCal Exporter webpart and copy the iCalExporter.wsp file to the hard drive on a SharePoint server (I used Windows SharePoint Services 3.0).
From the command prompt, issue the following commands to navigate to the folder containing stsadm.exe, install the solution and deploy the solution: cd "%commonprogramfiles%\Microsoft Shared\Web Server Extensions\12\bin"
stsadm -o addsolution -filename "c:\iCalExporter.wsp"
stsadm -o deploysolution -name iCalExporter.wsp -local (it may be necessary to specify other options if deploying in a multi-server environment.)
Using a web browser, navigate to the server’s site collection features page and click the Activate button for the iCalendar export feature.
Once installed, there is an additional option on the Actions menu to export the calendar in iCalendar format. Give the resulting file an .ics extension and distribute it at will – most calendar clients (I tested in Outlook but it should work for others too) will be able to view the appointment details.
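Since the whole point of preferring the iCalendar export over RSS is chronological ordering, here’s a minimal Python sketch that sorts the events in an exported .ics file by start date. It’s a hypothetical helper using only the standard library, and it assumes simple, unfolded lines in the common basic date format – a full parser would need to handle line folding, time zones and other properties:

```python
from datetime import datetime

# Minimal iCalendar (.ics) event sort: collect each VEVENT's SUMMARY and
# DTSTART, then order the events chronologically. A sketch for simple
# exports only -- no line folding or time zone handling.
def sorted_events(ics_text):
    events, current = [], None
    for line in ics_text.splitlines():
        line = line.strip()
        if line == "BEGIN:VEVENT":
            current = {}
        elif line == "END:VEVENT" and current is not None:
            events.append(current)
            current = None
        elif current is not None and ":" in line:
            key, _, value = line.partition(":")
            if key == "SUMMARY":
                current["summary"] = value
            elif key.startswith("DTSTART"):
                # Assumes the common basic form, e.g. 20110207T090000Z
                current["start"] = datetime.strptime(
                    value.rstrip("Z"), "%Y%m%dT%H%M%S")
    return sorted(events, key=lambda e: e["start"])

sample = """BEGIN:VCALENDAR
BEGIN:VEVENT
DTSTART:20110301T100000Z
SUMMARY:Later meeting
END:VEVENT
BEGIN:VEVENT
DTSTART:20110207T090000Z
SUMMARY:Earlier meeting
END:VEVENT
END:VCALENDAR"""

for event in sorted_events(sample):
    print(event["start"].date(), event["summary"])
```

Run against the sample above, the earlier event is printed first regardless of the order in which it appears in the file.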
Last week was Social Media Week and, here in the UK, there were several events to mark the occasion (using the Twitter hashtag #smwldn). I was pretty late to the party and I’m sure I missed some events that could have been incredibly useful but I did get along to one event, looking at the implications of using social media to take a brand into a global marketplace.
Hosted by UK Trade and Investment (UKTI) at The Design Council, and chaired by Dr Aleks Krotoski (@aleksk), UKTI’s New Media Sector Champion but also well known for her broadcasting work with the BBC and the Guardian, the event took the form of a panel discussion with:
It’s difficult to distill a panel discussion into a blog post, so I’ll concentrate on some of the key points that I picked up in the event. It’s important to comment that these are my notes – they are not direct quotes (and any additions in [ ] are my personal views, provided to add context).
WT-W: Twitter helped Moonfruit to move into the United States.
AC: Nokia and many other brands find themselves in the same situation – social is evolving and they’re still discovering.
MC: Microsoft has engaged in social media for about five years, initially via blogs and forums but use has exploded with Twitter and Facebook – platforms that enable bigger networks. Microsoft doesn’t sell much 1:1 but relies on partner network – social media has allowed communication with customers. It’s important to enthuse the passion and to do so it’s necessary to understand rich media, mobile, gaming (wherever the audience is) and use that data to make products better and more fulfilling.
WT-W: Facebook, Twitter and others are platforms upon which others can build a business – they have fundamentally changed how technology businesses work – and can affect others too.
AK: Web 2.0 is doing for the web what it should really have done to connect people in the first place.
AC: Social media presents an opportunity to rewrite the rules. New technology can appear and revolutionise the marketplace in a heartbeat (e.g. the Apple iPhone) [and organisations need to be agile in order to respond – somewhat ironic coming on the day of press coverage re: Nokia CEO Stephen Elop’s (@selop) “burning platform” memo]
AK: It sounds like the answer is to leverage other platforms rather than creating new?
MC: Blogs play a huge part in Microsoft’s social media strategy – as does Windows Live – so it’s not a conscious decision not to create their own platform, more that it’s necessary to invest effort into communicating on a platform where people are already working. Remember though that email is still a social platform – albeit private. Same for instant messaging. There are open and closed networks – digital marketing needs to be involved in them all.
AC: Some businesses think people should pay to come – or that they own the brand. Ultimately community is the brand. This can this make you reactive rather than proactive, but it’s just as important to experiment and learn. Some mistakes will be made along the way but if organisations understand consumer sentiment (listen via various platforms) and use this insight in product development it can be turned to an advantage. Social media has a massive part to play in this – even labs need to be more transparent and open – become part of society to understand what’s coming next.
WT-W: Distribution is key – how can people hear about a product without spending millions on marketing (which is also imprecise)? You have to use social media to understand how it works (Twitter is an example) – and have to learn to use it properly – you can’t just throw out a message. Moonfruit ran a competition but the community started to take the mickey out of the brand (what’s that? A gay astronaut!), in doing so, they built brand awareness. It can be tough for big companies to let go of their brand but Moonfruit found it got them a lot of coverage, they rose in the Google ranks, and enjoyed massive returns as a result – they could not have paid for that type of coverage. Maybe they got lucky but what have they learned? Respect social media – look after your Twitter followers, hire a community manager – and make Twitter and Facebook an extension of any internal communities. Moonfruit is using Zendesk to tie channels together so users can search across wider social networks.
MC: Microsoft is looking to expand its communities into Asia – breaking out of the English language. Using analytics (it’s important to measure) they can see how to expand. Often, there are parts of an organisation that say “We want to get onto Twitter and Facebook!” and the next question should be “Why?”. Often the answer is “Because everyone else is” and it’s necessary to have a plan (not just to start using these platforms). Brands need to understand their goals, research their audience, know the way that people access information, etc. and tailor, rather than just reworking existing United States-based templates! Also be careful about picking and choosing appropriate networks – and taking baby steps to learn as you go. Play with your own (personal) account and then apply that experience to the company.
AK: In the far east, it’s necessary to deal with linguistics as well as specialist social networks compared with India, where people jump on to existing, established English-speaking networks such as Facebook, etc.
MC: Linguistics are even important in Europe – whilst the Dutch will RT English with a localised view, the French will not!
AC: Brands will embrace global messaging (generally in English) but need to work on their corporate policy for conversations at a local level. Campaigns need to give a reason to want to engage with them – just as at a dinner party, you need an interesting menu [and sparking conversation]. Although internationalisation is an important element, it’s necessary to consider the global view.
WT-W: The UK is leading in entrepreneurial design – and Apple has shown [globally] that a good design and user interface [arguable] will appeal to consumers. The iPad paradigm shift has opened technology to both young and old. The Chinese come to the UK to learn how to design, then they go back to China and emulate using the skills they have learned.
AC: Due to its location and geography, Great Britain as an island is a barometer for change. Nokia looks closely at its own business and the wider industry in the UK and finds that it’s possible to see trends coming and go quickly, compared with the US, where things stick around a while and take a different form. British people also tend to be forthcoming with a straightforward, honest response.
MC: Microsoft Advertising has been helped by being London based, driving global strategy in a US-based company (enforcing that it’s not just about the United States).
Moving on to the questions from the audience:
MC: [On the question of companies that still think they can control who comes to them on social media?] Try to get a feel for what people like – read, listen and learn – they will leave comments! It’s a two-way conversation so, taking the dinner analogy, you can suggest a menu and a venue but can’t control the conversation. You can plan for it though: react; turn negative into positive through a reply/response. Take baby steps – don’t dive in too deep too soon. Demonstrate success at each step on the journey.
WT-W: You can’t gag social media – the more you try, the more it happens – so have an open conversation. An example is Nokia’s Stephen Elop talking about the white elephant in the room – taking the leaked memo and reacting. [at this point the Nokia Exec looked confused – he should learn to check social media before going on stage in a panel presentation!]. It’s OK to say “We’re sorry, this is what happened, this is what we’re doing about it, and this is how you can talk to us” (e.g. when Moonfruit suffered a service outage).
MC: [On ensuring that the whole company is “on the same page” and engaging in the same way] Microsoft has mechanisms internally to alert/escalate. There are also guidelines from their legal team but ultimately it boils down to two words: Be smart.
AC: [When asked don’t leaks make organisations more secretive?] Social media is not about building stronger walls – be proud that people are interested!
MC: [On how to value an investment in employing people to take a message to a global audience in a downturn] Getting into markets in a downturn amounts to piggybacking the competition when we come out the other side… it’s valuable because it’s all digital – the conversation is out there – we’re all on social media and know the ways and means to discover, reading about other customer experiences and using them to our advantage. Ultimately, it’s about driving traffic back to your own properties (from social networks back to websites) – driving sales, generating revenue – but not about locking people in to a particular platform.
AC: [On how to incorporate social media into a large organisation’s crisis/incident management strategy?] Social media provides more opportunities to expand awareness – you can apply a policy for social behaviour but everyone is a social expert in their own right. Nokia has seen sales teams creating their own profiles and starting conversations with customers – this is impossible to manage on a local level but can be encouraged. [I disagree – organisations need to make a clear separation between official and unofficial channels but I think the point here was really about global vs. local (in country) accounts.]
Until now, the conversation had felt a little business-to-consumer (B2C) focused and I was interested in social media from a business-to-business (B2B) context but, thankfully, the last question (asked by someone else) was the one I had been waiting to ask!
WT-W: [On the question of B2B vs. B2C use of social media and what works well?] The company profile is still important – the more known the company is, the more PR it gets and the easier it is to do deals. But building real relationships is key – B2B is still conducted in person – although you can make touch points/build reputation on social media.
MC: Don’t get freaked out by low numbers – start with a plan and expect 100s, 1000s visitors not millions – B2B brands can’t compare with Starbucks or Zappos. Indeed, low numbers allow organisations to focus on their audience in a market where even one sale could be significant. Spend time on developing relationships, talking about each others’ businesses and translating to find niche relationships and make them fruitful.
AC: There is both opportunity and danger in how best to support social media in a B2B context. Social CRM is an important aspect but data is freely available and unregulated. Different countries may have different reactions and this will affect the communications strategy.
In all, this was an interesting discussion but I really felt it just scraped the surface: it was a bit light on hard advice; and concentrated more on the experience of the three organisations on the panel. Still, at least it gave me a chance to verify that the steps my own organisation is making are taking us in the right direction!
Some time ago, I used to work for a company called Conchango (now EMC Consulting UK). It was a great place, with some really fantastic people, although I’ve lost touch with many of them in the intervening years. Even so, some of the guys and gals at Conchango EMC Consulting [sorry, can’t get used to the bland corporate name for such a creative company] pop up from time to time on the same social networks as me (online and physical) and I’d started to see reference to this thing called The Fantastic Tavern (TFT for short).
The Fantastic Tavern sounds rather magical: like platform 9¾ at London Kings Cross station; or something from a Terry Pratchett novel. In reality, it’s a meetup, in a pub, with two themes: beer and ideas. The people who come along are called Taverners and we all do something digital – whether it’s as practitioners, clients, or in agencies. The rules are simple: have a drink; exchange ideas; but no selling!
Last night’s TFT was entitled “What happens now?” and there were ten speakers, each given a few minutes to talk on their chosen topic, which was then rated on a care graph (do we care? vs. will we take action?). Don’t ask me what happens to that information later (it was my first visit) but it might have something to do with future events.
The diversity of topics was pretty wide – but they all had something in common – they were genuinely interesting!
Lee Provoost (@leeprovoost) from Headshift spoke about privacy, how it’s not a black and white issue, and the need to control the grey area in between (think about what we share with whom – and to educate our children to do the same).
Chris Thompson from Ravensbourne Hub spoke about the need to radically change the way that we look at education – with a new seamlessness between education and business.
Gabriel Hopkins (@gabehopkins) from WorldPay showed us how mobile commerce is becoming ever more significant in the world of online payments.
Cyrus Gilbert-Rolfe (@gilbertrolfe) from EMC Consulting talked about tribalism – using AFC Wimbledon as an example of how football has very little to do with players, coaches, kit, balls and a stadium; and a lot more to do with identity, community and a collective intent.
Chris Howell from Dixon Stores Group talked about finding “Jack” (“Jack of all trades… Master of none, sometimes better than the master of one”) – how we don’t need experts but people who can work outside their comfort zones and evolve to cope with many different roles.
My first visit to The Fantastic Tavern was enlightening – and I was genuinely in awe of the creative talents that surrounded me. I’m definitely planning on being at the next one (even though it involves schlepping into London… or New York!). Find more at The Fantastic Tavern site (or @TFTLondonNYC).
[Updated 14 February 2011 with full speaker names/organisations and links to presentations]
I have to admit that I’ve tuned out a bit on the virtualisation front over the last year. It seems that some vendors are ramming VDI down our throats as the answer to everything; meanwhile others are confusing virtualisation with “the cloud”. I’m also doing less hands-on work with technology these days too and I struggle to make a business case to fly over to Redmond for the MVP Summit so I was glad when I was invited to join a call and take a look at some of the improvements Microsoft has made in Hyper-V as part of Windows Server 2008 R2 service pack 1.
Dynamic memory
There was a time when VMware criticised Microsoft for not having any Live Migration capabilities in Hyper-V but we’ve had them for a while now (since Windows Server 2008 R2). Then there’s the whole device drivers in the hypervisor vs. drivers in the parent partition argument (I prefer hardware flexibility, even if there is the occasional bit of troubleshooting required, over a monolithic hypervisor and locked-down hardware compatibility list). More recently the criticism has been directed at dynamic memory and I have to admit Microsoft didn’t help themselves with this either: first it was in the product, then it was out; and some evangelists and Product Managers said dynamic memory allocation was A Bad Thing:
“Sadly, another “me too” feature (dynamic memory) has definitely been dropped from the R2 release. I asked Microsoft’s Jeff Woolsey, Principal Group Program Manager for Hyper-V, what the problem was and he responded that memory overcommitment results in a significant performance hit if the memory is fully utilised and that even VMware (whose ESX hypervisor does have this functionality) advises against its use in production environments. I can see that it’s not a huge factor in server consolidation exercises, but for VDI scenarios (using the new RDS functionality), it could have made a significant difference in consolidation ratios.”
In case you’re wondering, those are my notes from when this feature was dropped from Hyper-V in the R2 release candidate (it had previously been demonstrated in the beta). Now that Microsoft has dynamic memory working it’s apparently A Good Thing (Microsoft’s PR works like that – bad when Microsoft doesn’t have it, right up to the point when they do…).
To be fair, it turns out Microsoft’s dynamic memory is not the same as VMware’s – it’s all about over-subscription vs. over commitment. Whereas VMware will overcommit memory and then de-duplicate to reclaim what it needs, Microsoft takes the approach of only providing each VM with enough memory to start up, monitoring performance and adding memory as required, and taking it back when applications are closed.
It turns out that, when implementing VDI solutions, disk I/O is the first problem, memory comes next, and only after that is fixed will you hit a processor bottleneck. Instead of allocating 1GB of RAM for each Windows 7 VM, Microsoft used dynamic memory with a 512MB VM (which is supported on Hyper-V). There’s no need to wait for an algorithm to compute where memory can be reclaimed – instead the minimum requirement is provided, and additional memory is allocated on demand – and Microsoft claims that other solutions rely on weakened operating system security to get to this level of density. There’s no need to tweak the hypervisor either.
Microsoft’s tests were conducted using HP and Dell servers with 96GB of RAM (the sweet spot above which larger DIMMs are required and so the infrastructure cost rises significantly). Using Dell’s reference architecture for Hyper-V R2, Microsoft managed to run the same workload on just 8 blades (instead of 12) using service pack 1 and dynamic memory, without ever exhausting server capacity or hitting the limits of unacceptable response times.
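The density gain is easy to see with some back-of-an-envelope arithmetic. This sketch deliberately ignores the parent partition’s overhead and the fact that busy VMs will grow well beyond their startup allocation – it simply shows the theoretical ceiling on a 96GB host:

```python
# Back-of-envelope VM density on a 96GB host: static 1GB allocations
# versus dynamic memory's 512MB startup allocation per Windows 7 VM.
host_ram_mb = 96 * 1024    # the "sweet spot" server configuration cited
static_vm_mb = 1024        # 1GB statically assigned per Windows 7 VM
dynamic_start_mb = 512     # dynamic memory startup allocation

print(host_ram_mb // static_vm_mb)     # 96 VMs with static allocation
print(host_ram_mb // dynamic_start_mb) # 192 VMs at the startup floor
```

In practice the real figure sits between the two, which is consistent with the 25-50% density improvement Microsoft claims.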
Dynamic memory reclamation uses Hyper-V/Windows’ ability to hot-add/remove memory with the system constantly monitoring itself for virtual machines under memory pressure (expanding using the configured memory buffer) or with excess memory, after which they become candidates to remove memory (not immediately in case the user restarts an application). Whilst it’s particularly useful in a VDI scenario, Microsoft say it also works well with web workloads and server operating systems, delivering a 25-50% density improvement.
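As an illustration only – this is not Microsoft’s actual algorithm, and the startup, maximum and buffer values are hypothetical settings – the allocate-on-demand model described above can be sketched like this:

```python
# Illustrative sketch of Hyper-V-style dynamic memory: each VM boots with
# only its startup RAM, grows under memory pressure (demand plus a
# configured buffer, via hot-add), and gives excess memory back later.
class DynamicMemoryVM:
    def __init__(self, startup_mb=512, maximum_mb=2048, buffer_pct=20):
        self.startup_mb = startup_mb
        self.maximum_mb = maximum_mb
        self.buffer_pct = buffer_pct
        self.allocated_mb = startup_mb   # VMs boot with only the startup RAM

    def balance(self, demand_mb):
        """Move the allocation towards demand plus the buffer, within limits."""
        target = demand_mb * (100 + self.buffer_pct) // 100
        target = max(self.startup_mb, min(target, self.maximum_mb))
        self.allocated_mb = target       # hot-add under pressure, reclaim excess
        return self.allocated_mb

vm = DynamicMemoryVM()
print(vm.balance(400))   # light load: stays at the 512MB startup floor
print(vm.balance(1500))  # pressure: hot-add up to 1800MB (demand + 20%)
print(vm.balance(600))   # apps closed: reclaimed back down to 720MB
```

The real implementation is more conservative about removing memory (as the post notes, a VM with excess memory is only a candidate for removal, in case the user restarts an application), but the shape of the policy is the same.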
More Windows 7 VMs per logical CPU
Dynamic memory is just one of the new virtualisation features in Windows Server 2008 R2 service pack 1. Another is a new support limit of 12 VMs per logical processor for exclusively Windows 7 workloads (it remains at 8 for other workloads). And Windows 7 service pack 1 includes the necessary client side components to take advantage of the server-side improvements.
RemoteFX
The other major improvement in Windows Server 2008 R2 service pack 1 is RemoteFX. This is a server-side graphics acceleration technology. Due to improvements in the Remote Desktop (RDP) protocol, now at version 7.1, Microsoft is able to provide a more efficient encode/decode pipeline, together with enhanced USB redirection including support for phones, audio, webcams, etc. – all inside an RDP session.
Most of the RemoteFX benefits apply to VDI scenarios but one part also benefits session virtualisation (previously known as Terminal Services) – that’s the RDP encode/decode pipeline which Microsoft says is a game changer.
Microsoft has always claimed that Hyper-V’s architecture makes it scalable, with no device drivers inside the hypervisor (native device drivers exist only in the parent partition) and a VMBus used for communications between virtual machines and the parent partition. Using this approach, virtual machines can now use a virtual GPU driver to provide the Direct3D or DirectX capabilities that are required for some modern applications – e.g. certain Silverlight or Internet Explorer 9 features. Using the GPU installed in the server, RemoteFX allows VMs to request content via the virtual GPU and the VMBus, render it using the physical GPU, and pass the results back to the VM again.
The new RemoteFX encode/decode pipeline uses a render, capture and compress (RCC) process to render on the GPU but to encode the protocol using either the GPU, CPU or an application-specific integrated circuit (ASIC). Using an ASIC is analogous to TCP offloading in that there is no work required by the CPU. There’s also a decode ASIC – so clients can use RDP 7.1 in an ultra-thin client package (a solid state ASIC) with RemoteFX decoding.
Summary
Windows 7 and Windows Server 2008 R2 service pack 1 is mostly a rollup of hotfixes but it also delivers some major virtualisation improvements that should help Microsoft to establish itself as a credible competitor in the VDI space. Of course, the hypervisor is just one part of a complex infrastructure and Microsoft still relies on partners to provide parts of the solution – but by using products like Citrix XenDesktop as a session broker, and tools from AppSense for user state virtualisation, it’s finally possible to deliver a credible VDI solution on the Microsoft stack.
You know that old saying with Windows: “wait for the first service pack”? Well, some might say that, in these days of continuous updates, it no longer applies (I would be one of those people). Even so, if you are one of those people who have been holding out for the release of the first service pack for Windows 7 and Windows Server 2008 R2, it’s nearly here – and you’ve been waiting for a while now (in fact, it’s been so long I could have sworn it had already shipped!)…
Today, Microsoft will announce the release to manufacture (RTM) of Windows 7 and Windows Server 2008 R2 service pack 1 (but general availability is not until 22 February 2011). I’m told that OEMs and technology adoption program (TAP) partners will get the bits first – MSDN and TechNet subscribers will have to wait until closer to the general availability date. I’ve had no word on availability for volume license customers so I’d assume 22 February.
It’s not often that something as mundane as a communications protocol hits the news but last week’s exhaustion of Internet Protocol (IP) addresses has been widely covered by the UK and Irish media. Some are likening the “IPocalypse” to the Year 2000 bug. Others say it’s a non-issue. So what do CIOs need to consider in order to avoid being presented with an unexpected bill for urgent network upgrades?
The existing Internet address scheme is based on 4 billion internet protocol (IPv4) addresses, allocated in blocks to Regional Internet Registries (RIR) and eventually to individual Internet Service Providers (ISP).
A new, and largely incompatible version of the Internet Protocol (IPv6) allows for massive growth in the number of connected devices, with 340 undecillion (2^128) addresses.
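The difference in scale is easy to verify with Python’s standard library ipaddress module:

```python
import ipaddress

# The entire IPv4 address space versus the entire IPv6 address space.
ipv4_space = ipaddress.ip_network("0.0.0.0/0")
ipv6_space = ipaddress.ip_network("::/0")

print(ipv4_space.num_addresses)   # 4294967296 -- roughly 4 billion
print(ipv6_space.num_addresses)   # 2**128 -- about 340 undecillion
```

Even allowing for reserved and private ranges, IPv6 offers around 7.9 × 10^28 times as many addresses as IPv4.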
All of the IPv4 addresses have now been allocated to the RIRs and at some point in the coming months, the availability of IPv4 addresses will dry up.
Even though there are huge numbers of unused addresses, they have already been allocated to companies and academic institutions. Some have returned excess addresses voluntarily; others have not.
The important thing to remember is that the non-availability of IPv4 addresses doesn’t mean that the Internet will suddenly stop working. Essentially, new infrastructure will be built on IPv6 and we’re just entering an extended period of transition. Indeed, in Asia (especially Japan and China), IPv6 adoption is much more mature than in Europe and America.
It’s also worth noting that there are a range of technologies that mitigate the requirement for a full migration to IPv6 including Network Address Translation (NAT) and tunnels that allow hybrid networks to be created over the same physical infrastructure. Indeed, modern operating systems enable IPv6 by default so many organisations are already running IPv6 on their networks – but, whilst there are a number of security, performance and scalability improvements in IPv6, there can be negative impacts on security too if implemented badly.
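As an example of that “IPv6 on by default” point, you can confirm that your own operating system’s networking stack was built with IPv6 support using Python’s standard library socket module:

```python
import socket

# Check whether this Python build (and the underlying OS) supports IPv6,
# then try to open an IPv6 TCP socket as a quick sanity test.
print("IPv6 support compiled in:", socket.has_ipv6)

if socket.has_ipv6:
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s.close()
    print("Able to create an AF_INET6 socket")
```

Note that `has_ipv6` only tells you the support is compiled in – whether IPv6 is actually routable from your network is a separate question.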
Network providers are actively deploying IPv6 (as are some large organisations) but it’s likely to be another couple of years before many UK and Irish enterprises consider widespread deployment. Ironically, the network side is relatively straightforward and the challenge is with the hardware appliances and applications. The implications of a 100% replacement are massive; however, a hybrid approach is workable and will be the way IPv6 is deployed in the enterprise for many years to come.
“The move to IPv6 will take a long time – ten years plus, with hybrid networks being the reality in the interim. We are already seeing large scale adoption across the globe, particularly across Asia. Telecommunication providers have deployed backbones and this adoption is growing, enterprise customers will follow. Enterprises need to carefully consider migrations: not all devices in the network can support IPv6 today; it is not uncommon for developers to have ‘hard-coded’ IPv4 addresses and fields in applications; and there are also security implications with how hybrid networks are deployed, with the potential to bypass security and firewall policies if not deployed correctly.” [John Keegan, Chief Technology Officer, Fujitsu UK and Ireland Network Solutions Division]
As for whether IPv6 is the new Y2K? I guess it is in the sense that it’s something that’s generating a lot of noise and is likely to result in a lot of work for IT departments but, ultimately it’s unlikely to result in a total infrastructure collapse.
I’m getting increasingly tired of seeing apps launched in the US iTunes Store and not here in the UK. And the worst culprit seems to be, would you believe, Microsoft?! For example, the Bing app – freely available in the US, but not here. Sure, some of the content is US-specific but not all of it – and it’s been around in the States for ages now!
Then, I heard that OneNote has shipped on iOS. Yes, OneNote. Microsoft’s fantastic note-taking app, on iOS. Furthermore, it’s currently free of charge (for a limited period). Ever since I bought an iPad, I thought that OneNote would be brilliant on that platform and, as good as Evernote is, it’s just not good enough.
1. Launch iTunes and sign out from the local iTunes Store.
2. Switch to the US iTunes Store – for example, attempt to download some content that’s only available in the US (iTunes will prompt you to switch stores). It needs to be free content (like a free iPhone app), in order to display a payment option of None in a later step.
3. Redeem the code from a US-issued iTunes gift card, creating a new iTunes account in the process. You’ll need an e-mail address that’s not already associated with iTunes, and a valid address in the States – if you’re staying with friends, or at a hotel, that would work.
4. Select the payment type of None, then continue to complete the account opening process and download the content. Your new account will be credited with the value of the gift card.
I don’t know if this breaches the terms and conditions of the Apple store (maybe if they were shorter, and written in plain English, I might actually read them…) but it works. As for legitimacy, I might be writing this from the United States, in which case using my hotel address and a gift card seems perfectly acceptable. At least it does right now – Apple may try and tighten things up later, but what’s in it for them? This way they can sell content in multiple regions to the same customer… and I’m talking about apps here; it’s not as though I’m advocating circumvention of media distribution rights for music/video. Of course, I’m not a lawyer – and I can’t be held responsible for anyone else’s actions based on the advice in this blog post.
When you sync your device with iTunes, you will probably get an error indicating that the apps are not authorised on the computer. Simply follow the instructions to authorise the computer for use with that iTunes Store (Authorize This Computer on the Store menu in iTunes).
After doing so, the next device sync should copy the app (iTunes will have one library containing content from both the US and the local iTunes Stores) and you can freely switch back and forth between US and local iTunes accounts to make new purchases.
As it happens, OneNote Mobile for iPhone is exactly what it says – it’s an iPhone app and doesn’t make full use of the larger screen on the iPad. This is a missed opportunity for Microsoft – the best iOS apps detect the device and present an appropriate view to make full use of the display capabilities – and they could have a knock-out app running on a competitor’s platform. Hey ho.
“Fit at 40” is my challenge to lose weight, get fit, and raise money to fight Prostate Cancer, the most common cancer in men.
I’m asking my friends, family, colleagues, and everyone else to sponsor me, but it’s not for a single event.
This is what I’m doing, and why…
It’s about me
At the time of writing, I’m 38 years old, about 5 stone heavier than I’d like to be (I’m about 17st 10lbs – or 113kg), and I’ve been this way for a few years now.
I know the problem: I like food; I don’t do enough exercise; and I’m greedy. But I really want this to change – with your help…
Last year (2010), I ran the Harrold Pit Run, a 4.8Km race in a village a few miles away from where I live. In a few short weeks I worked hard to get from being unable to run to the end of the street, to running my first race, and I loved the experience.
I tried to keep up the running, but I’ve struggled to keep going by pounding the pavements alone. As with cutting out the poor diet, I need something to motivate me and, after I ran another 5Km race in Studland last summer, my wife came up with the idea of combining running, dieting and fund-raising.
It’s for my Dad
On 6 May 2009, I received a phone call to tell me my Father was in hospital. I knew he was receiving treatment for Prostate Cancer, but the doctors had thought everything was under control, telling him that he was more likely to die of something else before the cancer got him…
But the doctors were wrong – the cancer had spread, Dad was undergoing chemotherapy, and things were not looking good.
Dad died in the early hours of 9 May 2009. He was only 63.
It’s for every man
Prostate cancer is the most common cancer in men.
Every year 36,000 men are told they have prostate cancer.
96% of men don’t know where the prostate is.
In the early stages of the disease, you have no symptoms.
At 50 you have a 1 in 11 chance of developing prostate cancer.
Prostate cancer kills one man every hour in the UK.
There are several charities dedicated to prostate cancer research, support, information and campaigning but I’ve chosen to support The Prostate Cancer Charity because they work across all of these areas.
It’s about helping me, remembering my Dad, and potentially benefiting every other man
By sponsoring me, you can help me stay motivated to shift that weight, get fit, and run a number of races for charity. In return, I’ll donate all of the funds raised via this website to The Prostate Cancer Charity and, because I can’t support multiple charities through JustGiving, I’ll aim to support other charities working in this field, such as The Prostate Cancer Research Centre, with some of the offline donations [Update February 2012: whilst it was a nice idea to support multiple charities in this way, it’s not been practical and all funds raised to date have been sent to The Prostate Cancer Charity].
I’m asking my friends, family, colleagues, and everyone else to sponsor me but, rather than for a single event, I’m going to keep going until my 40th birthday, in just over a year’s time.
Each time I lose another stone (that’s just over 6kg), I’ll ask you to come back and sponsor me some more. And each time I increase the distance or run in a major race, I’ll come calling again.
I hope to run at least one 5Km race over the winter months; and I’ve entered the BUPA London 10,000 next May too (where my running partner will be Eileen Brown). We’ll see how I go with the 5Km and 10Km runs, but it would be nice to think I might make it to a half-marathon before the end of my challenge in April 2012.
If my friends, family, colleagues and online contacts could sponsor me just one or two pounds each time I either run a race or lose a chunk of weight, I’ll be able to reach my goal of raising £2000 for prostate cancer charities.
Donating through JustGiving is simple, fast and totally secure. Your details are safe – JustGiving have pledged never to sell them on or send unwanted emails. Once you donate, they’ll send your money directly to The Prostate Cancer Charity and make sure Gift Aid is reclaimed on every eligible donation by a UK taxpayer. So it’s the most efficient way to donate – I raise more money, whilst saving time and cutting costs for the charity.