Useful Links: June 2010

This content is 13 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A list of items I’ve come across recently that I found potentially useful, interesting, or just plain funny:

Installing Ubuntu (10.04) on Windows Virtual PC

I use a Windows 7 notebook at work but, sometimes, it’s just easier to drop back into a Unix or Linux machine – for example when I was checking out command line Twitter clients a few days ago (yes, there is a Windows one, but Twidge is more functional). As one of my friends at Microsoft reminds me, it is just an operating system after all…

Anyway, I wanted to install Ubuntu 10.04 in a virtual machine and, as I have Windows Virtual PC installed on my notebook, I didn’t want to use another virtual machine manager (most of the advice on the subject seems to suggest using VirtualBox or VMware Workstation, which is a workaround – not a solution). My first attempts were unsuccessful but then I stumbled upon a forum thread that helped me considerably – thanks to MrDerekBush and pessimism on the Ubuntu forums – this is what I found I needed to do:

  1. Create a virtual machine in Windows Virtual PC as normal – it’s fine to use a dynamic disk – and boot from an Ubuntu disk image (i.e. an ISO, or physical media).
  2. At the language selection screen, hit Escape, then F6 and bring up the boot options string.  Delete the part that says quiet splash -- and replace it with vga=788 noreplace-paravirt (other vga boot codes may work too).
  3. Select the option to try Ubuntu without installing then, once the desktop environment is fully loaded, select the option to install Ubuntu and follow the prompts.
  4. At the end of the installation, do not restart the virtual machine – there are some changes required to the boot loader (and Ubuntu 10.04 uses GRUB2, so some of the advice on the ‘net does not apply).
  5. From Places, double click the icon that represents the virtual hard disk (probably something like 135GB file system if you have a default-sized virtual hard disk). Then, open a Terminal session and type mount to get the volume identifier.
  6. Enter the following commands:
    sudo mount -o bind /dev /media/volumeidentifier/dev
    sudo chroot /media/volumeidentifier/ /bin/bash
    mount -t proc none /proc
    nano /etc/default/grub
  7. Replace quiet splash with vga=788 and comment out the GRUB_HIDDEN_TIMEOUT line (using #) in /etc/default/grub, then save the file and exit nano (a consolidated sketch of steps 6 to 10 follows this list).
  8. Enter the following command:
    nano /etc/grub.d/10_linux
  9. In the linux_entry section, change args="$4" to args="$4 noreplace-paravirt", then save the file and exit nano.
  10. Enter the update-grub command and ignore any error messages about not being able to find the list of partitions.
  11. Shut down the virtual machine. At this point I was left with a message about Casper resyncing snapshots and, even after leaving the VM for a considerable period, it did not progress further. I hibernated the VM and, when I resumed it, it rebooted and Ubuntu loaded as normal.
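
For reference, here is roughly what steps 6 to 10 look like as a single terminal session. It’s a sketch rather than a definitive script: the sed one-liners are just an alternative to the nano edits described above (so check the files afterwards) and volumeidentifier is a placeholder for whatever volume the mount command reported:

    sudo mount -o bind /dev /media/volumeidentifier/dev
    sudo chroot /media/volumeidentifier/ /bin/bash
    mount -t proc none /proc
    # Swap "quiet splash" for "vga=788" and comment out the hidden timeout in the GRUB defaults
    sed -i 's/quiet splash/vga=788/' /etc/default/grub
    sed -i 's/^GRUB_HIDDEN_TIMEOUT=/#GRUB_HIDDEN_TIMEOUT=/' /etc/default/grub
    # Add noreplace-paravirt to the kernel arguments used for the generated menu entries
    sed -i 's/args="\$4"/args="$4 noreplace-paravirt"/' /etc/grub.d/10_linux
    # Rebuild the GRUB configuration (errors about the partition list can be ignored)
    update-grub
    exit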

There are still a few things I need to sort out: there are no Virtual Machine Additions for Linux on Virtual PC (only for Hyper-V), which means no mouse/keyboard integration; and the Ctrl-Alt-left arrow release key combination clashes with the defaults for Intel graphics card drivers (there are some useful Virtual PC keyboard shortcuts).  Even so, getting the OS up and running is a start!

How to be an Internet private eye

This post makes me slightly uneasy… most of the information is taken from a presentation I saw recently – so I would like to give credit to the original presenter, except that he specifically asked me not to. The reason for this is that he’s not a lawyer, and he was worried that some of this advice may not be legal in certain jurisdictions. I’m not a lawyer either, so I’ll make a statement up front: I think the activities suggested in this post are legal in the UK (where I live), but I’m not qualified to give advice on this. Before carrying out any of the actions in this post, it may be advisable to check the legal situation in the country where you live (and/or where the websites you are checking out are hosted). I cannot be held responsible for any actions taken by others based on the advice I have published here. My sole purpose in publishing this information is to share what may be useful to others when trying to protect their personal or professional identity online… in short, I am aiming to do the right thing here.

Your identity (whether it’s personal, or a corporate brand) is precious.  Sometimes, unscrupulous individuals, or those who may have a grudge against you, may impersonate you or your brand online.  When that happens, it can be useful to know a little more about who is using your identity as you attempt to reclaim it. Hopefully some of these suggestions will be useful in tracking down who is using your identity, whether it’s to send unsolicited e-mails, to (mis-) use your brand or trademark online, or just to get some idea of your own online footprint.

It can be quite interesting to understand your Internet footprint – and automated tools such as RapLeaf can be used to see the social profile for given e-mail address(es) on a number of popular sites across the web. Companies can find out about their customers, but individuals can check their details too – I was surprised to find, when I logged in, that it had already identified me on Flickr and WordPress (suggesting that one of RapLeaf’s customers had already run a search on me)… it’s far from complete but may provide a few more clues about who someone is (or highlight to you the information that you publish online). Even more of an eye-opener was Gist which, once supplied with my public Facebook and Twitter accounts, found a huge amount of information about me from a variety of online sources – and most of it was accurate (it had linked me to my employer’s sister company, probably because that was the information it gained from one of my contacts).

The next tool that may be useful is Open Site Explorer. This link popularity checker and backlink analysis tool can be used to understand where links to a given URL originate, including the URL’s page authority, domain authority, linking domains and total links. So, if you find an anonymous blog, it will show which sites link to that blog – which may provide a clue as to whose site it is (i.e. an anonymous blogger may also have other online personas).

If you want to find something on the ‘net, Google is your friend: by searching for snippets of text, comments, etc. it’s possible to identify the original source of an item.  And Google’s cache is a goldmine – even after a website has been taken offline, its contents may well still exist in the Google cache!

Sites like Knowem can be used to see who is using a particular name (or trademark) on a variety of sites across the Web – that can be useful if you want to protect your brand.

IP tools can provide all sorts of information for would-be Internet sleuths. Many are just standard Unix tools, exposed via a website, and not everything can be relied upon (for example, my IP address belongs to my ISP, whose registered location is several hundred miles away – although they know who I am if I’ve been up to no good). Domain tools can provide a detailed site profile as well as whois information, including reverse IP lookups to understand who else shares a given server (noting that those sites may or may not be affiliated in some way). You can also find out which sites share a given IP address using a decision engine such as Bing: try searching for ip:ipaddress to see all of the sites at a given address.
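
As a rough illustration, much of this can also be done locally with the standard command line tools that many of these websites simply wrap. In the sketch below, example.com and 192.0.2.1 are just placeholder values from the documentation ranges – substitute the domain or address you are investigating:

    # Who registered the domain, and who owns the netblock behind an IP address?
    whois example.com
    whois 192.0.2.1
    # Resolve the site's address, then do a reverse (PTR) lookup on it
    dig +short example.com
    dig +short -x 192.0.2.1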

E-mail headers can be useful to find out where an e-mail originated (or which servers it passed through).  In Microsoft Outlook, view the message headers or, in Google Mail, select Show Original.  The resulting information (IP addresses, etc.) can be fed into some of the IP tools (e.g. traceroute or whois) to find out more about the message – e.g. to track down a spammer (and block them!).
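
For example, if a suspect message is saved out as a plain text file, the relay information can be pulled from its headers and fed straight into the same tools. In this sketch, message.eml is a placeholder filename and 203.0.113.25 is a documentation address standing in for whichever originating IP the headers reveal:

    # List the relay hops recorded in the message headers (the lowest Received line is usually the origin)
    grep -i '^Received:' message.eml
    # Then look up who owns that originating address and how traffic routes to it
    whois 203.0.113.25
    traceroute 203.0.113.25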

Of course, if you wanted to find out who someone was, you could send them an e-mail and try to trap them using the same techniques that the phishers use… but that wouldn’t be a good idea – it’s almost certainly illegal, and I’m not condoning it – indeed, the only reason I mention it here is to say “don’t do it”.

One more clue as to who is watching you online (unfortunately not free, but potentially useful when tracking down an impersonator) is a dashboard called Trovus, which can be used to build a profile of who accesses your website and from where.

If you discover that your identity is being used inappropriately, the first thing to do is to contact the relevant service providers (perhaps a hosting company for a website or mail server, or maybe a public website) and, even though you may not see a response, they may be taking action that’s not visible to you (e.g. offline, via another medium, or using lawyers) – hopefully you’ll at least get a response to say “thanks, we’ll be in touch”.  Whilst the actions in this post may not provide all the answers on who is impersonating you, they are at least the first steps to allow you to contact the appropriate organisations for further assistance.

Lies, damn lies, and Apple marketing

Earlier today I retweeted a tweet from The Guardian’s technology editor, Charles Arthur, about a Sophos blog post highlighting an undocumented change to Mac OS X that appears to guard against a particular malware exploit.

The response I got was an accusation of having a half-empty iGlass and being an iHater. To be fair, the “accuser” was a friend of mine, and the comments were probably tongue in cheek (maybe not, based on the number of follow-up tweets…) but I was sure I’d read something on the Apple website about Macs not getting viruses, so I had a quick look…

Here is a quote from the Apple website, on why you’ll love a Mac:

It doesn’t get PC viruses.
A Mac isn’t susceptible to the thousands of viruses plaguing Windows-based computers. That’s thanks to built-in defenses in Mac OS X that keep you safe, without any work on your part.

Of course not – Macs (which these days have almost nothing other than design aesthetics and the operating system to distinguish them from any other PC – i.e. a personal computer running Windows, Linux or something else) don’t get the same viruses as Windows machines. No, they have their own “special” sort of (admittedly rare) malware, which Apple is fortunate enough to be able to patch within the operating system. That will be the “built-in defenses” (sic) they talk about then. So why not be transparent and mention them in the release notes for the updates?

That’s the big text… then we get:

Safeguard your data. By doing nothing.
With virtually no effort on your part, Mac OS X defends against viruses and other malicious applications, or malware. For example, it thwarts hackers through a technique called “sandboxing” — restricting what actions programs can perform on your Mac, what files they can access, and what other programs they can launch. Other automatic security features include Library Randomization, which prevents malicious commands from finding their targets, and Execute Disable, which protects the memory in your Mac from attacks.

Download with peace of mind.
Innocent-looking files downloaded over the Internet may contain dangerous malware in disguise. That’s why files you download using Safari, Mail, and iChat are screened to determine if they contain applications. If they do, Mac OS X alerts you, then warns you the first time you open one.

Stay up to date, automatically.
When a potential security threat arises, Apple responds quickly by providing software updates and security enhancements you can download automatically and install with a click. So you’re not tasked with tracking down updates yourself and installing all of them one by one.

Protect what’s important.
Mac OS X makes it easy to stay safe online, whether you’re checking your bank account, sending confidential email, or sharing files with friends and coworkers. Features such as Password Assistant help you lock out identity thieves who are after personal data, while built-in encryption technologies protect your private information and communications. Safari also uses antiphishing technology to protect you from fraudulent websites. If you visit a suspicious site, Safari disables the page and displays an alert warning you about its suspect nature.

As a parent, you want your kids to have a safe and happy experience on the computer. Mac OS X keeps an eye out even when you can’t. With a simple setup in Parental Controls preferences, you can manage, monitor, and control the time your kids spend on the Mac, the sites they visit, and the people they chat with.

Now, to be fair to Apple, with the exception of the bit about viruses (and let’s put aside the point that viruses are only one potential form of malware), they don’t suggest that they are unique in any of this… but the page does imply it, and talks about how Macs are built on the world’s most advanced operating system (really?). So let’s take a look at Apple’s bold claims:

  • Safeguard your data by doing nothing. “Sandboxing” – Windows has that too. It prevents malicious applications from accessing sensitive areas of the file system and the registry using something called User Account Control (UAC). You may have heard about it – generally from people getting upset because their badly-written legacy applications didn’t work with Windows Vista. Thankfully, these days things are much better. And I’m sure my developer colleagues could comment on the various sandboxes that .NET and Java applications use – I can’t, so I won’t, but let’s just say OS X is not alone in this regard.
  • Download with peace of mind.  Internet Explorer warns me when I attempt to download an application from a website too.  And recent versions of Windows and Office recognise when a file has originated from the Internet.  I have to admit that the Safari/OS X solution is more elegant – but, if Macs don’t get viruses, why would I care?
  • Stay up to date, automatically.  Windows has Automatic Updates – and the update cycle is predictable: Once a month, generally, on the second Tuesday; with lots of options for whether to apply updates automatically, to download and notify, or just to notify.  Of course, if you want to patch the OS manually, then you can – but why would you start “tracking down updates yourself and installing all of them one by one”?
  • Protect what’s important.  I’ll admit that Windows doesn’t have a password manager but it does have all the rest of the features Apple mentions: encryption (check); anti-phishing (check); warnings of malicious websites (check); parental controls (check).

I’m sure that a Linux user could list similar functionality – Apple is not unique – this is run-of-the-mill stuff that any modern operating system should include.  The trouble is that many people are still comparing against Windows XP – an operating system that’s approaching its tenth anniversary, rather than any of the improvements in Vista (yes, there were many – even if they were not universally adored) and 7.

So, back to the point:

“@markwilsonit Seriously? We needed confirmation?! Apple often patches security holes. Your iGlass is still half empty, then? #ihater”

[@alexcoles on Twitter, 18 June 2010]

Patching security holes in software (e.g. a potential buffer overflow attack) is not the same as writing signature code to address specific malware.  I’m not an iHater: I think it’s good that Apple is writing AV signatures in their OS – I’d just like them to be more open about it; and, as for the criticism that I don’t write much that’s positive about Apple, I see it as having an ability to see past the Steve Jobs Reality Distortion Field and to apply my technical knowledge to look at what’s really there underneath the glossy exterior.

I should add that I own two Macs, three iPods and an iPhone (I also owned another iPhone previously) and hope to soon have the use of an iPad. In general, I like my Apple products – but they’re far from perfect, despite what the fanboys and Apple’s own marketing machine might suggest.

Do it for Dad

Last March, I blogged about The Prostate Cancer Charity’s Prostate Cancer Awareness Month. Whilst my support is for personal reasons, it’s worth highlighting this disease (and supporting research) because it’s the most common cancer in men and it still kills one man in the UK every hour: find out more at hiddencancer.org.uk.

With Father’s Day just around the corner (at least, it is in the UK – I think it varies around the world), why not join in with The Prostate Cancer Charity’s “Do It For Dad” campaign?

You can send your Dad a Father’s Day card whilst supporting Dads (and all other men) who have been affected by Prostate Cancer and continuing to fund research by making a donation to The Prostate Cancer Charity.

(markwilson.it is not affiliated with The Prostate Cancer Charity; however I do support its activities and invite readers of this blog to do so too)

Connecting an E.ON EnergyFit Monitor to Google PowerMeter

The energy company that I buy my electricity and gas from (E.ON) is currently running an “EnergyFit” promotion where they will send you an energy monitor for free. A free gadget sounded like a good idea (I have a monitor but it’s the plugin type – this monitors the whole house) so I applied for one to help me reduce our family’s spiraling energy costs (and, ahem, to help us reduce our environmental footprint).

The EnergyFit package appeared on my doorstep sometime over the weekend and setup was remarkably easy – there’s a transmission unit that loops around the main electricity supply cable (without any need for an electrician) and a DC-powered monitor that connects to this using a wireless technology called C2, which works on the 433MHz spectrum (not the 2.4GHz band used by some Wi-Fi networks, baby monitors, some cordless phones, etc.). Within a few minutes of following E.ON’s instructions, I had the monitor set up and recording our electricity usage.

The monitor is supplied with E.ON’s software to help track electricity usage over time and it seems to work well – as long as you download the data from the monitor (using the supplied USB cable) every 30 days (that’s the limit of the monitor’s internal memory).

I wondered if I could get this working with Google PowerMeter too (Microsoft Hohm is not currently available in the UK) and, sure enough, I did.  This is what I had to do:

  1. Head over to the Google PowerMeter website.
  2. Click the link to Get Google PowerMeter.
  3. At this point you can either sign up with a utility company, or select a device.  The E.ON-supplied device that I have is actually from a company called Current Cost so I selected them from the device list and clicked through to their website.
  4. Once on the Current Cost website, click the button to check that your device will work with Google PowerMeter.
  5. The E.ON EnergyFit monitor is an Envi device – click the Activate button.
  6. Complete the registration form in order to download the software required to connect the monitor to Google.
  7. Install the software, which includes a registration process with Google for an authorisation key that is used for device connection.
  8. After 10 minutes of data upload, you should start to see your energy usage appear on the Google PowerMeter website.

Of course, these instructions work today but both the Google and Current Cost websites are subject to change – I can’t help out if they do, but you should find the information you need here.

There are some gotchas to be aware of:

  • The monitor doesn’t keep time very well (mine has drifted about 3 minutes a day!).
  • Configuring the monitor (and downloading data to the E.ON software) requires some arcane keypress combinations.
  • According to the release notes supplied with the Current Cost software, it only caches data for 2 hours so, if your PC is switched off (perhaps to save energy!), Google fills in the gaps (whereas the E.ON Energy Fit software can download up to 30 days of information stored in the monitor).
  • You can’t run both the E.ON EnergyFit and the Current Cost Google PowerMeter applications at the same time – only one can be connected to the monitor.

If your energy company doesn’t supply power monitors, then there are a variety of options for purchase on the Google PowerMeter website.

Enabling Aero glass in a Windows 7 Virtual Machine

The notebook PC that I use most days runs Windows 7 Enterprise Edition (x64) and I have a virtual machine running the 32-bit version of Windows 7 Ultimate Edition for things that I don’t want on my work PC (installing personal software, etc. that would otherwise break organisational security policies).  Incidentally, the reasons this virtual machine runs a 32-bit version of Windows are that Virtual PC does not support 64-bit guests, Microsoft does not have a client-side hypervisor and Citrix XenClient will not install on my machine (I have VT-x enabled but I can’t enable VT-d in the BIOS).  Of course, I could use VirtualBox or VMware Workstation to run 64-bit guests but I already have Virtual PC installed for Windows 7 “XP Mode” and there’s no reason to run yet another virtual machine manager.

The host system has Windows 3D graphics effects (Aero) enabled, but the guest did not seem to be recognising them, even after I installed the Virtual PC Integration Components and restarted. This gave me some decent choices for display settings, and my mouse could move freely between the guest and host operating systems, but the graphics were plain and dull, even after selecting an Aero-enabled theme.

The trick (as highlighted by Redmond Pie) is to select the option to Enable Integration Features from Windows Virtual PC’s Tools menu.  In order to do this, I needed to supply some credentials and, because I was not running as Administrator (nor should anyone be on Windows 7), I needed to add the relevant user to the guest’s Remote Desktop Users group first (don’t be confused by the message suggesting that Remote Desktop requires a firewall exception – it does, but Virtual PC’s integration features do not).

Once my account had been given the necessary permissions and integration features enabled, my virtual machine was able to make full use of the graphics capabilities provided by the host PC – including “Aero glass”.

Forcibly deleting remnants of a Windows user profile

Every now and again, I come across an issue that takes up far more of my time than it should do… and, this afternoon, that’s exactly what happened so I’m posting the details here for anyone else in the same situation.

I’d installed a new copy of Windows in a virtual machine and, because it wasn’t domain-joined as part of the setup, the out of the box settings made me create a local user account that I didn’t need.  After joining the domain and logging in as an administrator, I deleted the account and the profile, but was given a warning that not all files were removed.  I checked and, sure enough, the folder for the account (called Mark) was still there in C:\Users.  All of the usual attempts to remove it failed, regardless of what I did with permissions until I found a post on the My Digital Life blog, titled Delete Undeletable Files in Windows Vista.  I was running Windows 7 but this advice for Vista sounded hopeful, so I ran the following commands against the folder that was causing me grief:

takeown /f foldername /r /d y
icacls foldername /grant administrators:F /t

(For a single file, the /r switch – which recurses through the folder structure – would not have been needed on the takeown command.)

I was still having trouble deleting the folder from Windows Explorer; however these commands had given me the clue I needed (and answered why Explorer told me that the location was shared, but it didn’t show up in the list of shares…) – the AppData hidden folder was still there.

Using the command line, I navigated to C:\Users\Mark\AppData and its two trees (Local and Roaming) to remove around 10 files and folders, after which I was able to successfully remove the C:\Users\Mark folder.

With that out of the way, I could log in with my domain account (also called Mark) and its profile was created at C:\Users\Mark instead of C:\Users\Mark.domainname.

Why CEOs don’t blog/tweet

Yesterday, I tweeted about a Harvard Business Review article published by Bloomberg that asks “Is the typical CIO a ‘Gear Guy?'” and it reminded me to post something about Rob Shimmin‘s talk on “Why CEOs don’t tweet” at last month’s Dell B2B Social Media Huddle (#dellb2b).

It was a fascinating talk and I hope Rob won’t mind me sharing the key points in this post.

Setting the scene

  • Very few CEOs blog or tweet (and even fewer in the B2B world).
  • All CEOs are good communicators but their skills may vary according to where/how they are communicating (face to face, auditorium, etc.). Your CEO may be nervous in front of some audiences – so look at them and see where they fit.

Should a CEO communicate using social media?

  • Maybe not? Some CEOs have unbelievable restrictions on what they can say and their message may be scrubbed clean to the point where the content is not that useful. One example is Jonathan Schwartz (former CEO of Sun Microsystems), whose current blog is titled “What I couldn’t say…”.
  • The primary reason that CEOs don’t blog or tweet is time. Use of social media needs to be transparent and, if a blog is ghost-written, then it’s important to say so. Rob spoke of how some CEOs sit down with the guys who write their posts at 8am each day and tell them what they are thinking. Others want to write for themselves but it’s difficult to switch someone from “command and control” mode to “talking through a keyboard” (unfiltered).
  • Permanence is another consideration (the United States Library of Congress is cataloging all tweets) – think about what happens when two organisations are competing and they later merge – those blog posts and tweets are still there for all to see their previous history of conflict!
  • CEOs may also have some restrictions around what they can say (for example, as a result of regulation or fair disclosure requirements).
  • For B2B CEOs, control is a big issue – it’s difficult for them to let people say what they like in social media!  Lawyers and HR may also have a view.  Then consider that social media reaches a wide audience and impacts buying patterns… whereas B2B CEOs worry about a small number of important contacts.
  • All CEOs are interested in recruiting great talent and getting their message to staff in the event of a crisis. Often, organisations are collaborating with other partners on a product or service – they can’t be completely open but there is a need to collaborate externally [and to communicate internally].
  • B2B and B2C communications are closer than many might think. Rob’s example was that, if British Airways’ engine supplier is “volcanic ash friendly”, that could impact on a consumer’s airline choice (i.e. B2B becomes B2C when the public are interested). And even if there is no risk (i.e. that all jet engines are equally ash friendly, or not!), there may be a perception of risk by the public – again, B2B organisations need to think about what the public thinks as a B2C organisation would.  In essence, it’s important to think “if my customer’s customer is interested in something, what am I doing to address it?”

Triggers for B2B social media communications

  • Crisis (fixing negative PR [BP must surely be upset about the @BPGlobalPR Twitter account!]).
  • Competition (sounding knowledgeable on a topic that people care about – “they have a good approach to xyz… should we have one?”).
  • Cost effectiveness (look at the reach of various social networking platforms – although it’s important to consider richness, not just reach).
  • Powerful channel (recognising that social media can play an important role in communicating with both customers and employees).

Using social media for B2B crisis management

“Seeing a CEO grapple with social media can be a bit like seeing your Dad dance at your wedding with his baseball cap on backwards!”

[source unknown]

  • A crisis isn’t the time to launch a social media presence – the CEO’s message can be passed out through existing channels, and other (often younger), more technologically-savvy people in the organisation can get the CEO to comment through their blogs.
  • In a crisis, suddenly everything is watched and old, previously uninteresting content becomes interesting (so, because of permanence, it’s important to future-proof the message).
  • Some CEOs will take well to social media, whilst others are not so comfortable – it’s important to play to your CEO’s strong points.

Social Media is on the B2B RADAR

  • Social media can have a negative effect too – in the recent British Airways strike negotiations, a senior Union leader provoked controversy by tweeting from the negotiation table (and then compounded the issue by tweeting as he enjoyed himself at a football match, whilst passengers were grounded by strikes).
  • CEOs make mistakes like the rest of us but, if they have a good setup around them, they can survive; however CEOs are less likely to survive contention (particularly if old content is surfaced later) than “Jake from marketing aged 24” is when he tweets about suffering from a hangover.
  • CEO use of social media should be about: earning trust; having an industry voice (building communities); monitoring issues (getting ready to react to consumers’ needs and concerns); talking to employees (listening too); driving innovation (encouraging idea sharing); and recruiting talent (leveraging connections).

The top 10 challenges for Heads of Digital Communications (HDCs)

  1. Lack of understanding
  2. Loss of control
  3. Demographic apartheid
  4. Fragmentation of media
  5. Speed of change and response
  6. Rules of engagement
  7. Privacy and corporate security
  8. Finding good people
  9. Lack of effective metrics
  10. Ownership of digital

(From Watson Helsby Executive Search’s “Digital Communications and Social Media: the challenges facing the PR industry” report) – some more quotes from it:

“Under 30’s are the digital natives – but they lack the all-round communications skills, gravitas and credibility.”

“Digital communications is a destabilising force in a bureaucratic environment.”

“38% of HDCs were in favour of a total ban on social media in the office.”

Cornerstones for CEO communications

  • Consider all audiences – you can no longer speak to just one.
  • Think before you speak – consider the “New York Times Test” (never write down anything you would be uncomfortable seeing in tomorrow’s New York Times).
  • Consider content rather than tone – strip away any negative tones and focus on the issue.
  • Scope – decide early what’s in and out.
  • Know your influencers – who must be reached in a crisis.
  • Be honest, open and transparent.

Credits

This post is based almost entirely on the presentation that Rob Shimmin (@robshimmin) gave at the Dell B2B Social Media Huddle – the original deck is on Slideshare.

Incidentally, I notice that, when Rob’s slides were uploaded to Slideshare, the title was changed to “Why CIOs don’t tweet” – that would be an entirely different discussion…

The importance of getting your images online early

Last Saturday, I spent a wonderful afternoon and evening at a friend’s wedding.  As usual, I had my camera with me and, as usual, Mrs. Wilson understood when I kept dashing off to take yet another photo.

When I saw the official photographer pointing a camera in my direction, I joked that I take the pictures and don’t appear in them (at least not if I can help it) but I was totally unprepared for what happened when she saw me using my medium telephoto (70-200mm f2.8) lens to take a shot of the Groom and Best Man – she rushed up and asked me about the gear I was using and, although I don’t remember it, others who were there later said she asked if I was a professional. The daft thing is that gear doesn’t really matter – yes, I was using a Nikon D700 and she was using a D90, but the D700 is weighty and, if you prefer a lighter camera (or want to shoot video), then the D90 might be quite a good choice. I’m sure that she took much better photos than me because: a) I was shooting in Program mode with auto focus (so the camera was doing the work, not me); and b) I had consumed a significant volume of wine during the course of the afternoon.

The best part of it though, was that I was there to enjoy myself, so I didn’t have any of the pressures of being an “official” photographer – organising people and needing to make sure that every shot was spot on, because there are no second chances at shooting a wedding.

This was the first wedding I’ve been to in a few years (pretty much since the switch from film to digital) and I thought it was brilliant to be given details on the day of where to go to view the official pictures. I was surprised, though, to see that it said “images take approx 2 weeks” as that seems a long time to get some digital images online (even with post-processing) – others around me thought perhaps that was the time it takes for fulfilment of orders. Well, it’s now more than 48 hours since the photographers left the venue and the official site says that “photos have not been uploaded yet, please check back soon…” so I guess it really could be a while until the pictures go up there.

I remember from my own wedding how pleased we were to see a few prints before we went on honeymoon – the official ones took a while but that was because they were negatives: there were several hundred 35mm images and a load more medium format ones to be processed and printed. Back then, the few digital images we had were not that great (over-sharpened JPEGs at around 3 megapixels with over-saturated colours) but even consumer cameras create 10 or 12 megapixel images today and the in-camera processing has got a lot better (as has the availability of affordable software for post-processing). Maybe the official photographer is waiting for the Bride and Groom to return from honeymoon before releasing the images but, in these days of social networking, Facebook and Flickr have potentially taken away some of her image sales because friends and family have already shared their pictures from the day.

I know that, technically, my shots were far from spot on: I should have paid more attention to the aperture I used on some of them, for example, and I should have used a longer lens for the wedding speeches (by then, the 70-200mm zoom was back in the car and I was using a 24-85mm zoom) but I was really, really pleased with a message I received tonight praising my pictures (from the Bride’s mother, no less).  As I said earlier, I didn’t have any of the pressures of being an “official” photographer - and I’m sure the official images will be fantastic when we see them.

I guess what I’m saying is that I’m surprised that professional wedding photographers don’t try harder to get their images online before the amateurs get in there. Within 24 hours, I saw three online albums from friends and family – and there were some great images.  Professional photographers work hard to make a living – this one has a great portfolio on her website and some very reasonable prices too – it seems crazy to throw away image sales by missing out on the guests’ post-wedding excitement.