After several years of monthly “useful links” posts, I’ve decided that this will be the last one – the plugin I use to read from my Delicious account and generate the post stopped working a few months ago, and the useful links can also be found directly (on my Delicious feed) or via Twitter (@markwilsonit, prefixed [delicious])
6.30am, sometime over the Christmas holidays: after being woken by one of our sons, my wife informs me that there’s a strange noise coming from one of the computers in the office… bleary-eyed, I stumble to my desk and shut down the machine, before returning to my slumber.
Thankfully, it was just a noisy fan, not (yet) another hard disk failure but it did require attention, which involved me learning a little bit more than I should need to about the innards of a PC… so I’m blogging the key points for future reference.
I initially swapped the case fan for one I picked up from Maplin (I could get cheaper online, but not once I’d taken into account shipping for such a small item) but found it was the one on the Intel D945GCLF2 board that was making most of the noise. So I put the Maplin unit there instead (it’s not the CPU that needs cooling, but the inefficient Northbridge/GPU that accounts for most of the power consumption on this board – the Atom 330 is only using about 8W and is passively cooled).
Unfortunately the screws that fixed the OEM fan to the heatsink wouldn’t fit the replacement, so I used a piece of plastic-coated wire instead to poke through the holes and twist tight – it’s functional at least.
With the case fan also making a racket now, I found that it only did so when sucking air into the case (the fan seems to brush on the case when attached that way round). I’d assumed that a fan on the bottom of a case should bring in cold air, with hot air rising to the holes on the top of the case. So I flipped the fan over (I’m not sure which way it was originally pointed) and it’s now blowing air out of the bottom (the only place to fix a fan). Fingers crossed, it’s doing something… monitoring with Open Hardware Monitor tells me my CPU is fine but SpeedFan suggests something else is running a little warm!
I’m fortunate enough to live in a pleasant market town which generally has a low crime rate. Unfortunately, recent months have seen a significant increase in the number of burglaries and, with Thames Valley Police seemingly mystified as to who the culprits are (other than suspecting that they are coming in “across the border from Northamptonshire”!), I started to look into ways to increase the security of our home.
Of course, if someone wants to get into your house they will find a way but the advice we’ve been given can be paraphrased as “make sure your house is less attractive than the alternative” and, although I already have several security measures in place, an extra security light (with PIR) on the front drive was an inexpensive modification (and also quite handy when arriving home in the dark).
It took me a few hours – the hardest part was getting cable clips to attach to the blockwork/mortar that makes up the interior walls of our garage – but I got there in the end. For a description of the electrical changes, there’s some good advice on the ‘net, like the description of the project at lets-do-diy.com. Unfortunately, there’s also a fair bit of scaremongering out there – this post on the IET forums is a great example: one user asks if the original poster is qualified, highlighting that a circuit could be overloaded; others say that any circuit can be overloaded, but that’s the point of adding a fuse where the rating of the cable changes! Others point out that there are degrees of experience and that qualification has very little to do with competence. From my perspective it’s good to see that electricians are no different to us IT bods – still dealing with the fallout from bodged DIY jobs and squabbling over the value of certifications over experience!
A few weeks ago, I bought my first flat screen TV. The old (c1998) Sony Trinitron still works, but it was starting to lose the colour a little around the edges and was, frankly, taking up a huge chunk of living room so I splashed out and bought a Samsung UE37ES6300 from John Lewis.
I’m not bothered about 3D pictures but the Smart TV (Internet-connected) functionality is a huge bonus. Meanwhile, the availability of HDMI ports (no VGA on this year’s model) led me to hook up my old Mac Mini as a permanently connected place for Internet access in the living room (although the requirement is rapidly dropping as more and more Samsung Apps become available – Spotify appeared last night!).
Using a DVI to HDMI cable, the Mac was able to detect the 1080p display but it did enable overscan, which meant I was losing the edge of the picture. Turning off overscan helped, but didn’t use the whole display (and was also a bit fuzzy). With a bit of help from a friend (who, coincidentally, had come over and hooked his Linux machine up to the display), I worked out that the solution is to leave overscan enabled on the computer but to set the TV Picture Size to Screen Fit. I’m not sure if I can see much difference between 50Hz PAL and 60Hz NTSC but, seeing as this is a European model, I left the computer set to 50Hz PAL.
This resolved the display size but it was still not as sharp as I would expect for a native resolution display. Switching the Picture Mode from Standard to Movie made a big difference (although the colours were a little muted and there was a slight magenta cast) so I started to look at the differences between the two profiles. Now I’ve tweaked the Standard profile to bring down the sharpness from the default of 50 to 20 and turned off the Dynamic Contrast in the TV’s Advanced Settings and I think I’m pretty much there.
So, there you have it. I haven’t tried a Windows PC yet, but those settings seem to work well with the Mac – and the result is a much improved digital display output.
Adobe Flash has no place in the modern web. Unfortunately there are many sites that still use it, so it can’t be ignored entirely. This weekend I found I had no sound in my browser and it turned out to be Flash-related. This is what I found…
No sound in Google Chrome
Over the week, I tend to accumulate open browser tabs of things that look interesting but which I haven’t got time to read/watch in the working day. Written content is simple enough (it gets saved to Pocket – and then, often as not, never read); videos are less straightforward.
Anyway, I’d finally got round to watching a video link I’d been sent and found that I had no sound. Strange. Windows sound was working – I could test from Control Panel and in other apps – it seemed to be a problem for YouTube in my browser (Google Chrome).
Of course, being SharePoint (well, on 2007 at least), I couldn’t use an alternative browser but I was pretty sure the issue was related to the HTML generated and placed in a calculated column in my list. By creating a new view that excluded the problematic column (i.e. the one containing the HTML), I was able to edit as normal, without a browser crash.
What a crazy week. On top of a busy work schedule, I’ve also found myself at some tech events that really deserve a full write-up but, for now, will have to make do with a summary…
Amazon Web Services 101
One of the events I attended this week was a “lunch and learn” session to give an introduction/overview of Amazon Web Services – kind of like a breakfast briefing, but at a more sociable hour of the day!
Contrary to popular belief, AWS didn’t grow out of spare capacity in the retail business but out of building a service-oriented infrastructure for a scalable development environment – initially to provide development services to internal teams, and then to expose the Amazon catalogue as a web service. Over time, Amazon found that developers were hungry for more and they moved towards the AWS mission to:
“Enable business and developers to use web services* to build scalable, sophisticated applications”
*What people now call “the cloud”
In fact, far from being the catalyst for AWS, Amazon’s retail business is just another AWS customer.
Adobe Marketing Cloud
Most people will be familiar with Adobe for their design and print products, whether that’s Photoshop, Lightroom, or a humble PDF reader. I was invited to attend an event earlier this week to hear about the Adobe Marketing Cloud, which aims to do for marketers what the Creative Suite has done for design professionals. Whilst the use of “cloud” grates with me as a blatant abuse of a buzzword (if I’m generous, I suppose it is a SaaS suite of products…), Adobe has been acquiring companies (I think I heard $3bn mentioned as the total cost) and integrating technology to create a set of analytics, social, advertising, targeting and web experience management solutions and a real-time dashboard.
Milton Keynes Geek Night
The third event I attended this week was the quarterly Milton Keynes Geek Night (itself the third one to be held) – and it did not disappoint – well up to the standard I’ve come to expect from David Hughes (@DavidHughes) and Richard Wiggins (@RichardWiggins).
The evening kicked off with Dave Addey (@DaveAddey) of UK Train Times app fame, talking about what makes a good mobile app. Starting out from a 2010 Sunday Times article about the app gold rush, Dave explained why few people become smartphone app millionaires, and how to assess whether your idea stands a chance:
Is your mobile app idea really a good idea? (i.e. is it universal, is it international, and does it have lasting appeal – or, put bluntly, will you sell enough copies to make it worthwhile?)
Is it suitable to become a mobile app? (will it fill “dead time”, does it know where you go and use that to add value, is it “always there”, does it have ongoing use)
And how should you make it? (cross platform framework, native app, HTML, or hybrid?)
Dave’s talk warrants a blog post of its own – and hopefully I’ll return to the subject one day – but, for now, those are the highlights.
Next up were the 5 minute talks, with Matt Clements (@MattClementsUK) talking about empowering business with APIs to:
Increase sales by driving traffic.
Improve your brand awareness by working with others.
Increase innovation, by allowing others to interface with your platform.
Create partnerships, with symbiotic relationships to develop complementary products.
Create satisfied customers – by focusing on the part you’re good at, and letting others build on it with their expertise.
Then Adam Onishi (@OnishiWeb) gave a personal, and honest, talk about burnout, its effects, recognising the problem, and learning to deal with it.
And Jo Lankester (@JoSnow) talked about real-world responsive design and the lessons she has learned:
Improve the process – collaborate from the outset.
Don’t forget who you’re designing for – consider the users, in which context they will use a feature, and how they will use it.
Learn to let go – not everything can be perfect.
Then, there were the usual one-minute slots from sponsors and others with a quick message, before the second keynote – from Aral Balkan (@Aral), talking about the high cost of free.
In an entertaining talk, loaded with sarcasm, profanity (used to good effect) but, most of all, intelligent insight, Aral explained the various business models we follow in the world of consumer technology:
Free – with consequential loss of privacy.
Paid – with consequential loss of audience (i.e. niche) and user experience.
Open – with consequential loss of good user experience, and a propensity to allow OEMs and operators to mess things up.
This was another talk that warrants a blog post of its own (although I’m told the session audio was recorded – so hopefully I’ll be able to put up a link soon) but Aral moved on to talk about a real alternative with mainstream consumer appeal that happens to be open. To achieve this, Aral says we need a revolution in open source culture in that open source and great user experience do not have to be mutually exclusive. We must bring design thinking to open source. Design-led open source. Without this, Aral says, we don’t have an alternative to Twitter, Facebook, whatever-the-next-big-platform-is doing what they want to with our data. And that alternative needs to be open. Because if it’s just free, the cost is too high.
The next MK Geek Night will be on 21 March, and the date is already in my diary (just waiting for the Eventbrite notice!)
Photo credit: David Hughes, on Flickr. Used with permission.
One thing I particularly appreciate about Ryan’s presentations is that he approaches things from an architectural view. It’s a refreshing change from the evangelists I’ve met at other companies who generally market software by talking about features (maybe even with some design considerations/best practice or coding snippets) but rarely seem to mention reference architectures or architectural patterns.
During his presentation, Ryan presented a reference architecture for utility computing and, even though this version relates to AWS services, it’s a pretty good model for re-use (in fact, the beauty of such a reference architecture is that the contents of each box could be swapped out for other components, without affecting the overall approach – maybe I should revisit this post and slot in the Windows Azure components!).
So, what’s in each of these boxes?
AWS global infrastructure: consists of regions to collate facilities, with availability zones that are physically separated, and edge locations (e.g. for content distribution).
Networking: Amazon provides Direct Connect (dedicated connection to AWS cloud) to integrate with existing assets over VPN Connections and Virtual Private Clouds (your own slice of networking inside EC2), together with Route 53 (a highly available and scalable global DNS service).
Compute: Amazon’s Elastic Compute Cloud (EC2) allows for the creation of instances (Linux or Windows) to use as you like, based on a range of instance types, with different pricing – to scale up and down, even auto-scaling; Elastic Load Balancing allows the distribution of EC2 workloads across instances in multiple availability zones.
Storage: Simple Storage Service (S3) is the main storage service (Dropbox, Spotify and others run on it) – designed for write once, read many applications. Elastic Block Store (EBS) can be used to provide persistent storage behind an EC2 instance (e.g. boot volume) and supports snapshotting, replicated within an availability zone (so no need to RAID). There’s also Glacier for long term archival of data, AWS Import/Export for bulk uploads/downloads to/from AWS and the AWS Storage Gateway to connect on-premises and cloud-based storage.
Databases: Amazon’s Relational Database Service (RDS) provides database as a service capabilities (MySQL, Oracle, or Microsoft SQL Server). There’s also DynamoDB – a provisioned throughput NoSQL database for fast, predictable performance (fully distributed and fault tolerant) and SimpleDB for smaller NoSQL datasets.
Application services: Simple Queue Service (SQS) for reliable, scalable message queuing for application decoupling; Simple Workflow Service (SWF) to coordinate processing steps across applications and to integrate AWS and non-AWS resources, managing distributed state in complex systems; CloudSearch – an elastic search engine based on Amazon’s A9 technology to provide auto-scaling and a sophisticated feature set (comparable to Solr); CloudFront for a worldwide content delivery network (CDN), to easily distribute content to end users with a single DNS CNAME.
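DynamoDB’s provisioned throughput model – you pay for a fixed rate of reads/writes per second and anything beyond it is throttled – is essentially a token bucket. Here’s a toy sketch of that concept in Python (the class and method names are mine for illustration, not part of any AWS SDK):

```python
class ProvisionedThroughput:
    """Toy token bucket: `capacity` units are available each second;
    a request is admitted only if enough units remain."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.tokens = capacity

    def tick(self):
        # One second elapses: refill up to the provisioned capacity
        self.tokens = self.capacity

    def request(self, units=1):
        # Admit the request if capacity remains, otherwise throttle it
        if self.tokens >= units:
            self.tokens -= units
            return True
        return False


bucket = ProvisionedThroughput(capacity=5)
admitted = sum(bucket.request() for _ in range(8))
print(admitted)          # 5 – the other 3 requests were throttled
bucket.tick()
print(bucket.request())  # True again after the per-second refill
```

The real service is, of course, distributed and more sophisticated, but the principle – a fixed, pre-paid rate with throttling beyond it – is the same.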
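The decoupling role that SQS plays can be illustrated locally with Python’s standard-library `queue` module standing in for the SQS send/receive API – the producer and consumer below never call each other directly, only the queue (the message names and sentinel convention are mine, not SQS semantics):

```python
import queue
import threading

# Local stand-in for an SQS queue: the two tiers only share the queue
messages = queue.Queue()
processed = []


def producer():
    for n in range(3):
        messages.put(f"order-{n}")  # cf. sending a message to SQS
    messages.put(None)              # sentinel: no more work


def consumer():
    while True:
        msg = messages.get()        # cf. polling SQS for a message
        if msg is None:
            break
        processed.append(msg.upper())  # do the actual work
        messages.task_done()


t = threading.Thread(target=consumer)
t.start()
producer()
t.join()
print(processed)  # ['ORDER-0', 'ORDER-1', 'ORDER-2']
```

Because neither side knows about the other, either tier can be scaled, replaced or taken offline independently – which is exactly the property SQS gives you between distributed components.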
I suppose if I were to re-draw Ryan’s reference architecture, I’d include support (AWS Support) as well as some payment/billing services (after all, this doesn’t come for free) and the AWS Marketplace to find and start using software applications on the AWS cloud.
One more point: security and compliance (security and service management are not shown as they are effectively layers that run through all of the components in the architecture) – if you implement this model in the cloud, who is responsible? Well, if you contract with Amazon, they are responsible for the AWS global infrastructure and foundation services (compute, storage, database, networking). Everything on top of that (the customisable parts) is up to the customer to secure. Other providers may take a different approach.