Way back in the autumn of 2012, I was getting all excited about my Raspberry Pi. I even hacked around to get it working over Wi-Fi but never got around to publishing the post! So, a year and a bit later, here are a few notes based on some links I recorded at the time. Your mileage may vary (the Raspberry Pi has come a long way since then and I was running Debian Squeeze rather than Raspbian) but if you’re having difficulties getting RasPi Wi-Fi to work, hopefully some of this will help.
I set up local accounts for the kids, with parental controls (if you don’t use Windows Family Safety, then I recommend you do! There’s no need for meddling government firewalls at ISP level – all of the major operating systems have parental controls built in – we just need to be taught to use them…). Then I decided that my wife also needed a “Microsoft account” so she could be registered as a parent to view the reports and override settings as required.
Because my wife has an Office 365 mailbox, I thought she had a “Microsoft account” and I tried to use her Office 365 credentials. Nope… authentication error. It was only some time later (after quite a bit of frustration) that I realised that the “Organization account” used to access a Microsoft service like Office 365 is not the same as a “Microsoft account”. Mine had only worked because I have two accounts with the same username and password (naughty…) but they are actually two entirely separate identities. As far as I can make out, “organization accounts” use the Windows Azure Active Directory service whilst “Microsoft accounts” have their heritage in Microsoft Passport/Windows Live ID.
Tweeting my frustrations I heard back from a number of online contacts – including journalists and MVPs – and it seems to be widely accepted that Microsoft’s online authentication is a mess.
As Jamie Thomson (@JamieT) commented to Alex Simons (@Alex_A_Simons – the Programme Director for Windows Azure Active Directory), if only every “organization account” could have a corresponding “Microsoft account” auto-provisioned, life would be a lot, lot simpler.
At least once a month, I travel to Manchester for work. I tend to use the train, rather than drive because: it’s pretty straightforward; I can work on the journey; and I’m not so tired at the other end (although having the car with me can be more flexible at times).
Today is one of those days when I’m heading north but this time, instead of a straight out and back from Milton Keynes Central to Manchester Piccadilly, I need to be in Crewe tomorrow. That meant buying three single tickets – and even though my train from Manchester to Milton Keynes sometimes goes via Crewe, it cost more to break the journey than to go direct. That’s just one of the many vagaries of the British railway ticket system (and contrary to a popular money-saving tip)… go figure!
Anyway, the reason for this diatribe is that the Virgin Trains website defaulted to letting me collect my tickets from the “Fast Ticket” machine (a complete misnomer when it involves looking up and entering an 8-digit alphanumeric reference on a not-very-responsive touch screen using a non-QWERTY keyboard) at the origin of my last journey (i.e. Crewe) rather than my first (i.e. Milton Keynes Central).
In horror, after spending £150 on train tickets, I thought I would have to *drive* to Crewe to collect them! In a state of panic I called Virgin Trains (calls cost 4.5p a minute from a BT land line – on other networks you may need a small mortgage), who told me it doesn’t actually matter which station I collect the tickets from, as long as I have my payment card with me. Bizarre! So why ask me which station I want to collect from then?! (Maybe blame the Trainline.com back-end – or perhaps the rail ticketing systems…)
I didn’t trust the advice and didn’t want to be caught out whilst trying to catch the something-way-too-early train to Manchester this morning, so I headed to my local station to collect my tickets on Friday evening, just in case I needed to get someone at Virgin Trains to help me out. Actually, I drove over twice because I forgot my credit card on the first occasion and left it next to my laptop on my desk, from where I’d bought the tickets (idiot)!
Anyway, the verdict is that it really doesn’t seem to matter which station you select to collect your tickets at – you can collect them from any Fast Ticket machine at any station (as long as you have the card used to purchase them). Something that might be worth knowing if you ever find yourself panicking as a result of some poor UX design on a website…
I can’t believe that the quarterly Milton Keynes Geek Night is nearly upon us again. I usually try to blog about the evening but I’ve failed spectacularly on recent attempts. I might fail again with this week’s MKGN – not because I’m slow to get a blog post up but because the tickets “sold” out in something crazy like 2 minutes…
September’s Geek Night was up to the usual high standard (including the return of David Hughes – seems you can’t escape that easily!) but included one talk in particular that stood out above all of the others, when Ben Foxall (@BenjaminBenBen) showed us (literally) the other side of responsiveness… but we’ll come back to that in a moment.
Back to front performance
First up was Drew McLellan (@DrewM)’s take on “back to front” performance. You can catch the whole talk on SoundCloud but for me, as someone who runs a fairly shoddy WordPress site, it got me thinking about how performance is not just about optimising the user experience but also about the back end – perhaps summed up in one of the first points that Drew made:
“Website performance is about how your site feels.”
That may be obvious but how many times have you heard people talking about optimisation of one part of a site in isolation, without considering the whole picture? As Drew highlighted, performance is a feature to build in – not a problem to fix – and it’s also factored into search engine algorithms.
Whilst many performance gains can be found by optimising the “front end” (i.e. browser-side), there are some “back-end” changes that should be considered – sites need to be super-fast under normal load in order to be responsive under heavy load (quite simply, simultaneous requests affect responsiveness – they use memory, and the quicker you can process pages and release memory, the better!).
First up, consider hosting. Drew’s advice was:
Cheap hosting is expensive (shared hosting is cheap for a reason).
Shared hosting is the worst (rarely fast) – think about a virtualised or dedicated server solution instead. Size your hosting by CPU first, then RAM – not by disk space (that should be a red flag: disk is cheap, and if only a little is allocated it suggests lots of people crammed onto one server).
Consider what your project has cost to build when buying hosting! Use the best you can afford – and if they advertise with scantily clad ladies, they’re probably not very good (or to be encouraged).
Next, the content management system (CMS), where Drew says:
Think about the cost of external resources (going to database or web API, for example). Often these are necessary costs but can be reduced with careful architecture.
Employ DRY coding (don’t repeat yourself) – make sure everything has only a single representation in code. Do things once, then cache and reuse (unless you expect different results). For example, if something doesn’t change often (e.g. the post count by category on a blog), don’t calculate it on every page serve – instead, consider calculating it when adding/removing a post or category (known as denormalisation in database terms). Be smart: consider how real-time the data needs to be, and whether people are making decisions based on it.
Do the work once – “premature optimization is the root of all evil” is actually a quote from 1974, when line-by-line optimisation was necessary. Focus on the bottlenecks: “premature” should not be confused with “early” – if you know something will be a bottleneck, optimisation is not premature, it’s sensible.
Some frameworks focus on convention over configuration (code works things out, reduces developer decisions) – can lead to non-DRY code – so let’s make programming fun and allow the developer to work out the best way instead of burning CPU cycles. “Insanity is doing the same thing over and over again and expecting different results”.
The Varnish caching HTTP reverse proxy may be something to consider to speed up a website (unfortunately, Drew ran out of time to tell us more – and my hosting provider found it caused problems for some other customers, so had to remove it after giving it a try for me).
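Drew’s denormalisation point can be sketched in a few lines of code. This is my own illustration, not anything from the talk – the class and method names are hypothetical – but it shows the idea: pay the cost of counting once, at write time, rather than scanning every post on every page serve.

```python
class Blog:
    """Minimal sketch of denormalisation: keep a running post count
    per category, updated on add/remove, instead of recounting on
    every page view."""

    def __init__(self):
        self.posts = []            # list of (title, category) tuples
        self.category_counts = {}  # denormalised counts, updated at write time

    def add_post(self, title, category):
        self.posts.append((title, category))
        # Update the cached count once, when the data changes...
        self.category_counts[category] = self.category_counts.get(category, 0) + 1

    def remove_post(self, title, category):
        self.posts.remove((title, category))
        self.category_counts[category] -= 1

    def count_for(self, category):
        # ...so every page serve is a cheap dictionary lookup,
        # not a scan of all posts.
        return self.category_counts.get(category, 0)
```

The trade-off is exactly the one Drew described: the cached value can briefly lag reality if updates happen elsewhere, which is fine for a sidebar widget but not for anything people make decisions on.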
Great talk at #MKGN from @drewm on performance. Front-end page-speed is nothing without good hosting and an efficient back-end.
The first of the five-minute talks was from Christian Senior (@senoir – note the spelling of the Twitter handle, it’s senoir not senior!). Christian spoke about managing client expectations. Whilst my notes from Christian’s talk are pretty brief (it was only 5 minutes after all) it certainly struck a chord, even with an infrastructure guy like me.
Often, the difficult part is getting a client to understand what they are getting for their money (“after all, how hard can it really be?”, they ask!) – but key to that is understanding the customer’s requirements and making sure that’s what your service delivers. Right from the first encounter, find out about the customer (not just who they are, what they want and how much money they will spend – but also which browsers, devices, etc. are available) and try to include that detail in a brief – the small things count too and can be deliverables (incidentally, it can be just as important to distinguish the non-deliverables as the deliverables). Most of all, don’t take things for granted. My favourite point of the talk, though, was “talk to customers in a language they understand!”:
Or, to put it another way:
“Work in code, not talk in code!”
Great 5 min talk from @senoir about managing clients expectations, something I feel is important to a successful project #MKGN
As I mentioned in my introduction, Ben Foxall (@BenjaminBenBen)’s five minute talk on “the other side” of responsive design was nothing short of stunning. If I ever manage to deliver a presentation that’s half as innovative as this, I’ll be a happy man. Unfortunately, I’m not sure I can do it justice in words but, as we know from Sarah Parmenter (@Sazzy)’s talk at MK Geek Night 5, responsive websites provide the same content, constructed in different ways to serve to multiple devices appropriately.
Ben got us all to go to a site, which reacted according to our devices.
He then showed how the site responded differently on a phone or a PC – choose a file from a PC, or take a photo on a phone.
He tweeted that photo.
He showed us the device capabilities (i.e. the available APIs).
He updated his “slides” (in HTML5, of course), interactively.
And projected those slides in our browsers (via the link we all blindly clicked).
In summary, Ben wrapped up by saying that “responsiveness and the web needs to use the capabilities of all the devices and push the boundaries to do interesting things”. If only more “responsive” designers pushed those boundaries…
Following Ben’s talk was always going to be a tough gig. I’m not sure that I really grokked Tom Underhill (@imeatingworms)’s “Work in Progress” although the gist seemed to be that technology gallops on and that we’re in a state of constant evolution with new tools, programs, apps, books, articles, courses, posts, people to follow (or not follow), etc., etc.
Whilst the fundamentals of human behaviour haven’t changed, what’s going on around us has – now we need more than just food and warmth – we “need” desktops, laptops, smartphones, pink smartphones, smart watches. Who knows what’s in the future in a world of continued change…
Constant change is guaranteed – in technology, social context and more. Tech is a great enabler, it could be seen as essential – but should never replace the message. Brands, experiences and products change lives based on the fundamentals of need.
Not sure I understood much of @imeatingworms's "hierarchy of needs" but I'm sure others who are more intelligent than I did! #MKGN
The one minute talks were the usual mixed bag of shout-outs for jobs at various local agencies (anyone want to employ an ex-infrastructure architect who manages a team and really would like to do something exciting again… maybe something “webby”?), Code Club, the first meeting of Leamington Geeks, and upcoming conferences.
Fear, uncertainty and doubt
The final keynote was from Paul Robert Lloyd (@paulrobertlloyd), speaking on FUD – fear, uncertainty and doubt. Paul makes the point that these are all real human emotions – and asks what the consequences of abusing them are. He suggests that the web has been hijacked by commercial interests – not only monitoring behaviour but manipulating it too.
Some of the highlights from Paul’s talk make quite a reading list (one that I have in Pocket and will hopefully get around to one day):
As the web is largely unregulated, it’s attractive to those who want to increase their personal wealth; so we have to be optimistic that there are enough people working in the tech sector with a moral compass. Arguably, the Snowden leaks show that some people have integrity and courage. But Paul is uncertain that Silicon Valley is healthy – “normal” people don’t see customers as data points against which to test designs – for example, a team at Google couldn’t decide on a shade of blue so they tested 41 shades (and border widths). Paul also made the point that the team was working under Marissa Mayer – for a more recent example, witness the Yahoo! logo changes…
Then there are the “evil” social networks where, as Charles Stross highlights, “Klout operates under American privacy law, or rather, the lack of it”.
Paul says that The Valley operates in a bubble – and that Americans (or at least startups) skew to the workaholic side of things, viewing weekends off as a privilege not a right. He also suggests that the problem is partly a lack of diversity – The Valley is basically a bunch of Stanford guys making things to fix their own problems. Very few start from a social problem and work backwards – so very few are enhancing society; they’re making widgets or enhancing what already exists. Funding can be an issue but governments are seeing the tech sector as an area of rapid growth and it’s probably good not to be aligned to a sector where you can launch start-ups without a business case!
Lanyrd shows that it is possible to start up outside The Valley (although they have been bought by Eventbrite so have to move) [TweetDeck is another example, although bought by Twitter] but Silicon Valley arrived by a series of happy accidents and good luck/fortune – it’s important that the new tech hubs shouldn’t be a facsimile of this.
Then there’s protecting our data from governments. Although conducted before the Snowden leaks, the Electronic Frontier Foundation’s annual survey asks “who has your back?” – and, although it’s still young, it seems companies are starting to take notice.
Choose your services wisely – we (the geeks) are early adopters – and we can stop using social networks too. It’s easier to change services if data can be exported – but all too often that’s not the case so you need to own your own content.