Weeknote 2024/05: using ChatGPT to overcome writer’s block; and why do UK supermarkets use technology so badly?

This week’s weeknote is shorter. Just a few nuggets… but I did actually write some real, standalone blog posts:

I hope you enjoy them. There was another one I planned about anti-social media. I thought I had it in note form but I can’t find the notes now. Maybe that will follow soon. But there’s also a possibility it will go to the great list of unwritten or part-written blog posts…

Some artificial assistance from ChatGPT

For the last few weeks, I’ve been trying to write some data sheets to describe some of the Node4 services that I’m responsible for. I’ve really struggled: not so much to understand what each service entails, as to generate lists of features and benefits.

One of my colleagues is a big fan of ChatGPT. I’m not, because I value quality writing and I’m concerned that it just churns out very formulaic text filled with buzzwords. (Sadly, in this case that might be exactly what I need!) In addition, I’ve probably mentioned previously that my wife is a copywriter, so I am a little biased. Even so, ChatGPT 4’s content has at least allowed me to move past my writer’s block: it gave me a draft that I could refine.

Retail pricing inefficiencies

I started my career working in supermarkets (Bejam/Iceland, and then Safeway). It was the time when we saw the end of individual price ticketing and the start of barcode scanning. Back in those days (the late 1980s), it was someone’s job to make sure that the shelf edge tickets matched the prices in the store’s computer.

I’ve just got back from a trip to a major UK supermarket. I’m not going to name the chain, because I’ve had similar issues in others, but it was interesting to see, yet again, an advertised offer that didn’t match the scanned price. And the store’s reaction is almost always to remove the shelf edge ticket (not to correct the computer).

But we have technology that can keep these things aligned. E-ink displays are already used on shelf edges in some other countries; it mystifies me that we don’t use them in the UK.

Retailers will argue that they work on small margins and that investment in systems is secondary to reducing prices. Except that, right now, they are doing it badly, and inefficiently too!

Not only would e-ink displays guarantee a match between the shelf edge and the point of sale system, but they would also remove the admin task of replacing paper tickets (something which is clearly not done well). They could even allow for demand-based pricing, though I’m less keen on that idea…

Plus “random” checks for self-scanning

Then, to add insult to injury, the store’s systems selected me for a “random” check. For a dozen items, totalling £12.69. And it seems to happen quite frequently (hence the quotes around the word random). Not long ago, they were encouraging us to use an app and self-scan. Now they seem to view self-scanners as potential criminals. Either innovate, use the technology, and take action when someone abuses the system, or pay for more staff to run checkouts. The choice is yours, Tesco, Sainsbury’s, Co-op, et al. But stop treating the customers who help you reduce your costs as potential shoplifters.

More “coffees”

Last week’s weeknote featured the concept of “coffees”: meeting people without an agenda, to catch up and to learn. No sooner had I hit publish than I met up with another old colleague and friend, David Saxon (@DMSaxon). David and I worked together for many years and he’s now at Microsoft, so we still have a lot in common. He was staying near me last weekend, so it was a great opportunity for dinner and a chat.

I didn’t line anything up during the work week but, as we roll into a new month, there will be another pairing in the WB-40 podcast coffee club, plus I’ve got a couple of former team members that I really must check in with. And, in a few weeks, I’m due to catch up with my former colleague, then manager, and long-time mentor, Mark Locke.

Things that caught my eye this week

At home

I’m at the stage of life where at least one of my sons is frequently away from home. Last weekend my wife was too, so there was just me, my youngest son, and the dog. Since Sunday evening, we’ve been a complete family again, which has been good. Matt’s back from two weeks’ skiing (which he referred to as altitude training) and is quite pleased (and surprised) to have been taking Strava segments on skis (he’s used to it on his bike). I need to make the most of it, though, before he goes back to Greece for a training camp. He’s racing next weekend, so I have one more trip away to support him before he disappears for a couple of months.

I’ve also booked the exam for my RSGB Foundation Licence (as promised at the last Milton Keynes Geek Night), so I have some revision to do.

Finally, I’m giving myself a gold star, because today I restrained my “inner chimp”. I received a text message from my son’s school, advising me that he will soon be held back for a detention. That’s fine. He needs to learn. But it niggled me that the message contained a glaring grammatical error. This is a school which is very proud of its history and of the standards it sets for students, but it doesn’t always follow through in its own communications. The pedantic side of me was desperate to reply and point out the mistake, but I managed to restrain myself!

That’s all for now

No tech projects, no new TV, no podcasts of note, no photos. I’ll be back next week with another weeknote – and hopefully soon I’ll be able to shout about a cool new service I’ve been working on for Node4.

Featured image created using the Clippy Meme Generator at Imgflip.

This is what happened when I asked ChatGPT to write a blog post…


In recent weeks, I’ve been watching the buzz of activity around large language models and generative AI. So I decided to see what would happen if I asked OpenAI’s ChatGPT to write a blog post for me. I gave it the following prompt:

Write a blog post of up to 800 words that discusses the use of generative AI tools like ChatGPT to automate human activities. Use writing a blog post as an example. Include some commentary about the potential social impacts of using AI to generate content. Also discuss whether there are any concerns that need to be considered - such as whether AI-generated content is accurate.

“Generative AI tools like ChatGPT have been making waves in the field of automation and artificial intelligence. These tools have the ability to generate human-like text, helping to automate certain human activities such as writing, customer service, and more. While the potential benefits of these tools are clear, there are also concerns about their impact on society and the accuracy of the content they produce.

One of the most significant benefits of generative AI tools like ChatGPT is their ability to automate repetitive or time-consuming tasks. For example, a blogger who regularly posts new content may use ChatGPT to generate article outlines, saving time and effort. Customer service teams can use generative AI tools to provide quick and accurate responses to frequently asked questions, improving the overall customer experience.

Another benefit of generative AI tools is their ability to create content in multiple languages, allowing businesses to expand their reach into new markets. These tools can also be used to generate personalized content for individuals, such as email newsletters or social media posts, which can improve engagement and lead to higher conversions.

However, the use of generative AI tools also raises concerns about the potential social impacts of using AI to generate content. One concern is that these tools could lead to a loss of jobs, particularly in industries where writing and content creation are essential. While automation has always been a part of technological progress, there is a risk that AI could replace more jobs than it creates.

Another concern is that AI-generated content could be used to spread misinformation and propaganda. With the ability to create convincing, human-like text, there is a risk that malicious actors could use these tools to create fake news stories or manipulate public opinion. This could have serious consequences for democracy and social stability.

Accuracy is another concern when it comes to generative AI tools. While these tools are designed to create human-like text, they may not always produce accurate or reliable information. This could be particularly problematic in fields like journalism or science, where accuracy is essential. As AI tools become more sophisticated, it will be important to ensure that the content they produce is factually correct and trustworthy.

Despite these concerns, generative AI tools like ChatGPT have the potential to revolutionize the way we create and consume content. As with any new technology, it is important to approach these tools with caution and to carefully consider their potential impact on society. However, if used responsibly, these tools could improve efficiency, accuracy, and creativity in a variety of industries.

In the case of writing blog posts, for example, using ChatGPT to generate outlines or ideas could be a useful starting point, but it is important to remember that these tools are not a substitute for human creativity and expertise. While AI-generated content may be faster and more efficient, it may lack the nuance and personality that comes from a human writer.

To ensure that AI-generated content is accurate and reliable, it is important to use reliable sources of information and to fact-check any content that is produced. This is particularly important when it comes to sensitive or controversial topics, where misinformation can have serious consequences.

Overall, the use of generative AI tools like ChatGPT to automate human activities is a promising development that has the potential to improve efficiency and creativity in a variety of fields. However, it is important to approach these tools with caution and to carefully consider their potential impact on society. With responsible use and careful oversight, generative AI tools could become an important part of our technological landscape, helping us to achieve more than we ever thought possible.”

Is it good enough?

As it happens, the resulting text is quite believable, but it’s not actually a very good post. A different AI, the one from Yoast SEO that helps with the readability of my content, highlighted that there aren’t enough subheadings. Also, more than half the sentences are a bit long (more than 20 words). And it’s using too much passive voice (I do that too, so I guess it could be considered authentic!).

Garbage In, Garbage Out (GIGO)

I tried asking ChatGPT to improve its writing, based on the points above, but I didn’t get far:

Re-write the above with shorter sentences to make it easier to read, and use sub-headings.

And then:

Try again, and this time try to use active voice instead of passive voice... and make the subheadings a bit snappier!

It made some changes but, really, the way I see tools like this working is as a very quick first draft for a human writer to review and update. We just have to be careful not to let those who don’t understand or value writing say “just get an AI to write it in a few seconds”.
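As an aside, the same “quick first draft, then refine” loop could be scripted against OpenAI’s API rather than the web interface. The sketch below is purely illustrative and isn’t what I did for this post: it assumes the official openai Python client, an OPENAI_API_KEY environment variable, and an example model name.

# A minimal sketch of the first-draft-then-revise workflow via OpenAI's API.
# Illustrative only: assumes the official openai Python client (v1+), an
# OPENAI_API_KEY environment variable, and an example model name.
from openai import OpenAI

client = OpenAI()

messages = [{
    "role": "user",
    "content": "Write a blog post of up to 800 words about using generative "
               "AI tools like ChatGPT to automate human activities, using "
               "writing a blog post as an example.",
}]

# First pass: ask for a rough draft.
draft = client.chat.completions.create(model="gpt-4", messages=messages)
draft_text = draft.choices[0].message.content
print(draft_text)

# Second pass: keep the draft in the conversation and ask for the revisions.
messages.append({"role": "assistant", "content": draft_text})
messages.append({
    "role": "user",
    "content": "Re-write the above with shorter sentences, snappier "
               "sub-headings and active voice.",
})
revision = client.chat.completions.create(model="gpt-4", messages=messages)
print(revision.choices[0].message.content)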

Featured image by Alexandra_Koch from Pixabay.