AI writers – should you use them?

Artificial Intelligence (AI) is appearing in almost every tool I use as a marketer these days, and it’s got me thinking about the future of AI writers.

You may have tried using ChatGPT or Bing Chat for yourself, and if you’ve used them a lot, you’ll know they have their strengths and weaknesses.

What can AI writers be used for?

Personally, I think AI writers can provide you with a good starting point or give you new ideas for content, but they absolutely should not be relied upon for copywriting or fact checking.

If you need to summarise a long document or reams of text in just a few words, an AI writer can help you do that effectively. If you want to develop a blog outline to give you some ideas of topics to explore within a subject, go for it – it will save you some time! I’ve even had success using AI writers to come up with content for social media, provided I give detailed prompts.

What are the limitations of AI writers?

Ultimately, an AI writer is a tool that predicts the most likely next words to create an output, and that output is only as good as the source material. This source material could be information you have provided, or it could be drawn from any number of sources across the internet.

In my experience with ChatGPT, the output defaults to American English, and at times it can feel like you are talking to an over-excited millennial YouTube influencer – a tone that might not be the most appropriate for your subject matter.

I’m unsure whether AI writers can tell the difference between a well-researched academic paper from a credible author and someone’s blog where they write anything that comes to mind.

What will happen when so many articles and sources of information have been written solely by AI that AI writers only have potentially inaccurate, poorly researched sources to draw from? Will their output simply degrade over time?

I asked Bing Chat, and this is what it said…

“The question you raise is an interesting one. While it is true that AI-generated content is becoming more prevalent, it is important to note that the vast majority of information available online is still created by humans.”

It then seemed to utter some words of caution… “In fact, researchers at Epoch AI predict that programs such as ChatGPT will run out of high-quality reading material by 2027.”

It will be VERY interesting to see how the output of AI writers changes in four years’ time!

AI writer cautions people to check sources

Bing Chat then went on to say that people should remain vigilant and critical when consuming information online, and that they should always check the sources of information. When prompted further, it said that people should look for grammatical errors, awkward phrasing, and repetitive words or phrases to identify articles written with AI writers.

Hilariously, it then added: “Another way to check if an article has been generated using GPT-3 is to look at the author’s bio. If the author is an AI language model such as GPT-3, it will likely be mentioned in their bio.” That settles it then.

Generally, a good rule of thumb for checking information online (and offline) is to cross-reference what you read or hear against various sources of information.

Bing Chat warns of potential biases of AI-generated content

To its credit, another thing Bing Chat tried to warn of was the “potential biases and limitations of AI-generated content”.

I’ll give an example of this: I once asked ChatGPT to make a paragraph ‘sassy’ in tone, just to see what it would do. It then proceeded to insert the word ‘slay’ into the copy in a way which didn’t mimic a normal speech pattern and would no doubt receive a few eye rolls from the LGBTIQA+ community.

To give another example of AI bias, I asked a text-to-image generator to give me an image of a businesswoman – it immediately created several images of busty, skinny, white, overly sexualised depictions of women, all wearing either low-cut tops or skintight attire. There was an option to add in ‘negative prompts’ to exclude such things, but why on earth should I have to? I guess it was my fault for asking something which draws from images on the internet to show me a woman.

My fear is that the more AI-written articles are published by people who don’t check their facts, the more inaccurate sources of information will be created. And if people don’t rewrite the content, we’ll slowly lose the humanity that makes writing and storytelling so powerful in the first place.

Kirsty Nelms
