How To Use AI Content

AI writing tools and content generation solutions are objectively impressive. The $25 article you used to buy from UpWork can now be generated in 5 minutes flat for less than 50 cents. 

It's not "blow you away" quality off the bat, but it's significantly better than the work I'm used to getting back from most freelancers (again, often at 100x the price).

Ignoring it outright is stupid, and those who do will regret it down the line.

I've seen 4 basic approaches to AI content:

1. Bulk generate thousands of articles, push them straight to your CMS unedited, publish all at once and see what happens. Probably pout because you're not rich after 3 weeks.

2. Take approach 1, but apply it more analytically. Use AI to test out niches and low-competition keywords. Take the winners and improve them, often reuploading them to a clean site.

3. Use AI as your writer, and take on the role of editor. Fact check all output, inject your own voice into the content, and effectively leverage "AI-assisted" content.

4. AI is scary, I hate it, and I will never use it to do anything at all.

There's money to be made with the first 3 approaches if you know what you're doing.

Strictly from a longevity point of view, I think 3 is the safest option until we see what this landscape looks like a few years down the line. 3 is the approach I tend to go with (other than using 1 and 2 on various test sites just because it's fun), guided by a few rules I've refined over the past half year.

So, here are my five personal rules for "better than the rest" AI content:

1. Take time with your prompts

It's not hard to do, but if the output you're getting isn't what you want, work with the LLM to fix it. Of course baseline ChatGPT text is going to be "bland." That's sort of the point.

If you want the output to read a certain way, be formatted a certain way, or follow a specific structure, you can and should explicitly tell the LLM to adhere to those parameters.

You should do this iteratively: use the same base prompt (like "write me an article about dogs") and continue adding and removing style, tone, and format instructions until you have a prompt that works for your goals. 
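If you're doing this through the API rather than the chat window, the iteration is easy to script. Here's a rough sketch assuming the official openai Python package; the model name, base prompt, and style variations are placeholders for your own:

```python
# A sketch of iterating on a base prompt through the OpenAI chat API.
# Assumes the official "openai" Python package and an OPENAI_API_KEY env var.
# The model, base prompt, and style variations below are placeholders.
from openai import OpenAI

client = OpenAI()

BASE_PROMPT = "Write me an article about dogs."

STYLE_VARIATIONS = [
    "",  # control run: no extra instructions
    "Use short, punchy sentences and address the reader as 'you'.",
    "Write at an 8th-grade reading level. No filler phrases, no fluff.",
    "Open every section with a concrete example instead of a definition.",
]

for style in STYLE_VARIATIONS:
    prompt = BASE_PROMPT if not style else f"{BASE_PROMPT}\n\nStyle instructions: {style}"
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: use whichever model you prefer
        messages=[{"role": "user", "content": prompt}],
    )
    # Print the first chunk of each run so the variations are easy to compare.
    print("=== STYLE:", style or "(none)")
    print(response.choices[0].message.content[:400], "\n")
```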

One of the best pieces of advice I've seen is to use ChatGPT to help you craft better prompts. You can ask it point-blank to provide a prompt that meets your needs, or you can feed it your own prompt and ask it to improve what you have.

Neither approach will give you a perfect prompt in one shot, but again, "iterative" is the key word here. It's worth spending time experimenting and building up a personal library of prompts, because once the work is done, you have them forever.
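The "ask the AI to improve your prompt" trick works the same way over the API. Another rough sketch, again with a placeholder model and prompts:

```python
# Feeding a rough prompt back to the model and asking for a better one.
# Assumes the official "openai" Python package; model and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

rough_prompt = "write me an article about dogs"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder
    messages=[{
        "role": "user",
        "content": (
            "I use the prompt below to generate blog articles. Rewrite it so the "
            "output is better structured, more specific, and less generic. "
            "Return only the improved prompt.\n\n" + rough_prompt
        ),
    }],
)

improved_prompt = response.choices[0].message.content
print(improved_prompt)  # keep the good ones in your personal prompt library
```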

2. Utilize writer personas

GPT works best when you tell it to emulate someone or something versus feeding it ten lines of requirements.

Find a well-known author who suits your needs and tell ChatGPT that it is that author. You'll notice a change in output quality immediately.

If you don't feel like scanning the works of famous authors to find one that fits, you can always fall back on classic persona prompting like "You are an award-winning science fiction writer," coupled with the advice from my next rule (there's a combined sketch at the end of that rule).

3. Describe the essence of what you want

Before you can get an LLM to do what you want, you yourself have to understand what you want.

Plenty of people use style prompts like "write creatively." You have to ask yourself, what exactly does "creatively" mean?

I fully admit that's exactly how I prompted ChatGPT when I first started using it. Since then, I've gotten much better at describing the essence of what I want instead of just asking for "creativity."

For example, I've gotten a ton of mileage out of the following prompt when looking for more creative output:

Must-follow voice and style guide: Always use a convincing tone. Write in a way that is both educational and fun. Sentences should be punchy and human. Make use of rhetorical questions and other literary devices to keep readers engaged.

I would argue that all of those things are cornerstones of what makes something "creative."

That may not be the best example in the world, but do you see what I mean? You must understand exactly what it is you're asking for before you'll be able to get what you want.
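To tie rules 2 and 3 together: the persona and the style guide can live in the same system prompt. Here's a quick sketch reusing the voice and style guide quoted above, with a placeholder persona, model, and topic:

```python
# Rules 2 and 3 combined: a writer persona plus an explicit voice/style guide
# in one system prompt. The guide is the one quoted above; everything else
# (persona, model, topic) is a placeholder.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are an award-winning science fiction writer.\n\n"
    "Must-follow voice and style guide: Always use a convincing tone. "
    "Write in a way that is both educational and fun. Sentences should be "
    "punchy and human. Make use of rhetorical questions and other literary "
    "devices to keep readers engaged."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Write me an article about dogs."},
    ],
)
print(response.choices[0].message.content)
```

Swap personas and guide lines in and out (rule 1 again) until the output reads the way you want.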

4. Always fact check

This one gets repeated ad nauseam, but it's entirely true. LLMs will happily fabricate stats, references, people, places, things, and more if it "makes sense" for them to do so.

If you're at all concerned with quality, or you're leveraging AI content on any site that you're not ready to say goodbye to, you must fact check.

5. Heavily edit the Intro and Conclusion

Intros and conclusions are still a weak point for AI content, especially if you want them to follow a "search engine approved" structure. 

Introductions

What I personally want out of an introduction is one or two brief paragraphs that jump right into the content, ask the "question" that led the user to your article in the first place, and then immediately summarize the answer. I don't want boilerplate, I don't want background information, I don't want "In the next section, we'll..." and I certainly don't want "Are you ready to dive in? Let's go!". 

Unfortunately, LLMs seem dead set on including all of those things in their introductions.

The real solution here is to generate your introduction after the rest of the article has been written. That way, you can refer the LLM to the article it just wrote as context, and ask it to do things like "extract key points," "summarize the main takeaways," and "answer the god damn question." Trying to get that sort of output before the article is done doesn't work too well because LLMs aren't great at "thinking" ahead.

I take this approach often when I'm writing articles for myself, and a few of the AI tools out there do generate intros this way. Most of them don't (including this one) because it's slightly more difficult to do, and because repassing your entire article (or embeddings of it, depending on how long it is and which model you're working with) through an API is much more "expensive" than just slapping together a BS intro and calling it a day.
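If you're curious what that two-pass flow looks like in code, here's a bare-bones sketch against the OpenAI chat API. The model name and prompts are placeholders, and it assumes the finished article fits inside the model's context window:

```python
# Two-pass generation: write the article body first, then hand that body back
# to the model and ask for an intro that actually answers the question.
# Assumes the official "openai" Python package; model and prompts are placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder

# Pass 1: the body, with no introduction or conclusion.
body = client.chat.completions.create(
    model=MODEL,
    messages=[{
        "role": "user",
        "content": "Write the body of an article answering 'How often should you "
                   "walk your dog?' Do not write an introduction or a conclusion.",
    }],
).choices[0].message.content

# Pass 2: the intro, grounded in the body that was just written.
intro = client.chat.completions.create(
    model=MODEL,
    messages=[{
        "role": "user",
        "content": (
            "Here is the body of an article:\n\n" + body + "\n\n"
            "Write a two-paragraph introduction that poses the question the reader "
            "searched for, immediately summarizes the answer from the article above, "
            "and skips the boilerplate and the 'let's dive in' filler."
        ),
    }],
).choices[0].message.content

print(intro)
```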

The "intro last" approach is significantly easier when working inside Chat-GPT directly - once you have all of the sections generated you can ask it to spit out an appropriate introduction since large chunks of the context are inherently preserved in a chatbot environment. 

This will become less of an issue as models with larger token limits become commonplace, but in the meantime, make sure you're editing your intro if you're not going with the above approach.

Conclusions

You don't run into the same "lacking context" issues with conclusion sections, simply because they logically get generated after the rest of the article.

Still, I've been thoroughly unimpressed by first-draft conclusions from AI content no matter how fancy I get with the prompting. I find almost all of them fluffy, repetitive, and boilerplate. 

Again, in a good conclusion I'm personally looking for a summary of key points and, if possible, a succinct "answer to the question" of the article. I haven't been able to reliably generate that sort of output without heavy human editing or iteratively re-prompting the AI to fine-tune the output.

As a last note, make sure you're giving your intros and conclusions more interesting headings than "introduction" and "conclusion." GPT loves using those boilerplate designations and it's the first thing I change in most of my AI articles.

This tool explicitly tells the API to name the first section "introduction" and the last section "conclusion" because it's easier for you to just change those headings yourself.

Conclusion (see how boring that heading is)

I originally thought this would be an actual "how to use this tool" page, but since the tool itself is self-explanatory, it turned into a personal account of how I like to leverage AI content.

Maybe you've gotten something useful out of this?