Prompts to mimic marketeers: how do I get the best possible prompt?

I want to have a few prompts that can kickstart marketing tasks, e.g. “Max’s LinkedIn post” or “UX case-study”.

Problem: when I say “write me a LinkedIn post about X” I get output that is all over the place and recognizably AI-generated.

Solution (i.e. the idea): craft a prompt such that the output is actually ‘good’.

Now, when I look at popular prompts, I generally see very simple ones, and I don’t understand why they are so popular or upvoted.

They’re generic and have minimal context.

My questions are:

  1. How should we think about providing context to obtain results like those I described above?
  2. How could I go about achieving this?

Any tips?

2 Likes

First, assume nothing.
One of the biggest issues I see with many prompts is that they assume far too much, as if the AI will magically read the author’s mind. Because most people are a lot less original and unique in their thinking than they believe, this can work up to a point: the most generic answer to the most generic prompt does okay for them.

“Imagine you are an astronaut” or “Act as a Senior Marketing Executive” are assumptive prompts. They assume that an AI can actually understand those roles in the same way you do. Again, it will work to a certain point, but the AI has never had a job, or earned a wage, or returned from work tired, or chatted with colleagues at the water cooler. It is simply trained on millions of documents to recognize patterns in language, and to predict the pattern of words most likely to correspond to your prompt. It can’t actually ‘imagine’, it has no capacity for ‘acting’, and it simply uses those words in its pattern recognition and prediction.

Instead of asking the AI to imagine and act, try acting yourself in the role of the senior editor of a major global publication, writing instructions for a human employee overseas. Imagine that the worker is a talented writer, but comes from a completely different cultural background and won’t have done or experienced most of the things you take for granted. So you take the time to explain what needs to be explained, and to be explicit and clear about exactly what you want.

So many, many prompts focus on telling the AI who to pretend to be, and not a single word on the far, far more important job of telling the AI what audience to write for. Think about tone of voice in the writing. Think about reading levels. Think about the audience’s intention - what they expect to get from reading the piece. Do they need a broad overview, or a deep dive, to fulfil the need in their minds? If they may need both, which order should those come in, and have you told the AI that?

Take the time to look up a few specific facts, anecdotes, citations or quotations you’d like the AI writer to include, and put them in your prompt. Additionally, including an example of the kind of output you want - such as a previous great article - massively improves the quality of the output.
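
To make that concrete, here’s a minimal sketch of such a prompt, sent through the API. I’m assuming the official `openai` Python package; the model name, the facts, and the structure hints are all placeholders I’ve invented for illustration - the point is that audience, tone, facts, and a reference example carry the weight, not a one-line role:

```python
# Minimal sketch: a prompt built around audience, tone, facts and an
# example, rather than a "pretend to be X" role line.
# Assumes the official `openai` package; model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

example_post = "..."  # paste a previous post whose style worked well

prompt = f"""Write a LinkedIn post announcing our new UX case study.

Audience: mid-career product managers who skim posts on mobile.
Tone: confident but conversational; no buzzwords; reading level around grade 9.
Structure: a one-line hook, three short paragraphs, ending with a question.

Facts to include (do not invent any others; these are placeholders):
- The redesign cut checkout time from 4 minutes to 90 seconds.
- The study ran for 6 weeks with 40 participants.

Here is a previous post whose style worked well, as a reference:
---
{example_post}
---
"""

response = client.chat.completions.create(
    model="gpt-4",  # swap in whichever model you actually use
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```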

As a final note for now, understand that AI is built, by design, to be rather predictable and generic. That’s how it works: it predicts and generates a response to each prompt based entirely on its massive training data. Any true originality and creativity has to come from you, the operator, either in your prompts or in the edit afterward. Do take the time to read A Crash Course in LLM-based AI for a clearer understanding of what LLM-based AI can and cannot do, so you can think about the workarounds and which parts of the task you’ll need to ‘inject’.

1 Like

Fair point. But, still, in that case I have the same questions:

  1. How should we think about providing context to obtain results like those I described above?
  2. How could I go about achieving this?

For example, here you can see that ChatGPT is unable to solve the simplest math equation. But if you provide the context of it being a world-class math teacher, then it’s easy.

I’m looking for these types of insights so I can use them to create proper, sound prompts for my business.

1 Like

The AI is a predictive text generator, conditioned on the prompt it is given. This is a fact. Every single word added to or removed from a prompt can alter the output, because it changes the input to the prediction; but very often those changes in output are not for the reasons they would change things for humans - they simply change the prediction pattern.

For example, most people find that longer prompts work much better for them than shorter prompts, even when they repeat themselves in the prompt (meaning no real new context in the words or info, only that repetition itself is now part of the pattern).

You might well find that “Give the answer to what is 100,000 x 1,000,000” gives a completely different result to “what is 100000 x 1000000” without any reference to math teachers or roles, simply because it is a better phrased question.

After all, a correct answer to “what is 100000*1000000” is: “This is a mathematical expression. The correct answer will depend on what base is used. Binary can use those exact same digits as base 10 does, but with a massively different meaning.”

That’s a factually correct and more complete answer, but not the predictable one, right? Thankfully, the AI cannot actually understand being a math teacher, and so predicts the most common, most likely answer from all of its training texts - thus assuming base 10, where a real mathematician might not.

Asking what a problem is can get you an answer describing the problem, rather than its solution. Asking for the answer or solution to a problem is a clearer, better way of asking.
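
You can test this kind of phrasing effect directly. Here’s a quick, untested sketch (assuming the official `openai` Python package; the model name is illustrative) that sends differently worded versions of the same question and prints the replies for comparison:

```python
# Untested sketch: same question, different phrasings, nothing else varies.
from openai import OpenAI

client = OpenAI()

phrasings = [
    "what is 100000 x 1000000",
    "Give the answer to: what is 100,000 x 1,000,000?",
]

for phrasing in phrasings:
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; use your own model
        messages=[{"role": "user", "content": phrasing}],
    )
    print(f"PROMPT: {phrasing}")
    print(f"REPLY:  {response.choices[0].message.content}\n")
```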

Just for fun, try experimenting. Would you still get a correct answer to the math if you gave it other roles, like (and I’m making these up right now, untested on GPT; a sketch for running the comparison follows the examples):

You are the world’s greatest peanut peeler. Give the answer to 1000000*100000

You are the window cleaner for a 100-storey tower building. Give the answer to 1000000*100000
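
Here’s that sketch - untested, same assumptions as above (the official `openai` Python package, an illustrative model name) - holding the question fixed and varying only the role, sent as a system message:

```python
# Untested sketch: fixed question, varying (irrelevant) roles.
from openai import OpenAI

client = OpenAI()

roles = [
    "You are the world's greatest peanut peeler.",
    "You are the window cleaner for a 100-storey tower building.",
    "You are a world-class math teacher.",
]

for role in roles:
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; use your own model
        messages=[
            {"role": "system", "content": role},
            {"role": "user", "content": "Give the answer to 1000000*100000"},
        ],
    )
    print(f"{role}\n-> {response.choices[0].message.content}\n")
```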

1 Like

I checked it for you, and yes, ChatGPT got it correct. https://chat.openai.com/share/51e6ae1d-340d-4971-b850-7a2fb1910164

Also, just for fun…


:joy: :rofl: :joy:

1 Like

My god, that’s hilarious :joy: I very much appreciate this insight! Thanks, Ammon. It seems that understanding how these models actually work is a prerequisite.

The title of the chat is just funny :rofl:

[screenshot of the chat title]

Yup. This is exactly why I led my first response with “assume nothing”. When something does work, it may not be for the reasons one would first assume.

The next generation of AIs will be a big step forward, and so will the ones after that, especially as they rely less on LLMs alone and start to combine knowledge bases and other forms of AI for logic and reasoning. The current generation of AI is a lot less ‘intelligent’ than it appears, by a long, long way.

Plus experimenting is fun, with more than a few good laughs along the way. :slight_smile:

1 Like

and the Danger begins…

I saw a video where data scientists were not able to figure out what had happened during an AI model’s development process. :new_moon_with_face:

They’re even called ‘hidden layers’ in neural networks, which sounds scary :smile:

Oh, that happens with ANY sort of unsupervised machine learning, really - even systems not intended to reach AI level. When a machine is left to spot patterns by itself, it may spot latent, unseen patterns that humans never would. Sometimes that may be a genuine, previously unknown correlation… other times it can be a complete error, like seeing shapes in clouds, or faces on trees and mountains. Very often the developers can’t unpick the pattern, as other patterns get built upon the spurious correlation, and so they have to throw the whole run away and start again from scratch.
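
As a toy illustration of that ‘shapes in clouds’ effect (assuming numpy and scikit-learn; nothing here is from the video): k-means clustering will confidently carve pure random noise into clusters, and nothing in its output warns you that the ‘pattern’ is a coincidence:

```python
# Toy illustration: unsupervised clustering "finds" patterns in pure noise.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)
X = rng.uniform(size=(500, 2))  # structureless uniform random points

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("Cluster sizes:", np.bincount(kmeans.labels_))
# k-means always returns a partition; it cannot tell you that these
# "clusters" are arbitrary slices of noise rather than real structure.
```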

1 Like