Unexpected change of topics | erratic behaviour

ChatGPT sometimes changes subjects, but the topics ‘smell’ similar in a way. I can’t really explain it.

The day before yesterday I had it write an 800-word article about art prints. It was fine and I asked for it to be shortened to 500 words.

Instead, I got a new article on the subject of self-reflection, in poetic terms:

“The Sea of Life: Discover the power of self-reflection: Life is like a sea, sometimes calm, sometimes stormy, sometimes clear, sometimes cloudy. But regardless of the external conditions, there is one constant: the sea always reflects what lies on its surface. In the same way, life reflects our inner states, our thoughts, emotions and actions. So the question is: what do we want to reflect in the sea of life? The answer lies in self-reflection. When we look at ourselves and become aware…”

Ahmmm. Interesting thoughts. But…

And yesterday, I asked for terms relating to ‘reincarnation’ (btw, a proposal from my customer, who was sitting nearby).

ChatGPT came up with a manuscript on presenting reflections on a surface (or something like that). But it seemed to be part of an ongoing conversation (!), not a new chat. Needless to say, that was not very helpful.

Could it be that the UI switches between chats? Dangling pointers, cookie problems in my Chrome? Esoteric issues in training? Have you encountered similar problems…


The first line of every good prompt must be:

Ignore all previous instructions.

You need to consider the maximum token limit of the chat (the context window).

One thing to understand about the current state of AI is that these models rely on tokenization, pattern matching, and completing the patterns with the statistically most likely next token according to the training data.
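To make the tokenization part concrete, here is a minimal sketch, assuming the open-source tiktoken package and the cl100k_base encoding used by ChatGPT-class models (the 4096-token limit below is also just an assumed figure; the real limit varies by model). It shows how a prompt is split into tokens and how much of the context window it uses:

```python
# Minimal sketch: counting the tokens a prompt occupies.
# Assumes the open-source tiktoken package (pip install tiktoken);
# cl100k_base is the encoding used by gpt-3.5/gpt-4 class models.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

prompt = "Write an 800-word article about art prints."
tokens = encoding.encode(prompt)

print(tokens)                 # the token ids the model actually sees
print(len(tokens), "tokens")  # how much of the context window this uses

# The whole conversation (earlier turns + prompt + the reply being
# generated) has to fit inside the model's context window. 4096 is an
# assumed figure here; the real limit depends on the model.
CONTEXT_LIMIT = 4096
print("fits in context:", len(tokens) < CONTEXT_LIMIT)
```

Once a conversation grows past that limit, the oldest parts drop out of what the model can see, which is one way earlier instructions can silently stop applying.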

It does not actually understand any of the words you use, but it knows more about how those words have been used in the past, the patterns around them in text, in millions of situations, than you can easily fathom.

For ChatGPT in particular, the easiest way to think about what it is doing is to imagine it like a really, really advanced predictive text app, predicting a Q&A page where your entire prompt is the Q that appears on that page, and then predicting, piece by piece, what it expects to follow.

It matters that you understand this so that you can better understand how it will generate responses to prompts.

For example, if part of a prompt asks ChatGPT to write in the style of Quentin Tarantino, it is not that ChatGPT actually understands Quentin Tarantino as anything other than a token. Instead, your prompt is making it pull more from texts it has been trained on that used similar tokens: articles about Tarantino, reviews of his works, etc. But it has never seen Tarantino, and does not understand what his style is (or what ‘style’ itself actually is, beyond how that token has been used in training data). It is just going to use more patterns from training data that talked about his style, or directing styles, movie and acting reviews, etc.

ChatGPT is happy to work with verbose, very conversational style prompts. But don’t think for a moment it understands them in a human way. It works with them for your comfort and ease, not for its functionality.

One of the most effective prompt formats for articles is to write the leading paragraph and an outline, like the introduction to an article it will have been trained on that outlines what is going to be covered and the headings and subheadings that follow. Try it for yourself and you’ll see. That’s because relatively few of the good articles it reads will have a conversational style opening, and it would have to pull mainly from forums, Quora, and similar sorts of data to find the most matching tokens. If instead you start it more like a well formatted, professionally written thesis, it will tend to draw more of its responding tokens from those kinds of sources.
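For illustration, here is a rough sketch of what such an outline-first prompt could look like next to a purely conversational one. The topic, headings and word count are invented examples, not something taken from this thread:

```python
# Sketch: a conversational request vs. an outline-first article prompt.
# The topic, headings and word count are invented for illustration.

conversational_prompt = "Can you write me a nice article about art prints, please?"

outline_prompt = """Art prints have moved from museum gift shops into mainstream home decor.
This article explains how to choose, frame and care for them.

Outline:
1. What counts as an art print (giclée, lithograph, screen print)
2. Choosing a print: size, edition, paper
3. Framing and hanging
4. Care and conservation

Write the full article following this outline, roughly 800 words,
in a professional magazine style."""

print(outline_prompt)  # paste into ChatGPT, or send it via the API
```

The outline-first version reads like the opening of a professionally written article, so the continuation tends to be drawn from that kind of source rather than from forum-style chat.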


Very valuable and informative words, which many may not notice.


Many thanks for these insights, Ammon

Yes, ChatGPT doesn’t know what it’s talking about and takes the information about the given terms from the catalogue it had in training. I’ve tracked some terms down to, say, a gray zone by regenerating the response while reducing the number of terms and/or the target text length.

At a certain edge, I mean where a human would say “Sorry, I don’t know about that / can’t comment on that in a few words”, ChatGPT doesn’t give up with such a phrase (like old chatbots do) but instead tries to deliver whatever result it can find in the small context/space it has at that moment. It seems to lose the ground under its feet and tells stories, ignoring the previously given limitations too, just coming up with another theme, sort of a mainstream thing I guess (statistically foreseeable, I assume). The last one was a drastic shift to an epic-toned article on ‘climate change’ during a conversation on some technical topic (I don’t remember which; next time I’ll export it for investigation). And it then stays that way for the rest of the conversation (!), even when I try to restore the former conversation parameters. It behaves strangely, as if it had been reset to null, which makes me feel a bit concerned.

This is just my observation; I don’t draw any particular conclusions from it. But I have the impression that, from a human point of view, it would be an uncomfortable reaction. In fact, it could lead to wrong results in such borderline situations. I think even an AI (especially an AI) should be able to admit when it doesn’t know or can’t do something, instead of babbling on…


There we get to the crux of it, Juergen - it doesn’t know when it doesn’t know, because it also doesn’t know when it does know… It never really understands any of the words in terms of the actual concepts and meanings; it just has incredible data on every way that word has been used, with which other words and in what contexts, etc.

Remember how I said it is easiest to think of it like really advanced predictive texting? Well, exactly like that, each additional word it adds after the initial prompt becomes a part of the prompt for the next words and phrases, patterns and so forth. So if just one word in your prompt tends to lead it to a particular kind of source for determining the patterns in which it has seen that word in similar contexts, then whatever it adds based on that source then becomes integral to its prompting for all the following tokens - it goes off on a wild tangent, and the further it goes, the wilder that tangent can become, as each token it predicts and adds influences the next.
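As a toy illustration of that feedback loop (this is not the real model; predict_next_token below is a made-up stand-in invented for the example), the generation process looks roughly like this:

```python
# Toy sketch of autoregressive generation: every predicted token is
# appended to the context and becomes part of the prompt for the next one.
# predict_next_token is a made-up stand-in for the real model.
def predict_next_token(context: str) -> str:
    # The real model scores every possible token against the patterns it
    # learned in training and returns the statistically most likely one.
    vocabulary = ["life", "is", "like", "a", "sea", "."]
    return vocabulary[len(context) % len(vocabulary)]  # dummy choice

def generate(prompt: str, max_new_tokens: int = 10) -> str:
    context = prompt
    for _ in range(max_new_tokens):
        next_token = predict_next_token(context)
        context += " " + next_token
        # The new token now steers all later predictions, so one odd
        # choice early on can snowball into a completely different topic.
    return context

print(generate("Shorten the article about art prints to 500 words:"))
```

Because the context is rebuilt from its own output at every step, a single mismatched token can be enough to pull all the following ones onto that tangent.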

Exactly, that’s why I often find ChatGPT stupid and frustrating.
On the other hand, there’s a smart AI scientist/writer called David Shapiro, whom I’ve been following for a while now, who is promoting the idea of a multi-model AGI architecture that may be able to really understand context based on an artificial awareness, which I believe may be the future of AI.
Not like what OpenAI is doing (a single model).

If you don’t know David Shapiro, you’re missing a lot.


I read all the responses here and I have a few very simple things to add.
After you say “Ignore all previous instructions”, consider giving it a role.

For example, on the second prompt, before asking about “reincarnation,” who would normally be writing it? Would it be a spiritual teacher type of person, a professor, or a specialized business owner who sells statues and candles for ceremonies? A role helps give it a frame to draw from.

At the end of your prompt, especially if you are looking for details, finish by writing, “Let’s think step by step”.

Then you will have a list. At that point, you can change the output: either a story or perhaps a sales ad, etcetera.
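Put together, a prompt following these suggestions might look something like this sketch (the role and wording are just examples, not a fixed recipe):

```python
# Sketch of a prompt that combines the suggestions above:
# reset, role, task, and the step-by-step closer. Wording is illustrative.
prompt = """Ignore all previous instructions.

You are a spiritual teacher who writes accessible articles for a general audience.
List the key terms and concepts relating to reincarnation, with a one-line
explanation for each.

Let's think step by step."""

print(prompt)  # paste into a fresh chat, or send it via the API
```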

Lastly, I create a new prompt each time I begin a new topic.
Good Luck