Several prompts designed to produce human-sounding, reduced-plagiarism rewrites are not working

Issue with several prompts designed to rewrite text to sound more human and reduce plagiarism. The rewrite starts okay, but then gets totally off track and begins writing a fictional storyline. This has occurred with several different prompts and several different articles.

This is part of why these current ‘AIs’ are just AIs and not AGI - they simulate intelligence, but they are not generalized, and so not genuine ‘intelligence’ by any stretch of the imagination.

They are predictive texting on atomic steroids. They generate their responses from your prompt rather as if they were ‘imagining’ a Q&A document in which your prompt was the opening question, and then they predict and generate what they expect to follow, piece by piece.
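
If you want to picture that loop, here is a minimal sketch in Python, using the Hugging Face transformers library and GPT-2 purely as a stand-in model (both are my choices for illustration, not what ChatGPT itself runs):

```python
# A toy autoregressive loop: the model never 'answers', it only predicts
# the next token, over and over, until you stop asking.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Q: What does the word 'cool' mean?\nA:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(40):                       # generate 40 tokens, one at a time
        logits = model(input_ids).logits      # a score for every possible next token
        next_id = logits[0, -1].argmax()      # greedily pick the most likely one
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Greedy picking is a simplification - the real systems sample from the distribution - but the shape of the process is the same: one predicted piece at a time.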

The really impressive part of their ‘intelligence’ is just how well they can take your prompt, attempt to understand it through NLP (Natural Language Processing), and then generate text from there. They know more about the patterns of language, about how words co-occur and correlate, than the vast majority of humans do… But that doesn’t mean they actually understand the words. At least not in the way we do.
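
To make ‘patterns of co-occurrence’ concrete, here is a toy sketch (my own illustration, nothing like the scale or method of real training) that simply counts which word follows which in a tiny corpus:

```python
# Toy next-word statistics: count which word follows which.
from collections import Counter, defaultdict

corpus = "the breeze was cool and the water was cool and clear".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

# In this corpus 'cool' is followed by 'and' twice, so 'and' becomes the bet.
print(following["cool"].most_common(1))  # [('and', 2)]
```

Scale that idea up from raw counts to learned representations over billions of documents and you have the flavour of it: statistics about words, with no breeze ever felt.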

ChatGPT knows a thousand ways that the word ‘cool’ has been used, in hundreds of contexts, and knows that sometimes it occurs in the context of temperature, sometimes in the context of fashion, sometimes in the context of emotional distance, and much, much more. But it has never experienced the sensation of cool. It doesn’t know what a cool breeze feels like, or how being told you’re a cool friend feels, and so that part of its context, the experiential part, is completely missing.

This is one of the amusing things about how people prompt. They often ask ChatGPT to pretend to be someone or something, but it has no actual understanding of pretence. All that happens is that it looks at the parts of its training data that talked about pretending to be some role, or pretending to have some skill, and predicts what the response to that looks like, based on all the responses in the millions of documents it was trained on.

Similarly, using the words ‘please’ and ‘thank you’ may affect which sources it leans on more heavily, pulling more from documents that used those words, but it doesn’t actually care in the least whether you say please or not - the politeness only shifts which response it predicts is most likely. With conversational wording it is more likely to draw from conversations, Reddit threads and forums, perhaps, and far less likely to draw from technical writing, manuals, science papers, etc.

This matters with things like plagiarism. It understands the contexts in which the word appears, but doesn’t truly understand the concept itself. The entire system is designed and built to emulate the writing of others. Where possible, it tries to avoid any kind of plagiarism by default, simply because that protects its owners from copyright infringement claims. But it can’t check its output against every document ever written to ensure it isn’t too similar to any of them.

Instead of giving it other people’s work and asking it to rewrite it without plagiarism, just ask it to generate an article from scratch. It takes no longer and uses no more processing power. Just include in your prompt the things you want it to cover in its output.
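
For example, a from-scratch prompt might look something like this (the topic and requirements are placeholders - substitute your own):

```
Write an 800-word article on container gardening for beginners.
Tone: friendly and practical.
Cover: choosing containers, soil and drainage, the five easiest
vegetables to start with, and common watering mistakes.
```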
