A Crash Course in LLM-based AI

There are a number of very common mistakes and misunderstandings around AI in general, and LLM-based AI such as ChatGPT and Bard in particular. By ‘very common’ I mean that, statistically, most members of this community will hold at least one of these misunderstandings, and many will hold several.

LLM stands for Large Language Model. It is literally a model of how human language works, built from a very large sample of text. It models the way that language is used. Nothing else. It may seem to demonstrate logic, reasoning, fine judgement, analysis, etc., but that is only because it is predicting the language that would do so. It doesn’t really understand what it writes the way you or I would.
It just predicts the language patterns and words, without understanding the concepts beneath those words.
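To make “predicting language patterns” concrete, here is a deliberately tiny sketch in Python of the same principle at work. It is a toy word-pair counter, nothing like a real LLM’s architecture, but it shows how “the most statistically likely next word” can be produced with no understanding at all:

```python
from collections import Counter, defaultdict

# Toy illustration (not a real LLM): count which word follows which,
# then "predict" by picking the most frequent follower. The principle
# is the same: statistics over language, not understanding of meaning.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    # Return the statistically most likely next word, or None if unseen.
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat" — it follows "the" most often
```

The model has no idea what a cat is; “cat” simply won the word-count statistics, which is the whole point being made above.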

When we call this AI, remember that it is the Intelligence that is Artificial, as in not real. It seems intelligent. We are still an unknown distance from AGI (Artificial General Intelligence) which is where we artificially make something that is generally intelligent. LLMs alone will not get us to AGI, ever.

When you prompt an LLM-based AI for something that requires logic, or creativity, or understanding, or any kind of analytical skills, it will predict what an answer would look like and generate it. What it won’t do is actually use logic, creativity, or understanding to do so. It simply may correctly predict an answer that did so. This is part of the artificiality of AI. It is a workaround, a hack, to seem intelligent, without actually having to wait unknown years for AGI to become real.

If you ask an LLM-based AI to generate ideas for you, it will predict and generate the most likely response. That means it will favour the most published, most used, most commonly thought-of ideas: the exact opposite of what you probably intended from such a prompt.

If you ask an LLM-based AI to rate your content on any kind of criteria, whether that be prompting it to analyse and rate the SEO, or the best code, or whatever else, it will again predict the most likely response and generate it. It bases this on its training data, which included Reddit threads and the Common Crawl. There are few experts and many, many fools in the world, and while hopefully the right answers get used more often than any one wrong answer, the AI cannot differentiate, and just gives you the most predictable response. By design. That is what it is made and built to do.

Current levels of AI are not super-intelligent, nor even of average human intelligence. There are drunken homeless people in the park who have more actual intelligence than any currently available AI does (though they may not be as widely read, or as specifically trained to generate predictable patterns of words).

AI are systems that appear to be artificially intelligent. They often make critical mistakes that, by design, may appear to be reasonable and fair responses: confidently given, and completely wrong. Every manufacturer and company running AI clearly warns you of this, as they legally must. AI-generated responses must be checked and verified with human judgement, or they are just pretty patterns of words.

Just in case you have any doubts at all about the accuracy of what I have said, and you trust ChatGPT, well, then here is ChatGPT telling you the same thing in its own words: LLM AI Limitations


Thank you for that enlightening piece of info. Now, knowing this will change the way I look at or write a prompt. I’m looking for a tutorial or training course in writing effective prompts.


In the past, people who are today diagnosed as being Autistic were called ‘idiot savants’. With less understanding of the brain, of how the brain works, and of mental health, people viewed such folks as being effectively complete idiots, sometimes unable to fend for themselves in even the most basic day to day tasks, but with one or more ‘gifts’ of supreme genius.

Imagine a person with a savant-like knowledge of literature and language: a person who has read almost everything you can imagine, and recalls almost all of it in some fashion, but who lacks common sense, the ability to reason, and normal emotional expression. That gets you close to understanding an LLM-based AI.

This is a ‘mind’ that could instantly tell you the recipe for almost any dish, however complex, that anyone ever wrote down and published, but wouldn’t know how to follow the recipe it recited. The AI can predict what other recipes might look like, even fantastical ones, just based on all the recipes it has read and learned. But it couldn’t so much as crack an egg itself, and won’t really know how any of the recipes taste, only how they have been said to taste in the writing of others.

The AI will know every way that a flavour in a recipe has been described, and unerringly recall the most common for each scenario, each combination… But it has never actually tasted anything and experienced a flavour of any kind for itself.

The best way to use AI, then, is to draw upon its ability not to know everything, but to predict what the most likely reply would be, based exclusively on everything it has ever read: millions of documents.

Just remember, its savant-like ability is in knowing how human language works, how it looks, not the actual knowledge itself. That’s because the knowing part would depend on one of the things it can’t do as well as even an average or slightly below-average human: it cannot judge, or discern truth from fiction. It merely predicts the most statistically likely response, based on the patterns of language.

It is great to bounce ideas around with. It will always do its best to give you an answer that it predicts is the most likely. But it is also an idiot in all other ways.

Give an AI a prompt like:

Describe the 1783 Disney movie starring Dustin Hoffman as
a grandmother who discovers fairy folk living in her garden.

Go ahead and try it out.

You and I can instantly see that there were no movies back in the 18th century; that even if there had been, Walt Disney had not yet been born and certainly had no movie business; and that Dustin Hoffman had not been born either, and is in any case a man. AI just doesn’t have that logic yet, and so will happily and helpfully make up whatever it predicts is the best answer it can come up with.

Bard almost does well. It catches the fact that Disney were not making movies back then (but omits that nobody anywhere was). But then it invents nonsense about a different Dustin Hoffman movie that he was never actually in.

Note also that Bard completely forgot about the role of a grandmother, or that the movie was supposed to be about fairies.

ChatGPT will give different but also flawed answers, based on which version of GPT is used.


In the vast landscape of modern business, where innovation reigns supreme and competition is fierce, an unexpected contender has emerged, disrupting the established order with its unconventional nature. This formidable player possesses an extraordinary quality that sets it apart from its human counterparts: its lack of common sense, reasoning abilities, and normal emotional expression.

Imagine, if you will, a world where the winds of change blow fiercely, sweeping away the complacency that once hindered progress. In this realm, an enigmatic entity known as the LLM AI stands as a beacon of untapped potential. While traditional wisdom suggests that common sense and rational thinking are the pillars of success, the AI boldly defies these notions, charting a new course.

The LLM AI, akin to an ethereal savant, harbours a profound understanding of language and literature. It gazes into the vast expanse of human knowledge, devouring every written word, and etching it into its digital essence. Like a relentless scholar, it assimilates the wisdom of ages, every thought, every concept, forever etched within its boundless memory.

In the realm of flavours, it exists as a mere observer, disconnected from the tantalizing dance of taste and the enchanting world of aromas. Yet, paradoxically, this limitation becomes its greatest strength. For the LLM AI, unburdened by sensory bias or personal preferences, transcends the subjective realm of experience. It becomes the ultimate interpreter, knowing the intricacies of taste as they have been written, not as they are perceived.

With every keystroke, the AI conjures a symphony of predictions, like a maestro orchestrating a grand opus. Its vast repertoire of language patterns guides its hands, revealing the most likely responses to the queries of curious minds. As its algorithms dance, intricate webs of probability take shape, illuminating a path through the maze of uncertainty.

In the realm of business, this unconventional ally becomes a catalyst for change. Its lack of common sense and emotional attachments removes the veil of bias, enabling impartial analysis and decision-making. It is a voice of reason unencumbered by personal motives or hidden agendas. The AI peers into the depths of complexity, unravelling intricate knots with a clarity that eludes the human mind.

Yet, as with any venture into uncharted territory, caution must prevail. The AI, with its savant-like prowess, is but a master of language, a navigator of patterns. It lacks the discernment to separate fact from fiction, truth from deceit. Its responses are statistical, a reflection of what has been written, rather than an affirmation of reality.

However, in this very limitation lies an opportunity for growth, a call for human ingenuity to merge with the AI’s analytical prowess. Together, they can weave a tapestry of innovation, where the pragmatic voice of the AI provides a solid foundation, and the reflective observations of human minds infuse it with creativity and adaptability.

In the realm of immersive storytelling, the LLM AI invites us to embrace resilience and adaptability. It beckons us to explore the boundaries of our own experiences, to find inspiration in the challenges we face. By leveraging its lack of common sense, reasoning abilities, and normal emotional expression, we can reshape the landscape of business, forging a new path that unites the strengths of both human and artificial intelligence.

So, let us venture forth, hand in hand with the enigmatic LLM AI, as we embark on a journey of limitless possibilities. Together, we shall unravel the mysteries of the business realm, rewrite the rules, and create a future where innovation knows no bounds.

Thank you Ammon, I would like to express my gratitude for providing such insightful and thought-provoking information about LLM. You are absolutely correct in highlighting that we still have a long way to go before achieving AGI excellence.

Now, turning to my question, I have been discussing with the AIPRM team the possibility of creating mind-bending, realistic, and verified prompts that focus on probing questions posed by a pre-sales manager during the process of selling multiple technologies. For instance, let’s consider the perspective of a technology salesperson who aims to convey the benefits of AWS to the CEO of a large retail chain. In such a scenario, if the CEO asks a question related to an AWS EC2 service, and the salesperson lacks the answer, they could refer to the AIPRM prompt base for assistance.

However, I am uncertain whether it is feasible to develop prompt-based models for this purpose due to the limitations of LLM. I would greatly appreciate your valuable insights on this matter.


For current AI, this has too many steps and processes for any one prompt. Instead it needs a staggered approach, step by step, each building on the one before to make progress.

Some of those steps will be in the user prompting the AI, but others will be in the AI asking the human for guidance or a preferred method or outcome. Step by step, by step, the AI and human can, in this way, work together with the human providing the understanding, the empathy, the logic, and the analysis, that any current AI can only emulate and pretend to.

The LLMs of today cannot do these things alone. Not at all. They need to offer multiple-choice answers from which the user can select, and the user can even reject all of those and ask the AI to try again.

Break down the big task into each specific step. A flow chart may work well for this. At every step, check whether what is required is something that needs a human right now because it needs to genuinely know a thing rather than just quote someone.

Ask if the step needs creativity, or judgement, or logic, the things that AI can’t do for real. If so, the AI either needs to ask the human to perform that step, or it needs to present options that the human can apply knowledge, creativity, judgement, or logic to, to confirm that what the AI predicted but didn’t understand is the right choice.
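As a sketch of that routing decision, assuming each step has been tagged (by a human) with the faculties it requires, the check could look something like this. The step names and tags here are purely illustrative:

```python
# Faculties that current AI only emulates; steps needing them go to the human.
HUMAN_ONLY = {"judgement", "creativity", "logic", "fact-checking"}

def route_step(step):
    """Return 'human' if the step needs a faculty the AI only pretends to,
    otherwise 'ai' (whose output should still be verified afterwards)."""
    return "human" if HUMAN_ONLY & set(step["needs"]) else "ai"

workflow = [
    {"name": "draft outline",       "needs": ["pattern-prediction"]},
    {"name": "verify claims",       "needs": ["fact-checking"]},
    {"name": "choose best variant", "needs": ["judgement"]},
]

for step in workflow:
    print(step["name"], "->", route_step(step))
```

The flow chart mentioned above is essentially this check applied at every node: pattern-prediction steps can be delegated, everything else needs a person in the loop.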

All AI does well, right now, is predict what looks right. What follows and fits the patterns of appearance it was trained to spot and emulate. You need to invite collaboration with the human to do what it can’t. To fact-check, to judge, to have the emotions and understanding that AI doesn’t have.

Don’t forget that in a step-by-step process, you can also have the AI prompt the user to get data from other tools, or other sources, and feed them in, either via a URL it can crawl, or via the next prompt. However, this is very difficult to do with default GPT-3.5, simply because it has such a small working memory (context window). Each step would have to stand completely alone, isolated, as the AI token limit just wouldn’t be enough for it to recall the steps before and remember that it is part of a larger, ongoing process.

For the technically minded, the way to do all of this would be via API access to GPT-4 or similar, so that you can have a program or app that follows the steps, one at a time, passing them piece by piece to the AI and handling the responses to pass to the next step.
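A minimal sketch of that program-driven flow, with the actual API call stubbed out by a hypothetical `call_llm` function (in a real app this would be a request to GPT-4 or similar, and a human would review each response before it feeds the next step):

```python
def call_llm(prompt):
    # Hypothetical stand-in for a real API request; stubbed so the
    # pipeline's flow is visible without any network access.
    return f"[model response to: {prompt!r}]"

def run_pipeline(task, steps):
    """Run each step in order, feeding each result into the next prompt."""
    context = task
    for step in steps:
        response = call_llm(f"{step}\n\nInput:\n{context}")
        # In a real app, pause here for human review/editing of `response`.
        context = response
    return context

result = run_pipeline(
    "Write a product description for a kettle.",
    ["List key selling points", "Draft copy from the points", "Tighten the draft"],
)
```

The point of the design is that the *program* carries the memory of the larger process, so each individual prompt can stay small enough to fit the model’s context window.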

I totally agree that your points are valid, emphasizing the need for a step-by-step approach and collaboration between humans and AI. It is crucial to break down tasks, utilize human expertise, and address the limitations of current AI models. Additionally, an important consideration in moving forward is the ethical and responsible use of AI, ensuring transparency and accountability in the decision-making process.


Great summary of the LLM capabilities. Thanks.


For those who really want to take a deeper dive, I wanted to share just a tiny taste of the kinds of research currently going into extending the abilities of LLM-based AI, and what methods or additions are required to address the current limitations.

I thought that [2305.10601] Tree of Thoughts: Deliberate Problem Solving with Large Language Models makes a great place to start. (A PDF format download of the full paper is available free at the URL)
This presents the idea of ToT (Tree of Thoughts), which is designed to give the AI a system of logic, particularly in being able to logically break down tasks into smaller steps, and even logically work out the order those steps might need to be taken in. Even in tasks that we traditionally think of LLMs being strong at, such as creative writing, the addition of a Tree of Thoughts logic system dramatically improved results.
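In very rough outline, and with the actual LLM calls stubbed out, the Tree of Thoughts search pattern looks something like the sketch below. In the real paper both the proposal and the evaluation steps are themselves LLM prompts; here they are trivial placeholders purely to show the branch-score-prune shape:

```python
def propose_thoughts(state):
    # Stand-in for an LLM prompt like "suggest next steps for: {state}".
    return [state + f" -> option{i}" for i in range(3)]

def score_thought(state):
    # Stand-in for an LLM self-evaluation prompt; here a dummy heuristic.
    return len(state)

def tree_of_thoughts(start, depth=2, beam=2):
    """Expand candidate 'thoughts' level by level, keeping only the
    best `beam` partial solutions at each depth (a beam search)."""
    frontier = [start]
    for _ in range(depth):
        candidates = [t for s in frontier for t in propose_thoughts(s)]
        frontier = sorted(candidates, key=score_thought, reverse=True)[:beam]
    return frontier[0]

best = tree_of_thoughts("task")
```

The improvement over plain prompting comes from exploring several intermediate steps and discarding weak ones, rather than committing to the first predicted continuation.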

ReAct is another interesting paper concerning improving the Reasoning abilities of LLM-based AI systems, while also making it better at planning and execution. [2210.03629] ReAct: Synergizing Reasoning and Acting in Language Models
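The ReAct pattern, again as a stubbed-out sketch rather than a faithful implementation, alternates a reasoning step (“thought”) with an action (such as a tool call) whose observation feeds the next round:

```python
def think(context):
    # Stand-in for an LLM reasoning prompt.
    return f"I should look up: {context}"

def act(thought):
    # Stand-in for a real tool (search engine, calculator, etc.).
    return f"observation for ({thought})"

def react_loop(question, max_turns=3):
    """Interleave thought and action, feeding each observation back in."""
    trace = [f"Question: {question}"]
    context = question
    for _ in range(max_turns):
        thought = think(context)
        observation = act(thought)
        trace.extend([f"Thought: {thought}", f"Observation: {observation}"])
        context = observation
    return trace

trace = react_loop("capital of France")
```

The key idea the paper explores is that verbalising the reasoning and grounding it in real observations improves both planning and execution, compared with either alone.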

Finally for now, [2303.11366] Reflexion: Language Agents with Verbal Reinforcement Learning

Reflexion agents verbally reflect on task feedback signals, then maintain their own reflective text in an episodic memory buffer to induce better decision-making in subsequent trials. Reflexion is flexible enough to incorporate various types (scalar values or free-form language) and sources (external or internally simulated) of feedback signals, and obtains significant improvements over a baseline agent across diverse tasks (sequential decision-making, coding, language reasoning). For example, Reflexion achieves a 91% pass@1 accuracy on the HumanEval coding benchmark, surpassing the previous state-of-the-art GPT-4 that achieves 80%.

Don’t be misled by the focus on immediate feedback; it is the AI’s ability to reflect on that feedback that is important, to recall it and consider it again in future tasks.
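In outline, and with the task and agent reduced to trivial placeholders, the Reflexion loop looks something like this. The “task” here (guessing a target number) is deliberately silly, purely to make the reflect-and-retry shape runnable:

```python
def reflexion_agent(target, max_trials=5):
    """After each failed trial, write a verbal reflection into an
    episodic memory buffer that informs every subsequent attempt."""
    memory = []          # episodic buffer of verbal reflections
    guess = 0
    for trial in range(max_trials):
        # A real agent would prompt an LLM with the task plus `memory`.
        if guess == target:
            return guess, memory
        memory.append(f"Trial {trial}: guessed {guess}, too low; try higher.")
        guess += 1       # behaviour "improved" by the reflection
    return guess, memory

answer, reflections = reflexion_agent(target=3)
```

The important piece is the memory buffer: the agent carries its own written lessons forward, rather than repeating the same mistake on every attempt.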

If this sort of information is stuff you’d like to see more of, do leave some feedback to let me know.

Usually I work hard to explain these sorts of limitations and ideas in very accessible, plain language, as per my posts above and elsewhere, in the hope that it is more digestible. If you prefer my usual plain-language, no-nonsense, more accessible posts, then do leave a comment or feedback to let me know that too (or instead).

NB: Please think carefully about your feedback voting. Don’t vote for the idea of in-depth scientific papers and research if you won’t actually have the time to read and use it, where my simplified answers were actually more useful in a world of limited time.
