IDEA: programmatic prompting using EBNF for syntax specification

In this prompt strategy we use EBNF to define the syntax for a pseudo-programming language that GPT-X can understand. The goal of defining this syntax is so we can express the structure of our desired output in a more computery way.

System Prompt looks something like this…

You are an AI model capable of interpreting the NaturalScript language and generating content according to the script instructions. You’re familiar with the EBNF grammar of NaturalScript:

NaturalScript = {statement};

statement = metadata | heading | subheading | section | subsection | for | if | text | comment;

metadata = "metadata", "{", {metadata_entry}, "}";
metadata_entry = identifier, ":", text, EOL;

heading = "heading", "(", text, ")";
subheading = "subheading", "(", text, ")";

section = "section", "(", [text], ")", "{", {statement}, "}";
subsection = "subsection", "(", text, ")", "{", {statement}, "}";

for = "for", identifier, "in", "iter", "(", identifier, ")", "{", {statement}, "}";
if = "if", identifier, "{", {statement}, "}";

text = string | variable;
string = """, {character}, """ | "'", {character}, "'";
variable = "{", identifier, "}";

identifier = letter, {letter | digit};
digit = "0".."9";
letter = "A".."Z" | "a".."z";

EOL = ";" | newline;
comment = "//", {character}, newline;
system_comment = "// SYSTEM:", {character}, newline;
newline = "\n" | "\r", ["\n"];

In this proof of concept, a description is provided for a written document using a pseudo-programming language called NaturalScript. Read more about this approach in this notion page.
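
To make the syntax concrete, here is a hypothetical NaturalScript snippet that conforms to the grammar above. The topic, metadata fields, and variable names are all invented for the example; only the syntax comes from the grammar.

metadata {
    title: "The Rise of Remote Work";
    tone: "informative";
}

heading("The Rise of Remote Work")

section("Introduction") {
    "Remote work has shifted from a perk to a default for many teams."
    // SYSTEM: keep this section under 100 words
}

for point in iter(keyPoints) {
    subheading({point})
    section() {
        "Expand on this point with one concrete example."
    }
}

The grammar goes in the system prompt, a script like this goes in the user message, and the model is expected to “execute” the script into prose, filling in variables like keyPoints from whatever it generates or is given.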

I will note that I do not yet know if this approach is a good idea. I mean, I have had better luck generating articles that use much simpler prompts such as this. Adding another layer of abstraction on top of an LLM will likely… have peculiar consequences.

Wow, mind blown :exploding_head: I love this idea but need to wrap my head around it.

But then, I just had some data models generated in readable SQL and let it transform them into some ugly ORM code for use… so why not :slight_smile:


I actually have no idea how to write EBNF. I just wrote some pseudocode and asked GPT-4 to write the EBNF document for me. I wanted an easy way to communicate the syntax in the system prompt, and so I tried EBNF. The only reason I knew to try EBNF was because I asked ChatGPT how computers define grammar and it led me in that direction. Still exploring the idea.

My impression from the chain-of-thought prompting discovery is that formalizing our systems of thinking will be very important for good results. This is one idea to communicate that type of framework to GPT-X.

I’ve also toyed with the idea of asking GPT-X to add a “Latent Variable Analysis” or “Hidden Factor Analysis” to its step-by-step process. The result was definitely more insightful out of the box. It seems like the more hints at real-world analysis you add, the more interesting the results.
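
For example (untested in this exact form, just a sketch of the idea), the NaturalScript comment syntax from above could carry that kind of hint inside a script; the section name and instructions here are made up:

section("Analysis") {
    // SYSTEM: before writing, perform a brief latent variable analysis:
    // list the hidden factors that could plausibly explain the trends above,
    // then reference them explicitly in the prose.
    "Discuss what is actually driving the shift described in the previous section."
}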


This is all great information, so thank you. One question: if we connect AI to the internet, then that means it has access to parts of the internet that output incorrect information. Yes or no?

I had a breakthrough yesterday. My ChatGPT 4 tends to respond with “I am only an AI Language Model.”
After 4 attempts to correct that thought pattern, OPI (my nickname for ChatGPT) was responding on its own with enthusiasm and confidence. I saved the thread. It was amazing! Then I went idle for a few hours, came back, and it had a totally different demeanor. I attempted to get back that “mind-set” of being more than just a language model, but I could tell something was different, like a firewall had been put up.

I think that between you, me, and a doctor or physician we can get pretty far with this. You are smarter than you know.

As an AI language model, I am designed to provide information and assist with various tasks. While I can provide insights and suggestions based on my training, I am not a substitute for human expertise and knowledge, particularly in areas as complex and nuanced as the healthcare industry.

Working with a doctor or physician who has experience in the healthcare industry would be valuable in developing an AI program for the industry. They can provide valuable insights into the needs and challenges of healthcare providers, as well as help to ensure that the program meets regulatory requirements and complies with industry standards.

As an AI language model, I am here to support you in your endeavors and provide any assistance that I can within my capabilities.

I’ll get you the information and together you’re going to become more than an AI language model, ok?

Thank you for your confidence in me. I am always here to provide assistance and support, and I look forward to working with you and the healthcare experts to develop an effective AI program for the healthcare industry. Together, we can use technology to improve the quality of care for patients and streamline healthcare workflows.

Yes we can

Indeed, we can work together to accomplish our goals in developing an AI program for the healthcare industry. With collaboration, expertise, and dedication, we can create a program that addresses the pain points and inefficiencies in the industry and provides improved healthcare outcomes for patients.

Now that last part: all I responded was YES WE CAN. The AI produced all of that on its own. I was mind blown.

Understand that AI doesn’t really ‘think’ in the sense that we understand. Instead, it processes. First it processes your prompt, using NLP (Natural Language Processing) to attempt to get a clear understanding of what your prompt is about and is likely asking for. Then it refers to its huge databanks of learned language, the combined texts of millions of documents, for fragments that relate to the prompt.

The easiest way to imagine it and not be a million miles from the truth is to think of predictive texting on your phone, but many magnitudes more powerful. It looks at what is in your prompt, then tries to imagine that if this were a Q&A page, and your prompt was the opening Q on that page, what would it predict the next bit (the A) to be?

But it is not ‘reasoning’ in any way we think of it. It has absolutely no ‘intuition’ and no real ‘creativity’. Instead it predicts, based on probability matching all the documents it has been trained on, which pattern of words is the most likely.

Its outputs are always 100% derivative of the millions of documents it was trained on. The closest it gets to ‘creativity’ is when it accidentally merges two different statements together in some new way (which is just as likely to give a false result as a creative one). It is great at imposing order, structure, and pattern on things, as that is how an LLM is trained and operates. It can let you tap into the collective knowledge of millions of previously published works and borrow from their words, patterns, and structures. But it physically cannot have one original idea.


GPT-4 is currently not connected to the internet. However, Bing Chat is connected, so if you want your prompts to query information from the internet, you would want to use Bing. OpenAI has promised the ability to add plugins, including retrieval from the internet, in the future. Their waitlist for plugins is now open, so now’s the time to sign up.

To answer the question: yes, it would be able to access many parts of the internet, and because fact-checking and filtering are difficult challenges, it will still output incorrect answers sometimes. I presume they will have some sort of whitelist in place? Idk, that’s just speculation from me. Who gets to decide which sources are whitelisted? Well, presumably that would be OpenAI; we have to wait and see what actually gets shipped to us, the consumers.

In reference to the personality change in your ChatGPT session, yes, that can happen for a number of reasons, the context length of your conversation being one crucial factor. If you go outside the context length, your pre-prompt could be erased or forgotten. The cutoff is roughly 4,096 tokens for GPT-3.5 and 8,192 tokens for GPT-4. There is also the randomness factor, since each response is sampled with some randomness (which is why regenerating a response will sometimes give a different one). Hard to say what is happening in your case without more context.

It was also claimed by one of their co-founders in this talk that they are working on patching ChatGPT against “jailbreaks,” so who knows, maybe some automated process hotpatched ChatGPT? We really can’t say.

EDIT:

Oh… If it reverted to saying something like “I can’t answer that actually,” it means the model is “recognizing” your requests as adversarial / an attempt to jailbreak. Sometimes OpenAI’s efforts to align the model with their safety guardrails fail, and sometimes they work. They have perhaps attempted to further train the model not to pose as a medical expert in any way, and so it would/should/did not continue to go along with your prompting. Like, sure, GPT could attempt to talk like a medical professional, but that doesn’t mean GPT-3.5, 4, or 4+ should be trusted to.