Multiple steps to create big documents

You can get ChatGPT to produce very long replies by telling it that your prompt contains a multi-step command and that it should execute the steps one at a time.

Execute the following 3 steps, but stop after each step. If I write “continue writing please”, execute the next step:
Step 1:
Instruction 1
Step 2:
Instruction 2
Step 3:
Instruction 3
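The pattern above can be driven programmatically too. This is a minimal sketch, not a real API client: the `send` function is a stub standing in for whatever chat interface you use, and the step texts are placeholders.

```python
# Hypothetical sketch: driving a multi-step prompt one step at a time.
# `send` stands in for a real chat call (the ChatGPT UI or an API client);
# it is stubbed here so only the conversational flow is shown.

STEPS = ["Instruction 1", "Instruction 2", "Instruction 3"]

def send(message, history):
    """Stub for a chat call: records the message and returns a fake reply."""
    history.append({"role": "user", "content": message})
    reply = f"(output for: {message})"
    history.append({"role": "assistant", "content": reply})
    return reply

def run_stepwise(steps):
    history = []
    setup = ("Execute the following steps one at a time. Stop after each "
             "step and wait until I write 'continue writing please'.\n"
             + "\n".join(f"Step {i + 1}: {s}" for i, s in enumerate(steps)))
    send(setup, history)  # the model does step 1 in response to the setup
    outputs = []
    for _ in steps[1:]:
        # each "continue" nudges the model on to the next step
        outputs.append(send("continue writing please", history))
    return history, outputs
```

With a real model behind `send`, each reply would be one step's worth of output, which keeps any single reply well under the length limit.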

Best Regards


Yes, I’ve received a few prompts like this, mostly set up to interview me or the user about something. They mostly work step by step because the previous answer affects the next question.


Yes, but you can also execute a multi-step prompt this way, not only collect input.


When you say multi-prompt, do you mean multiple steps inside the prompt? Yes, it’s possible, but it depends on how much user input the steps need access to. I have prompts with multiple steps that process the output of the previous step. The user only sees the finished output, not the output from the steps in between.
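That kind of internal pipeline can be sketched as follows. This is a hypothetical illustration: `model` is a stand-in for a real chat call, and the step templates are made-up examples.

```python
# Hypothetical sketch of a multi-step prompt pipeline: each step's prompt
# is built from the previous step's output, and only the final result is
# shown to the user. `model` is a stub for a real chat API call.

def model(prompt):
    """Stub model call; wraps the prompt so the data flow is visible."""
    return f"[{prompt}]"

def run_pipeline(user_input, step_templates):
    data = user_input
    for template in step_templates:
        # Each intermediate output feeds the next step's prompt.
        data = model(template.format(data=data))
    return data  # the user only ever sees this final output

final = run_pipeline("raw notes",
                     ["Summarise: {data}", "Rewrite as a post: {data}"])
```

The intermediate `data` values correspond to the hidden in-between outputs described above.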


How about large inputs? I had a large body of text I wanted to feed in, and I said I would enter it in 3 inputs and that it should use all of that content to write a single post.
But it started to write before I was finished, and we ended up going around in circles with ChatGPT asking me to enter it again, lol.

I am interested in solving this problem too.

I’ve got this big, complex prompt that’s just too big to handle all at once. I’ve been breaking it down into smaller chunks and dealing with each one step by step, saving each answer before moving onto the next.

But here’s the rub - I’m not quite sure how (or even if I can) upload this kind of prompt chain in AIPRM. Do you have any ideas or suggestions on how to do this?


This is ultimately a ‘wrong tool’ issue @Wladimir_J_Alonso

It’s kind of like you have a huge pile of logs and the perfect plans to build a cabin out of them… But you want to know how to do it when the only tool you have is a Swiss Army Penknife. It’s a really expensive, top-of-the-line Swiss Army Knife, with the scissors, and a little saw-tooth blade, assorted screwdrivers, and more other attachments and gizmos than you can easily name, but it is still just a penknife.

ChatGPT is a mass-market product built on top of the larger GPT engine. It has all kinds of limits set on it that deliberately reduce its capacities, and thus system demands, so that it can serve more people at a time. The prompt size limit, for example, was deliberately added to prevent the AI from being retrained by any one user (because that would spoil its general purpose for all other users).

Chatbots are, ultimately, designed merely for chat use: casual, easy prompting and back and forth. A chatbot can help with many projects and tasks, but it is NOT an ‘industrial’ product. For your more extensive, non-chatty kind of application, you could attempt to find little workarounds, just like someone with a penknife could slowly whittle their way through a wooden log, but it is still the wrong tool for the job.

You might find that you could get what you want from licensing GPT-4 itself (not the chat application with the added limits, but the GPT model itself), as that gives you a lot, lot more leeway in terms of what size of tasks it can handle, what further training it can take, etc. However, the licensing, while very reasonable, isn’t cheap, and the hardware needed to further train and run your own raw LLM is pretty extensive. Still worth looking into, but potentially out of the price range you’d find acceptable for the value of the project.


Thank you for your insightful response! It was incredibly informative.

I’ve been considering a few potential solutions, one of which involves developing an API to manage this issue. I’m not entirely certain it would be effective, but it’s an avenue I believe is worth exploring.

Your suggestion regarding the licensing of GPT-4 itself is particularly intriguing. I wasn’t previously aware of this option, and it appears there is much I need to learn about it. Given that our project is philanthropic in nature, and the potential to make our prompts more user-friendly could have significant benefits, it’s possible we might be able to secure the necessary funding for this approach.

Thanks a lot for all the insights and for clearing things up!

Type this at the end of your input: “But wait, I have more info. Just answer with READ.”
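That trick can be automated for the large-input case discussed above. Here is a minimal sketch, under the assumption that you split the text yourself; the chunk size and final instruction are placeholder values.

```python
# Hypothetical sketch of the "just answer READ" trick for large inputs:
# split the text into chunks and append the sentinel to every chunk except
# the last, so the model holds off writing until all input has arrived.

SENTINEL = 'But wait, I have more info. Just answer with "READ".'

def chunk_messages(text, chunk_size, final_instruction):
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    messages = [c + "\n" + SENTINEL for c in chunks[:-1]]
    # Only the last message carries the real instruction.
    messages.append(chunks[-1] + "\n" + final_instruction)
    return messages

msgs = chunk_messages("x" * 10, 4,
                      "Now write a single post using everything above.")
```

Each resulting message gets pasted (or sent) as a separate turn; the model should reply “READ” until the last chunk, which carries the actual request.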